0
votes

I have a fairly large dataset that I store in HDF5 and access using PyTables. One operation I need to do on this dataset is a pairwise comparison between each of the elements. This requires two loops: an outer loop to iterate over each element, and an inner loop to iterate over every other element. This operation thus looks at N(N-1)/2 comparisons.

For fairly small sets I found it to be faster to dump the contents into a multidimensional numpy array and then do my iteration. With large sets I run into memory issues and need to access each element of the dataset at run time.

Putting the elements into an array gives me about 600 comparisons per second, while operating on the HDF5 data itself gives me about 300 comparisons per second.

Is there a way to speed this process up?

Example follows (this is not my real code, just an example):

Small Set:

import numpy as np
import tables as tb

with tb.openFile(h5_file, 'r') as f:
    data = f.root.data

    N_elements = len(data)
    elements = np.empty((N_elements, int(1e5)))

    # copy every element into an in-memory array
    for ii, d in enumerate(data):
        elements[ii] = d['element']

D = np.empty((N_elements, N_elements))  
for ii in xrange(N_elements):
    for jj in xrange(ii+1, N_elements):             
        D[ii, jj] = compare(elements[ii], elements[jj])

Large Set:

with tb.openFile(h5_file, 'r') as f:
    data = f.root.data

    N_elements = len(data)        

    D = np.empty((N_elements, N_elements))  
    for ii in xrange(N_elements):
        for jj in xrange(ii+1, N_elements):             
            D[ii, jj] = compare(data['element'][ii], data['element'][jj])

1 Answer

0
votes

Two approaches I'd suggest here:

  1. numpy memmap: Create a memory-mapped array, put the data into it, and then run the code from "Small Set"; memory maps behave almost like in-memory arrays (a minimal sketch follows below the list).

  2. Use the multiprocessing module for parallel processing: if the "compare" method consumes at least a noticeable amount of CPU time, you could use more than one process.

Assuming you have more than one CPU core, this will speed things up significantly (a rough pipeline sketch follows after the list). Use

  • one process to read the data from the HDF5 file and put it into a queue,
  • one process to grab items from the queue, do the comparison, and put the result into an "output" queue,
  • one process to collect the results again.
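
A rough sketch of such a pipeline is given below; compare(), h5_file and the double loop are taken from your example, while the number of workers, the queue size and the assumption that compare() is a picklable module-level function are choices on my side:

import multiprocessing as mp
import numpy as np
import tables as tb

def reader(h5_file, task_queue, n_workers):
    # feed (ii, jj, element_ii, element_jj) pairs to the workers
    with tb.openFile(h5_file, 'r') as f:
        data = f.root.data
        N = len(data)
        for ii in xrange(N):
            e_ii = data[ii]['element']
            for jj in xrange(ii + 1, N):
                task_queue.put((ii, jj, e_ii, data[jj]['element']))
    for _ in xrange(n_workers):
        task_queue.put(None)                  # one stop signal per worker

def worker(task_queue, result_queue):
    # compare pairs until the stop signal arrives
    for ii, jj, a, b in iter(task_queue.get, None):
        result_queue.put((ii, jj, compare(a, b)))   # compare() as in your example
    result_queue.put(None)

if __name__ == '__main__':
    n_workers = 4                             # arbitrary choice
    tasks = mp.Queue(maxsize=1000)            # bounded, so the reader cannot run too far ahead
    results = mp.Queue()

    with tb.openFile(h5_file, 'r') as f:
        N_elements = len(f.root.data)

    mp.Process(target=reader, args=(h5_file, tasks, n_workers)).start()
    for _ in xrange(n_workers):
        mp.Process(target=worker, args=(tasks, results)).start()

    # the main process acts as the collector
    D = np.empty((N_elements, N_elements))
    finished = 0
    while finished < n_workers:
        item = results.get()
        if item is None:
            finished += 1
        else:
            ii, jj, value = item
            D[ii, jj] = value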
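
And for suggestion 1, a minimal memmap sketch, assuming the elements are float64 vectors of length 1e5 as in your example ('elements.dat' is just a placeholder filename):

import numpy as np
import tables as tb

with tb.openFile(h5_file, 'r') as f:
    data = f.root.data
    N_elements = len(data)

    # disk-backed array instead of an in-memory one
    elements = np.memmap('elements.dat', dtype='float64',
                         mode='w+', shape=(N_elements, int(1e5)))

    # copy the HDF5 rows into the memmap once
    for ii, d in enumerate(data):
        elements[ii] = d['element']
    elements.flush()

# 'elements' can now be used exactly like the in-memory array in "Small Set"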

Before choosing an approach: "Know your enemy", i.e., use profiling! Optimizations are only worth the effort if you improve the bottlenecks, so first find out which methods consume your precious CPU time.
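
For example, cProfile from the standard library will tell you how much of the total time is spent inside compare() and how much in reading from the HDF5 file; run_comparisons() here is just a placeholder for whatever function drives your double loop:

import cProfile
import pstats

cProfile.run('run_comparisons()', 'compare.prof')
pstats.Stats('compare.prof').sort_stats('cumulative').print_stats(10)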

Your algorithm is O(n^2), which does not scale well to large data. Do you see any chance to reduce this, e.g., by applying some logic? That is always the best approach.

Greetings,

Thorsten