4 votes

I have two numpy arrays, X and Y, with shapes (n,d) and (m,d), respectively. I want to compute the Euclidean distance between each row of X and each row of Y and store the results in an array Z with shape (n,m). I have two implementations for this. The first implementation uses two for loops as follows:

for i in range(n):
    for j in range(m):
        Z[i, j] = np.sqrt(np.sum(np.square(X[i] - Y[j])))

The second implementation uses only one loop, vectorizing over the rows of Y:

for i in range(n):
    Z[i] = np.sqrt(np.sum(np.square(X[i] - Y), axis=1))

When I run these on a particular X and Y, the first implementation takes nearly 30 seconds while the second takes nearly 60 seconds. I expected the second implementation to be faster since it uses vectorization. Why is it slower? I know that we can obtain faster implementations by fully vectorizing the code, but I don't understand why the second version (which is partially vectorized) is slower than the non-vectorized one.

Here is the complete code:

import time

import numpy as np

n, m, d = 5000, 500, 3000
X = np.random.rand(n, d)
Y = np.random.rand(m, d)
Z = np.zeros((n, m))

tic = time.time()
for i in range(n):
    for j in range(m):
        Z[i, j] = np.sqrt(np.sum(np.square(X[i] - Y[j])))
print('Elapsed time 1: ', time.time() - tic)

tic = time.time()
for i in range(n):
    Z[i] = np.sqrt(np.sum(np.square(X[i] - Y), axis=1))
print('Elapsed time 2: ', time.time() - tic)


tic = time.time()
train_squared = np.square(X).sum(axis=1).reshape((1, n))
test_squared = np.square(Y).sum(axis=1).reshape((m, 1))
test_train = -2 * np.matmul(Y, X.T)
dists = np.sqrt(test_train + train_squared + test_squared)  # shape (m, n)
print('Elapsed time 3: ', time.time() - tic)

And this is the output:

Elapsed time 1:  35.659096002578735
Elapsed time 2:  65.57051086425781
Elapsed time 3:  0.3912069797515869
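
(For reference, and not part of the timed code: dists has shape (m, n), i.e. it is the transpose of Z, so a quick sanity check along these lines should pass:

print(np.allclose(dists, Z.T))  # expected: True
)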
What are the typical values of n, m, d in your testing? – Divakar
@Divakar n=5000, m=500, d=3000 – Hossein
For me, first took 24 sec, second 20 sec. Share your benchmarking setup? – Divakar
@Divakar Even with the random data, I get the same result. I updated the question to include the complete code and outputs. – Hossein
@Hossein: The first version uses temporary arrays of shape (d,); the second version uses temporaries of shape (m, d), much bigger. I don't know what your fully vectorized solution looks like, so I don't know what size of temporaries that's producing. – user2357112 supports Monica

1 Answer

6 votes

I pulled apart your equations and reduced them down to this MVCE:

for i in range(n):
    for j in range(m):
        Y[j].copy()

for i in range(n):
    Y.copy()

The copy() here is just to simulate the subtraction from X. The subtraction itself should be quite cheap.
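
If you want to time this yourself, a self-contained version of the comparison might look like the following (assuming the same n, m, d as in the question; the absolute numbers will of course depend on your machine):

import time

import numpy as np

n, m, d = 5000, 500, 3000
Y = np.random.rand(m, d)

# Row-by-row copies: each iteration touches a single (d,) array,
# small enough to stay in the L1 cache.
tic = time.time()
for i in range(n):
    for j in range(m):
        Y[j].copy()
print('row copies:   ', time.time() - tic)

# Whole-array copies: each iteration produces a full (m, d) array,
# far larger than any CPU cache.
tic = time.time()
for i in range(n):
    Y.copy()
print('array copies: ', time.time() - tic)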

Here are the results on my computer:

  • The first one took 10 ms.
  • The second one took 13 s!

Both loops copy exactly the same amount of data. Using your choices n=5000, m=500, d=3000, each loop copies 60 gigabytes of data.
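
(That is, 5000 × 500 copies of 3000 doubles in the first loop, or 5000 copies of 500 × 3000 doubles in the second − either way 5000 × 500 × 3000 × 8 bytes = 6 × 10¹⁰ bytes = 60 GB.)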

To be honest, I'm not at all surprised that the second one took 13 seconds. That's already over 4 GB/s (60 GB / 13 s ≈ 4.6 GB/s), essentially the maximum bandwidth between my CPU and RAM (the speed of e.g. memcpy).

The really surprising thing is that the first test managed to copy 60 GB in only 0.01 seconds, which translates to 6 TB/s!

I'm pretty sure this is because the data isn't actually leaving the CPU at all. It's just bouncing back and forth between the CPU and the L1 cache: an array of 3000 double-precision numbers easily fits in a 32 KiB L1 cache.
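
(3000 doubles × 8 bytes = 24,000 bytes ≈ 23.4 KiB, comfortably under 32 KiB.)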

Therefore, I deduce that the main reason your second algorithm isn't as fast as one would naively expect is that processing a whole chunk of 500×3000 elements per iteration is very unfriendly to the CPU cache: it basically evicts the whole cache into RAM! In contrast, your first algorithm does take advantage of the cache to some extent, because the 3000 elements are still in cache by the time the sum gets computed, so there's not nearly as much data moving between your CPU and RAM. (Once you have the sum, the 3000-element array is "thrown away", which means it will probably just get overwritten in cache and never make it back to actual RAM.)
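
One way to see this effect (this is purely an illustration, not part of your original code) is to keep the one-loop structure but process Y in cache-sized blocks, so each temporary stays small. A sketch, reusing X, Y, Z, n, m and the imports from the question; the hypothetical block size would need tuning for your cache:

# block chosen so the (block, d) temporary is a few hundred KB,
# i.e. small enough to stay in a mid-level cache
block = 16

tic = time.time()
for i in range(n):
    for j0 in range(0, m, block):
        Yb = Y[j0:j0 + block]        # a view of Y, no copy
        diff = X[i] - Yb             # (block, d) temporary
        Z[i, j0:j0 + block] = np.sqrt(np.sum(np.square(diff), axis=1))
print('Elapsed time (blocked): ', time.time() - tic)

If the cache explanation above is right, this variant may land between your two timings, since the temporaries no longer blow the cache on every iteration.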


Naturally, the matrix multiplication approach is insanely fast, because your problem is essentially of the form:

C[i, j] = ∑[k] f(A[i, k], B[j, k])

If you replace f(x, y) with x * y, you can see it's just a variant of matrix multiplication. The operation f is not terribly important here − what matters is how the indices behave in this equation, which determines how your arrays are traversed in memory. The essence of matrix multiplication algorithms lies in their ability to cope with this kind of array access through blocking, so in principle the overall algorithm does not change dramatically even for a user-defined f. Unfortunately, in practice very few libraries support user-defined operations, so you have to use the trick (X - Y)**2 = X**2 - 2 X Y + Y**2 as you have done. But it gets the job done :D
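
For completeness, a minimal sketch of that trick wrapped in a function (assuming only NumPy; the np.maximum(..., 0) clamp is my addition, guarding against tiny negative values that floating-point cancellation can produce before the square root):

import numpy as np

def pairwise_dists(X, Y):
    """Euclidean distances between the rows of X (n, d) and the rows of Y (m, d)."""
    X_sq = np.square(X).sum(axis=1)[:, None]   # (n, 1)
    Y_sq = np.square(Y).sum(axis=1)[None, :]   # (1, m)
    cross = X @ Y.T                            # (n, m)
    # clamp at 0 before sqrt: cancellation can leave tiny negatives
    return np.sqrt(np.maximum(X_sq - 2 * cross + Y_sq, 0))

This returns the (n, m) array directly (your "Elapsed time 3" block builds the transposed (m, n) version), and scipy.spatial.distance.cdist(X, Y) is another ready-made option if SciPy is available.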