36 votes

I want to compute the pairwise squared distances of a batch of features in TensorFlow. I have a simple implementation using + and * operations, by tiling the original tensor:

import tensorflow as tf

def pairwise_l2_norm2(x, y, scope=None):
    with tf.name_scope(scope, 'pairwise_l2_norm2', [x, y]):
        size_x = tf.shape(x)[0]
        size_y = tf.shape(y)[0]

        # Replicate x along a new trailing axis: (m, d) -> (m, d, n).
        xx = tf.expand_dims(x, -1)
        xx = tf.tile(xx, tf.stack([1, 1, size_y]))

        # Replicate y the same way, then transpose so the copies line up: (n, d) -> (m, d, n).
        yy = tf.expand_dims(y, -1)
        yy = tf.tile(yy, tf.stack([1, 1, size_x]))
        yy = tf.transpose(yy, perm=[2, 1, 0])

        diff = tf.subtract(xx, yy)
        square_diff = tf.square(diff)

        # Sum over the feature dimension: the result has shape (m, n).
        square_dist = tf.reduce_sum(square_diff, 1)

        return square_dist

This function takes as input two matrices of sizes (m, d) and (n, d) and computes the squared distance between each pair of row vectors. The output is a matrix of size (m, n) with elements d_ij = dist(x_i, y_j).
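For example, a minimal usage sketch (arbitrary small sizes, TF 1.x session style):

import numpy as np
import tensorflow as tf

m, n, d = 4, 3, 2  # small sizes just for illustration
x = tf.constant(np.random.rand(m, d), dtype=tf.float32)
y = tf.constant(np.random.rand(n, d), dtype=tf.float32)

dist = pairwise_l2_norm2(x, y)
with tf.Session() as sess:
    print(sess.run(dist).shape)  # (4, 3)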

The problem is that I have a large batch and high-dimensional features (large m, n, d), and replicating the tensor consumes a lot of memory. I'm looking for another way to implement this that doesn't increase memory usage and only stores the final distance tensor, as if double-looping over the original tensor.

4
It's not clear that your code is doing "pairwise distance of a batch of features". Can you specify the function you want more formally? Also, have you considered tf.squared_difference? - keveman
I updated the question to explain this. If you put a batch of features as input to this function, it should compute the distances between its rows. - jrabary

4 Answers

67 votes

You can use some linear algebra to turn it into matrix ops. Note that what you need is a matrix D where, with a[i] the i-th row of your original matrix,

D[i,j] = (a[i]-a[j])(a[i]-a[j])'

Expanding the product gives a[i]a[i]' - 2 a[i]a[j]' + a[j]a[j]', so you can rewrite that as

D[i,j] = r[i] - 2 a[i]a[j]' + r[j]

where r[i] is the squared norm of the i-th row of the original matrix.

In a system that supports standard broadcasting rules, you can treat r as a column vector and write D as

D = r - 2 A A' + r'

In TensorFlow you could write this as

import tensorflow as tf

A = tf.constant([[1, 1], [2, 2], [3, 3]])
r = tf.reduce_sum(A*A, 1)  # squared norm of each row

# turn r into a column vector
r = tf.reshape(r, [-1, 1])
D = r - 2*tf.matmul(A, tf.transpose(A)) + tf.transpose(r)
sess = tf.Session()
sess.run(D)

Result:

array([[0, 2, 8],
       [2, 0, 2],
       [8, 2, 0]], dtype=int32)
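One caveat worth adding (a general floating-point note, not part of the derivation above): with float inputs, cancellation in r - 2 A A' + r' can leave tiny negative values, so if you take a square root afterwards it's common to clamp first. A minimal sketch, same A as above but float:

A = tf.constant([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
r = tf.reduce_sum(A * A, 1)
r = tf.reshape(r, [-1, 1])
D = r - 2 * tf.matmul(A, tf.transpose(A)) + tf.transpose(r)
D = tf.maximum(D, 0.0)  # clamp small negatives caused by floating-point cancellation
dist = tf.sqrt(D)       # Euclidean distance, if that's what you need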

19 votes

Using squared_difference:

def squared_dist(A):
    expanded_a = tf.expand_dims(A, 1)  # shape (m, 1, d)
    expanded_b = tf.expand_dims(A, 0)  # shape (1, m, d)
    # Broadcasting yields an (m, m, d) intermediate, reduced to (m, m).
    distances = tf.reduce_sum(tf.squared_difference(expanded_a, expanded_b), 2)
    return distances

One thing I noticed is that this solution using tf.squared_difference gives me an out-of-memory (OOM) error for very large vectors, while the approach by @YaroslavBulatov doesn't. So decomposing the operation into matrix products seems to yield a smaller memory footprint, even though I had assumed squared_difference would handle this better under the hood.
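To make that concrete, a back-of-the-envelope comparison (hypothetical sizes, float32, not a benchmark):

m, n, d = 10000, 10000, 256  # hypothetical sizes
bytes_per_float = 4

# Broadcasting materializes an (m, n, d) intermediate before the reduce_sum.
broadcast_bytes = m * n * d * bytes_per_float   # ~95 GiB
# The matmul-based approach never stores anything larger than the (m, n) result.
matmul_bytes = m * n * bytes_per_float          # ~0.4 GiB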

8 votes

Here is a more general solution for two tensors of coordinates A and B:

def squared_dist(A, B):
  # Only the feature dimension has to match; A and B may have
  # different numbers of rows.
  assert A.shape.as_list()[-1] == B.shape.as_list()[-1]

  row_norms_A = tf.reduce_sum(tf.square(A), axis=1)
  row_norms_A = tf.reshape(row_norms_A, [-1, 1])  # Column vector.

  row_norms_B = tf.reduce_sum(tf.square(B), axis=1)
  row_norms_B = tf.reshape(row_norms_B, [1, -1])  # Row vector.

  return row_norms_A - 2 * tf.matmul(A, tf.transpose(B)) + row_norms_B

Note that this is the squared distance. If you want the Euclidean distance, take a tf.sqrt of the result; in that case, don't forget to add a small constant to compensate for floating-point instabilities: dist = tf.sqrt(squared_dist(A, B) + 1e-6).
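For example (arbitrary values; note the row counts of A and B differ):

A = tf.constant([[0.0, 0.0], [1.0, 1.0]])              # shape (2, 2)
B = tf.constant([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])  # shape (3, 2)

sq = squared_dist(A, B)       # shape (2, 3)
dist = tf.sqrt(sq + 1e-6)     # Euclidean version, as suggested above
with tf.Session() as sess:
    print(sess.run(sq))       # [[1. 1. 8.], [1. 1. 2.]]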

0 votes

If you want to compute it with a different method, you can change the order of the tf operations, e.g. computing the distances one row at a time:

def compute_euclidean_distance(x, y):
    # Requires statically known shapes for the Python-level loop.
    size_x = x.shape.as_list()[0]
    size_y = y.shape.as_list()[0]
    for i in range(size_x):
        # Repeat row i of x so it can be compared against every row of y.
        tile_one = tf.reshape(tf.tile(x[i], [size_y]), [size_y, -1])
        eu_one = tf.expand_dims(tf.sqrt(tf.reduce_sum(tf.pow(tf.subtract(tile_one, y), 2), axis=1)), axis=0)
        if i == 0:
            d = eu_one
        else:
            d = tf.concat([d, eu_one], axis=0)
    return d
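A usage sketch (arbitrary values; the shapes must be statically known for the Python-level loop to work):

x = tf.constant([[0.0, 0.0], [3.0, 4.0]])               # shape (2, 2)
y = tf.constant([[0.0, 0.0], [6.0, 8.0], [3.0, 4.0]])   # shape (3, 2)

d = compute_euclidean_distance(x, y)  # shape (2, 3), Euclidean (not squared)
with tf.Session() as sess:
    print(sess.run(d))  # [[ 0. 10.  5.], [ 5.  5.  0.]]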