2 votes

I have coded a neural network that returns a list of 3 numbers for every input sample. These values are then subtracted from the actual values to get the difference.

For example,

actual    = [1,2,3]  
predicted = [0,0,1]  
diff      = [1,2,2] 

So my tensor now has the shape [batch_size, 3]. What I want to do is iterate over the tensor's elements to construct my loss function.
For instance, if my batch_size is 2 and

diff = [[a,b,c],[d,e,f]] 

I want the loss to be

Loss = mean(sqrt(a^2+b^2+c^2), sqrt(d^2+e^2+f^2))  

I know that TensorFlow has a tf.nn.l2_loss() function that computes the L2 loss of an entire tensor. But what I want is the mean of the L2 losses of a tensor's elements along some axis.
How do I go about doing this?

Here's a pro tip: build your loss function on dummy data using NumPy functions like np.mean and np.sum, which take an axis argument to operate over batches, e.g. np.mean(array, axis=1). Then swap in the equivalent TensorFlow functions, as in Franck's answer. – vega
Will keep in mind. Thanks! – phoenixwing
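The tip above can be sketched on dummy data; the array values here are made up for illustration:

```python
import numpy as np

# Hypothetical diff batch of shape [batch_size, 3]
diff = np.array([[1.0, 2.0, 2.0],
                 [3.0, 4.0, 0.0]])

# Per-sample Euclidean norm: sum the squares along axis 1, then sqrt
per_sample = np.sqrt(np.sum(diff ** 2, axis=1))  # [3.0, 5.0]

# Mean over the batch
loss = np.mean(per_sample)  # 4.0
```

Once this does what you expect, np.sum and np.mean map directly onto their TensorFlow counterparts.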

1 Answer

1 vote

You can use tf.square, tf.reduce_sum, tf.sqrt, and tf.reduce_mean, in that order: square the elements, sum within each sample, take the square root, then average over the batch. Both tf.reduce_sum and tf.reduce_mean have an axis argument that indicates which dimensions to reduce.
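A minimal sketch of this chain, using a made-up diff tensor with the question's [batch_size, 3] shape:

```python
import tensorflow as tf

# Hypothetical diff tensor of shape [batch_size, 3]
diff = tf.constant([[1.0, 2.0, 2.0],
                    [3.0, 4.0, 0.0]])

# Square each element, sum along axis 1 to get per-sample
# sums of squares, then sqrt for the per-sample L2 norms
per_sample_l2 = tf.sqrt(tf.reduce_sum(tf.square(diff), axis=1))

# Average the per-sample norms over the batch dimension
loss = tf.reduce_mean(per_sample_l2)
```

For the example values this gives mean(sqrt(1+4+4), sqrt(9+16+0)) = mean(3, 5) = 4.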

For more reduction operations, see https://www.tensorflow.org/api_guides/python/math_ops#Reduction