I'm trying to write a custom loss function in Keras for a CNN I'm working on. y_true and y_pred will both be tensors of grayscale images, so I expect a shape of [a, x, y, 1], where x and y are the dimensions of my images and a is the batch size.
The plan is to:
- Threshold each image of Y_true by its mean pixel intensity
- Use the non-zero elements of this mask to get an array of pixel values from Y_true and Y_pred
- Measure the cosine similarity (using the built-in Keras loss function) of these arrays and return the average result of the batch as the loss
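The first two steps of the plan can be sketched with plain TensorFlow ops (a minimal sketch, assuming TF 2.x and the [a, x, y, 1] shape above; the toy batch is hypothetical stand-in data):

```python
import tensorflow as tf

# Toy batch standing in for y_true: shape [batch, height, width, 1].
images = tf.random.uniform([4, 8, 8, 1])

# Step 1: per-image mean intensity, kept 4-D so it broadcasts back
# against the images when comparing.
means = tf.reduce_mean(images, axis=[1, 2, 3], keepdims=True)

# Step 2: boolean mask of pixels above their own image's mean.
mask = images > means
print(mask.shape)  # (4, 8, 8, 1)
```

Because the mean is computed per image (axes 1-3 reduced, batch axis kept), each image in the batch is thresholded independently without any Python loop.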
My main question is: how can I implement this process efficiently?
Does the built-in cosine_similarity function work on 1D tensors?
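From a quick experiment (assuming TF 2.x and tf.keras), cosine_similarity does accept 1-D tensors, reducing over the last axis; note that the Keras version returns the *negative* cosine similarity so it can be minimized as a loss:

```python
import tensorflow as tf

# Two 1-D tensors of pixel values (hypothetical example data);
# b points in the same direction as a.
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([2.0, 4.0, 6.0])

# Identical directions give a cosine similarity of 1, which Keras
# reports as -1 (negated so that minimizing the loss maximizes similarity).
sim = tf.keras.losses.cosine_similarity(a, b)
print(float(sim))  # ≈ -1.0
```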
I know I should avoid Python for loops for efficiency, but a loop is the only way I can think of to implement this. Is there a more efficient way to implement this function using the Keras backend or NumPy?
EDIT
Here is a basic implementation, and an unexpected error when compiling the model with this function:
def masked_cosine_similarity(y_true, y_pred):
    loss = 0
    for i in range(y_true.shape[0]):
        true_y = y_true[i, :, :, 0]
        pred_y = y_pred[i, :, :, 0]
        mask = true_y > np.mean(true_y)
        elements = np.nonzero(mask)
        true_vals = np.array([true_y[x, y] for x, y in zip(elements[0], elements[1])])
        pred_vals = np.array([pred_y[x, y] for x, y in zip(elements[0], elements[1])])
        loss += cosine_similarity(true_vals, pred_vals)
    return loss / y_true.shape[0]
Error message:
64 loss = 0
---> 65 for i in range(y_true.shape[0]):
66 true_y = y_true[i,:,:,0]
67 pred_y = y_pred[i,:,:,0]
TypeError: 'NoneType' object cannot be interpreted as an integer
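The error occurs because at graph-construction time the batch dimension of y_true is None (unknown), so range(y_true.shape[0]) receives None; NumPy calls like np.mean also cannot run on symbolic tensors. One loop-free workaround (a sketch, assuming TF 2.x and tf.keras): since zeroing the *same* positions in both vectors changes neither the dot product nor the norms, multiplying both tensors by the mask gives the same cosine similarity as gathering the masked pixels.

```python
import tensorflow as tf

def masked_cosine_similarity(y_true, y_pred):
    # Per-image mean over spatial/channel axes, kept 4-D for broadcasting.
    means = tf.reduce_mean(y_true, axis=[1, 2, 3], keepdims=True)
    mask = tf.cast(y_true > means, y_true.dtype)

    # Zero out pixels outside the mask in both tensors. Zeroed positions
    # contribute nothing to the dot product or the norms, so this equals
    # the cosine similarity of only the masked pixels.
    true_flat = tf.reshape(y_true * mask, [tf.shape(y_true)[0], -1])
    pred_flat = tf.reshape(y_pred * mask, [tf.shape(y_pred)[0], -1])

    # Keras returns the negative cosine similarity; average over the batch.
    return tf.reduce_mean(
        tf.keras.losses.cosine_similarity(true_flat, pred_flat, axis=-1))
```

Using tf.shape (a runtime op) instead of the static .shape attribute is what makes this safe when the batch size is None.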