
I am trying to write a custom loss function for a U-Net in Keras. The objective is to compute not only the mean squared error (MSE) between the predicted image and the true image, but also the MSE between their gradients.
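
Concretely, with weights lambda1 and lambda2, and grad_x/grad_y denoting forward differences along the two image axes, the loss I am after is:

loss = lambda1 * MSE(y_true, y_pred)
     + lambda2 * MSE(grad_x(y_true), grad_x(y_pred))
     + lambda2 * MSE(grad_y(y_true), grad_y(y_pred))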

I am not sure if this is normal, but the shape of y_true in my custom loss function is (None, None, None, None), even though I expect y_true to have the same shape as y_pred, which in my case should be (batch_size, 128, 256, 3).

I have listed the code I wrote for the custom loss function below, and I would truly appreciate any suggestions.

import tensorflow.keras.backend as K
# Encouraging the predicted image to match the label not only in image domain, but also in gradient domain
def keras_customized_loss(batch_size, lambda1 = 1.0, lambda2 = 0.05):
    def grad_x(image):
        out = K.zeros((batch_size,)+image.shape[1:4])
        out = K.abs(image[0:batch_size, 1:, :, :] - image[0:batch_size, :-1, :, :])
        return out

    def grad_y(image):
        out = K.zeros((batch_size,)+image.shape[1:4])
        out = K.abs(image[0:batch_size, :, 1:, :] - image[0:batch_size, :, :-1, :])
        return out

    # NOTE: y_true currently has shape (None, None, None, None); figure out how to solve this
    def compute_loss(y_true, y_pred):
        pred_grad_x = grad_x(y_pred)
        pred_grad_y = grad_y(y_pred)
        true_grad_x = grad_x(y_true)
        true_grad_y = grad_y(y_true)
        loss1 = K.mean(K.square(y_pred-y_true)) 
        loss2 = K.mean(K.square(pred_grad_x-true_grad_x))
        loss3 = K.mean(K.square(pred_grad_y-true_grad_y))

        return (lambda1*loss1+lambda2*loss2+lambda2*loss3)

    return compute_loss

model.compile(optimizer='adam', loss = keras_customized_loss(BATCH_SIZE), metrics=['MeanAbsoluteError'])

1 Answer


None means the dimension accepts variable sizes, so your custom loss can be very flexible.

The real shape will naturally be the shape of the batch you pass to fit. If your data has shape (samples, 128, 256, 3), you have nothing to worry about.
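
If you want to convince yourself of this, here is a minimal sketch (debug_loss is just a hypothetical name) that prints the runtime shape from inside a loss:

import tensorflow as tf

def debug_loss(y_true, y_pred):
    # At graph-construction time the static shape of y_true is
    # (None, None, None, None); at run time tf.shape returns the
    # concrete batch shape, e.g. (batch_size, 128, 256, 3)
    tf.print("runtime shape of y_true:", tf.shape(y_true))
    return tf.reduce_mean(tf.square(y_pred - y_true))

model.compile(optimizer='adam', loss=debug_loss)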

But you have a lot of unnecessary things in your code; you can simply use:

import tensorflow.keras.backend as K

def keras_customized_loss(lambda1=1.0, lambda2=0.05):
    def grad_x(image):
        # forward difference along axis 1 (works for any batch size)
        return K.abs(image[:, 1:] - image[:, :-1])

    def grad_y(image):
        # forward difference along axis 2
        return K.abs(image[:, :, 1:] - image[:, :, :-1])

    def compute_loss(y_true, y_pred):
        pred_grad_x = grad_x(y_pred)
        pred_grad_y = grad_y(y_pred)
        true_grad_x = grad_x(y_true)
        true_grad_y = grad_y(y_true)
        loss1 = K.mean(K.square(y_pred - y_true))             # MSE in the image domain
        loss2 = K.mean(K.square(pred_grad_x - true_grad_x))   # MSE of the x-gradients
        loss3 = K.mean(K.square(pred_grad_y - true_grad_y))   # MSE of the y-gradients

        return lambda1*loss1 + lambda2*loss2 + lambda2*loss3

    return compute_loss
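
Since batch_size is no longer needed, compiling becomes simply:

model.compile(optimizer='adam', loss=keras_customized_loss(), metrics=['MeanAbsoluteError'])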