3 votes
from keras import backend as K

def charbonnier(I_x, I_y, I_t, U, V, e):
    # Charbonnier penalty on the optical-flow constraint U*I_x + V*I_y + I_t
    loss = K.sqrt(K.pow((U * I_x + V * I_y + I_t), 2) + e)
    return K.sum(loss)

I would like to use this cost function and optimise it for U and V. I am currently struggling to get it working with Keras, since Keras loss functions can only have the form f(y_true, y_pred).


My model is completely unsupervised and I have no ground truth. I_x, I_y and I_t are constants, and the goal of the model is to learn the U and V which minimise E(F). So my question is: what is the correct way to implement this kind of loss function (one that does not have the form f(y_true, y_pred)) in Keras?


3 Answers

2 votes

Define the loss function as follows:

def charbonnier(I_x, I_y, I_t, U, V, e):
    # The outer function closes over the constants; the inner function
    # has the f(y_true, y_pred) signature that Keras expects.
    def loss_fun(y_true, y_pred):
        loss = K.sqrt(K.pow((U * I_x + V * I_y + I_t), 2) + e)
        return K.sum(loss)
    return loss_fun
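
A hypothetical usage sketch (model, X, and dummy_targets are illustrative names, not from the question). Note that loss_fun ignores y_true and y_pred, so U and V must be tensors that depend on the model's weights for any gradient to flow:

# Illustrative wiring of the wrapped loss (all names are assumptions).
model.compile(optimizer='adam',
              loss=charbonnier(I_x, I_y, I_t, U, V, e))
# y_true is never read by loss_fun, so any array with the output
# shape works as a dummy target.
model.fit(X, dummy_targets, epochs=10)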

0 votes

I think you may need to modify the code to fit Keras's convention. If the network is unsupervised, you can set y_true == y_pred by passing the input as the target, as autoencoders do.
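
A minimal sketch of that idea with a toy autoencoder (the layer sizes and names here are illustrative, not from the question):

import numpy as np
import tensorflow as tf

# Illustrative autoencoder: the input itself serves as the target,
# so the loss compares y_pred against the original input.
inp = tf.keras.layers.Input(shape=(64,))
encoded = tf.keras.layers.Dense(16, activation='relu')(inp)
decoded = tf.keras.layers.Dense(64)(encoded)
autoencoder = tf.keras.models.Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

X = np.random.rand(100, 64).astype('float32')
autoencoder.fit(X, X, epochs=5)  # pass the input as y_true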

0 votes

As you have discovered, Keras losses used in the tf.keras.models.Model.compile method are geared towards supervised learning. For unsupervised learning (excluding self-supervised learning such as VAEs and SimCLR), a good solution would be to add the loss directly as a tensor to your model. For example,

import tensorflow as tf

inp = tf.keras.layers.Input(...)
u = ...(inp)
v = ...(inp)
model = tf.keras.models.Model(inp, [u, v])

# Build the loss from the model's own output tensors (u, v) and the
# constant image gradients, then attach it with add_loss.
charbonnier_loss_tensor = tf.reduce_sum(tf.sqrt(tf.pow((u*I_x + v*I_y + I_t), 2) + e))
model.add_loss(charbonnier_loss_tensor)
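
With the loss attached via add_loss, you compile without a loss argument and fit without targets. A hypothetical end-to-end sketch, assuming I_x, I_y, I_t and e are precomputed constants and `images` is your input array:

# After add_loss, compile with no `loss` argument; the attached tensor
# is minimised directly, and fit() needs no targets.
model.compile(optimizer='adam')
model.fit(images, epochs=10)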