4 votes

I tried writing a custom binary cross-entropy loss function. This is my script:

from keras import backend as K

def my_custom_loss(y_true, y_pred):
    # Element-wise binary cross-entropy, -(y*log(p) + (1-y)*log(1-p)), averaged over the batch
    t_loss = -(y_true * K.log(y_pred) + (1 - y_true) * K.log(1 - y_pred))
    return K.mean(t_loss)

When I run my script with this loss function, the loss becomes NaN after a few iterations.

I then looked at the TensorFlow documentation and modified the loss function to the following:

t_loss = K.maximum(y_pred, 0.) - y_pred * y_true + K.log(1. + K.exp(-K.abs(y_pred)))

The code runs without any issue. I would like to know if someone could explain why my first loss function produces NaN.

Binary Cross-Entropy: -(y * log(p) + (1 - y) * log(1 - p))

I use a sigmoid activation on my last layer, so the value of 'p' should be between 0 and 1, and the log should be defined over that range.
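For reference, here is a minimal stand-alone reproduction in plain NumPy (the numbers are made up, but the effect is the same as in my model):

import numpy as np

# A single confident, correct prediction: the sigmoid output rounds to exactly 1.0.
y_true = 1.0
logit = 40.0
y_pred = 1.0 / (1.0 + np.exp(-logit))   # == 1.0 exactly in floating point

t_loss = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(np.log(1 - y_pred))   # -inf, because 1 - y_pred == 0
print(t_loss)               # 0 * (-inf) -> nan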

Thank you.


1 Answer

6 votes

A naive implementation of binary cross-entropy runs into numerical problems as soon as the predicted probability reaches 0 or 1 (or falls outside that range), e.g. log(0) -> -inf, which shows up as NaN in the loss. The formula you posted is a reformulation that ensures numerical stability and avoids overflow. The following derivation is taken from the documentation of tf.nn.sigmoid_cross_entropy_with_logits.

z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + log(1 + exp(-x))
= x - x * z + log(1 + exp(-x))
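As a quick numerical sanity check of the algebra (a small NumPy sketch with an arbitrary logit and label, not part of TensorFlow):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x, z = 2.5, 1.0   # arbitrary logit and label

naive = z * -np.log(sigmoid(x)) + (1 - z) * -np.log(1 - sigmoid(x))
stable = x - x * z + np.log(1 + np.exp(-x))
print(naive, stable)   # both ~0.0789: the two forms agree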

For x < 0, to avoid overflow in exp(-x), we reformulate the above

x - x * z + log(1 + exp(-x))
= log(exp(x)) - x * z + log(1 + exp(-x))
= - x * z + log(1 + exp(x))
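The overflow being avoided is easy to trigger (again a NumPy sketch with a made-up value):

import numpy as np

x, z = np.float32(-100.0), np.float32(1.0)   # large negative logit, positive label

# Direct form: exp(-x) = exp(100) overflows float32 to inf, so the whole expression is inf.
print(x - x * z + np.log(np.float32(1) + np.exp(-x)))   # inf

# Reformulated form for x < 0: stays finite (~100, a confidently wrong prediction).
print(-x * z + np.log(np.float32(1) + np.exp(x)))       # 100.0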

And the implementation uses the equivalent form:

max(x, 0) - x * z + log(1 + exp(-abs(x)))
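Written as a Keras backend loss, a minimal sketch of this stable form (the function name is mine; it assumes y_pred carries raw logits, i.e. the last layer has no sigmoid activation, which is also what tf.nn.sigmoid_cross_entropy_with_logits expects):

from keras import backend as K

def stable_bce_with_logits(y_true, y_pred):
    # max(x, 0) - x * z + log(1 + exp(-|x|)), element-wise, with x the logits and z the labels
    t_loss = K.maximum(y_pred, 0.) - y_pred * y_true + K.log(1. + K.exp(-K.abs(y_pred)))
    return K.mean(t_loss)

Up to the final K.mean reduction, the element-wise values should match what tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=y_pred) computes.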