Did anyone find a convincing solution to make a custom binary_crossentropy work?
I have tried every method I could think of (even making the whole training set the same size as the batch size, to eliminate any dependence on global averaging during batch-wise processing), but I still see a significant difference between my binary cross-entropy implementation and the one from Keras (specified with loss = 'binary_crossentropy').
My custom binary cross-entropy code is as follows:
from keras import backend as K

_EPSILON = K.epsilon()  # assuming _EPSILON is the backend fuzz factor

def _loss_tensor(y_true, y_pred):
    # Element-wise binary cross-entropy, averaged over the whole batch tensor
    y_pred = K.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred)
    return -K.mean(out)

def _loss_tensor2(y_true, y_pred):
    # Element-wise binary cross-entropy, returned without any averaging
    y_pred = K.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
    out = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred))
    return out

def _loss_tensor3(y_true, y_pred):
    # Delegate directly to the backend implementation
    loss1 = K.binary_crossentropy(y_true, y_pred)
    return loss1
None of these methods works. It does not work even if I apply K.mean() before returning the result from the custom loss function.
I am not able to understand what is special about using loss = 'binary_crossentropy'. When I use my custom loss function, training performs poorly and does not work as expected.
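If I understand the Keras source correctly, the built-in loss does not reduce everything to a single scalar; it takes a mean over the last axis only, so one loss value per sample remains and Keras averages over the batch afterwards. A minimal sketch of that, assuming the Keras 2 argument order of K.binary_crossentropy (it was reversed in Keras 1):

    from keras import backend as K

    def custom_binary_crossentropy(y_true, y_pred):
        # Element-wise cross-entropy from the backend, then a mean over the
        # last axis only, leaving one loss value per sample.
        return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)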
I need a custom loss function so that I can manipulate the loss depending on the error and penalize a certain type of classification error more heavily.
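For context, what I have in mind is roughly the sketch below; the function name and the weight values are just placeholders for illustration:

    from keras import backend as K

    _EPSILON = K.epsilon()

    def weighted_binary_crossentropy(y_true, y_pred):
        # Hypothetical example: penalize false negatives (missed positives)
        # more heavily than false positives.
        false_neg_weight = 2.0  # scales the -y_true * log(y_pred) term
        false_pos_weight = 1.0  # scales the -(1 - y_true) * log(1 - y_pred) term
        y_pred = K.clip(y_pred, _EPSILON, 1.0 - _EPSILON)
        loss = -(false_neg_weight * y_true * K.log(y_pred)
                 + false_pos_weight * (1.0 - y_true) * K.log(1.0 - y_pred))
        return K.mean(loss, axis=-1)

which I would then pass to model.compile(loss=weighted_binary_crossentropy, optimizer=...).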