I'm using sigmoid and binary_crossentropy for multi-label classification. A very similar question was asked here, and the following custom metric was suggested:
from keras import backend as K

def full_multi_label_metric(y_true, y_pred):
    comp = K.equal(y_true, K.round(y_pred))
    return K.cast(K.all(comp, axis=-1), K.floatx())
But I do not want to use all(), because for a single sample with a true label of [1, 0, 0, 1, 1] and a predicted label of [0, 0, 0, 1, 1] I do not consider the prediction accuracy to be zero (the labels for the last four classes have been predicted correctly).
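For that example I would expect a per-element accuracy of 4/5 = 0.8. A minimal NumPy sketch of the comparison I have in mind (illustrative only, not the actual metric):

import numpy as np

y_true = np.array([1, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1])

# Compare each label position individually instead of requiring
# the whole label vector to match exactly
per_element_accuracy = np.mean(y_true == y_pred)
print(per_element_accuracy)  # 0.8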
Here is my model:
# expected input data shape: (batch_size, timesteps, data_dim)
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense
from keras.optimizers import SGD

model = Sequential()
model.add(Masking(mask_value=-9999, input_shape=(197, 203)))
model.add(LSTM(512, return_sequences=True))
model.add(Dense(20, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-3, decay=1e-4, momentum=0.9, nesterov=True),
              metrics=['accuracy'])
print(model.summary())
Here is my y_pred for one example:
pred = model.predict(X_test)
y_pred = pred[0,196,:]
y_pred
array([2.6081860e-01, 9.9079555e-01, 1.4816311e-01, 8.6009043e-01,
2.6759505e-04, 3.0792636e-01, 2.6738405e-02, 8.5339689e-01,
5.1105350e-02, 1.5427300e-01, 6.7039116e-05, 1.7909735e-02,
6.4140558e-04, 3.5133284e-01, 5.3054303e-02, 1.2765944e-01,
2.9298663e-04, 6.3041472e-01, 5.8620870e-03, 5.9656668e-01],
dtype=float32)
Here is my y_true for one example:
y_true = Y_test[0,0,:]
y_true
array([1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 1., 0., 0.,
0., 0., 1.])
My question is: how can I define a Keras custom metric function so that each element of y_pred is compared to the corresponding element of y_true, giving a per-element accuracy during training? I want to use this metric in metrics=[X] when compiling the model.
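For reference, I imagine the metric would look something like the following (a rough, untested sketch that replaces K.all with K.mean so that every label position counts individually), but I am not sure whether this is the right approach or how it interacts with the Masking layer:

from keras import backend as K

def element_wise_accuracy(y_true, y_pred):
    # Round the sigmoid outputs to 0/1, compare each label position
    # to the ground truth, and average the matches per sample
    comp = K.equal(y_true, K.round(y_pred))
    return K.mean(K.cast(comp, K.floatx()), axis=-1)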