Let's say I have 3 classes, and each sample can belong to any combination of those classes (multi-label). The labels look like this:
[
[1 0 0]
[0 1 0]
[0 0 1]
[1 1 0]
[1 0 1]
[0 1 1]
[1 1 1]
]
I set my output layer as Dense(3, activation="sigmoid") and compiled with optimizer="adam", loss="binary_crossentropy". I get 0.05 for loss and 0.98 for accuracy, according to the Keras training output.
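For reference, here is a minimal sketch of my setup. The hidden layer size, the feature dimension (10), and the random features are placeholders I made up for this post, not my real data or architecture:

```python
import numpy as np
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense

# The 7 multi-label rows from the example above
labels = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
])
features = np.random.rand(7, 10)  # placeholder features; 10 is arbitrary

model = Sequential([
    Input(shape=(10,)),
    Dense(16, activation="relu"),    # illustrative hidden layer
    Dense(3, activation="sigmoid"),  # one independent sigmoid per class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, verbose=0)
```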
I thought I would get only 1s and 0s as prediction values if I use sigmoid with binary_crossentropy. However, model.predict(training_features) gave me values between 0 and 1, like 0.0026. I have tried all 4 combinations of categorical_crossentropy/binary_crossentropy with sigmoid/softmax. model.predict always returns values between 0 and 1 with shape (n_samples, n_classes), which would be 7x3 in the example above.
Then I thresholded the values at 0.5 as below and checked accuracy_score(training_labels, preds). The score dropped to 0.1.
preds[preds>=0.5] = 1
preds[preds<0.5] = 0
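To make that check reproducible, here is a self-contained version of what I ran; the probability values below are made up for illustration, not my model's actual outputs:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Made-up raw sigmoid outputs and their true labels
probs = np.array([[0.9, 0.1, 0.2],
                  [0.3, 0.8, 0.1],
                  [0.6, 0.7, 0.4]])
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 1, 0]])

# Same effect as the two thresholding lines above
preds = (probs >= 0.5).astype(int)

# Note: for multi-label input, sklearn's accuracy_score is subset accuracy,
# i.e. a row only counts as correct if ALL 3 labels match exactly
print(accuracy_score(labels, preds))  # prints 1.0 for this made-up example
```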
I'd appreciate it if someone could give me some guidance on how I should approach this problem.
Thanks!