I am using ImageDataGenerator(validation_split) with flow_from_directory(subset) to build my training and validation sets, so each set gets its own generator.
After training, model.evaluate() on the validation generator reports about 75% accuracy. However, when I compute accuracy from model.predict() on that same validation generator, it falls to about 1%.
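For reference, the split setup can be sketched with in-memory arrays via ImageDataGenerator.flow, which accepts the same validation_split/subset arguments as my flow_from_directory call (the array shapes and class count here are hypothetical stand-ins):

```python
import numpy as np
import tensorflow as tf

# Hypothetical in-memory data standing in for the image directory
x = np.random.rand(100, 32, 32, 3).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 4, 100), 4)

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255, validation_split=0.2)

# subset='training' / 'validation' carve an 80/20 split out of the same data
train_gen = datagen.flow(x, y, batch_size=16, subset='training')
val_gen = datagen.flow(x, y, batch_size=16, subset='validation', shuffle=False)

print(train_gen.n, val_gen.n)  # 80 20
```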
The model is a multiclass CNN compiled with categorical cross-entropy loss and the accuracy metric, which should default to categorical accuracy for this loss. (Edit: I changed it to categorical_accuracy explicitly anyway.)
# Compile
learning_rate = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=initial_lr,
    decay_steps=steps,
    end_learning_rate=end_lr)

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate),
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
# Validation set evaluation
val_loss, val_accuracy = model.evaluate(val_generator,
                                        steps=int(val_size/bs)+1)
print('Accuracy: {}'.format(val_accuracy))
# Validation set predict
y_val = val_generator.classes
pred = model.predict(val_generator,
                     verbose=1,
                     steps=int(val_size/bs)+1)
accuracy_TTA = np.mean(np.equal(y_val, np.argmax(pred, axis=-1)))
print('Accuracy: {}'.format(accuracy_TTA))
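The manual accuracy computation above is only meaningful if the row order of pred matches val_generator.classes; note that flow_from_directory shuffles batches by default, so the validation generator needs shuffle=False for the two to stay aligned. A minimal numpy sketch with hypothetical labels and softmax outputs shows the computation when the orders do line up:

```python
import numpy as np

# Hypothetical labels and softmax outputs for 5 samples / 3 classes
y_val = np.array([0, 2, 1, 1, 0])
pred = np.array([
    [0.8, 0.1, 0.1],   # argmax -> 0
    [0.2, 0.2, 0.6],   # argmax -> 2
    [0.1, 0.7, 0.2],   # argmax -> 1
    [0.3, 0.4, 0.3],   # argmax -> 1
    [0.6, 0.3, 0.1],   # argmax -> 0
])

# Row i of pred corresponds to label i, so the comparison is valid
acc = np.mean(np.equal(y_val, np.argmax(pred, axis=-1)))
print(acc)  # 1.0

# If the generator reshuffles, the same predictions scored against the
# unshuffled y_val no longer correspond row-for-row and accuracy collapses
rng = np.random.default_rng(0)
print(np.mean(np.equal(y_val, np.argmax(rng.permutation(pred), axis=-1))))
```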
Comments:
- strider0160: … model.compile() statement, in particular, the losses and metrics.
- M Z: … y_val and the images used in the model.predict line up correctly?