I have a deep network of only fully connected/dense layers with the shape 128-256-512-1024-1024. All layers use LeakyReLU activation, there is no dropout, and the final layer uses a softmax activation.
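For reference, here is a minimal Keras sketch of that architecture as I understand it (the input dimension and number of output classes are assumptions, since the post doesn't state them):

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784    # assumed; not stated in the post
num_classes = 10   # assumed; not stated in the post

model = keras.Sequential([layers.Input(shape=(input_dim,))])
for units in (128, 256, 512, 1024, 1024):
    model.add(layers.Dense(units))
    model.add(layers.LeakyReLU())  # LeakyReLU after every dense layer, no dropout
model.add(layers.Dense(num_classes, activation='softmax'))  # final softmax layer
```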
During training, after the 20th epoch, the validation/test loss reverses and starts going up, yet the test accuracy continues to increase. How does this make sense? And would the test accuracy hold up on genuinely new data, or is some kind of false positive going on here?
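To make the question concrete, here is a small NumPy illustration (with made-up probabilities, not my actual outputs) showing that accuracy and cross-entropy loss can move in the same direction: if the model gets more samples right but becomes very confident on the few it gets wrong, the mean loss rises even as accuracy rises.

```python
import numpy as np

def mean_xent(p_true):
    # Mean cross-entropy given the probability assigned to the true class
    return float(-np.mean(np.log(p_true)))

def accuracy(p_true):
    # Two-class case: a prediction counts as correct when the
    # probability assigned to the true class exceeds 0.5
    return float(np.mean(np.array(p_true) > 0.5))

# Probability the model assigns to the TRUE class for 4 samples (illustrative)
epoch_a = [0.6, 0.6, 0.4, 0.4]    # 2/4 correct, all predictions mild
epoch_b = [0.9, 0.9, 0.55, 0.02]  # 3/4 correct, one very confident mistake

print(accuracy(epoch_a), round(mean_xent(epoch_a), 3))
print(accuracy(epoch_b), round(mean_xent(epoch_b), 3))
# Accuracy goes up (0.5 -> 0.75) while the mean loss also goes up,
# driven by the single overconfident wrong prediction.
```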
I compiled the model like so:
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['categorical_accuracy']
)
Graphs of my train/test accuracy and loss curves:
Edit:
This may help. It's the true labels plotted against the predicted labels for the last epoch: