I have a neural network that does image segmentation. I trained it for ~100 epochs. Currently the validation loss is constant (0.2 +/- 0.03) and the training loss is still decreasing (currently 0.07), but very slowly.
The network's results look quite good. What does this mean? Is it overfitting? Should I stop the training?
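If the answer is "stop", this is roughly the early-stopping check I would add (a minimal PyTorch-style sketch; `model`, `train_one_epoch`, `val_loader`, and `loss_fn` are placeholders standing in for my actual setup):

```python
import copy

import torch

def validate(model, val_loader, loss_fn, device="cpu"):
    """Average validation loss over the whole validation set."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for images, masks in val_loader:
            preds = model(images.to(device))
            total += loss_fn(preds, masks.to(device)).item() * images.size(0)
            n += images.size(0)
    return total / n

def train_with_early_stopping(model, train_one_epoch, val_loader, loss_fn,
                              max_epochs=100, patience=10):
    """Stop when validation loss hasn't improved for `patience` epochs."""
    best_loss, best_state, epochs_without_improvement = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)               # placeholder for my training loop
        val_loss = validate(model, val_loader, loss_fn)
        if val_loss < best_loss - 1e-4:      # small tolerance against noise
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break                            # validation loss has plateaued
    if best_state is not None:
        model.load_state_dict(best_state)    # restore the best checkpoint
    return best_loss
```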
I currently use dropout in the first layer only (50%). Would it make sense to add dropout to every layer (there are ~15 layers)? Or should I add L2 regularization instead? Does it make sense to use both L2 and dropout?
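To make the question concrete, here is a sketch of the kind of change I mean (PyTorch; the architecture below is a simplified stand-in for my actual ~15-layer segmentation network, and the layer sizes are placeholders):

```python
import torch
import torch.nn as nn

# Simplified stand-in for my segmentation network: dropout after every
# conv block instead of only after the first one.
class SegNetSketch(nn.Module):
    def __init__(self, in_channels=3, num_classes=2, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),   # dropout after the first block (my current setup)
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),   # ...and after every later block (the proposed change)
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        return self.classifier(self.features(x))

model = SegNetSketch()

# L2 regularization on top of dropout: in PyTorch this is the optimizer's
# weight_decay parameter.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```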
Thank you very much!