
In Andrew Ng's machine learning course it is recommended that you plot the learning curve (training set size vs cost) to determine if your model has a high bias or variance.

However, I am training my model using Tensorflow and see that my validation loss is increasing while my training loss is decreasing. It's my understanding that this means my model is overfitting and so I have high variance. Is there still a reason to plot the learning curve?


1 Answer


Yes, there is, though not only for spotting overfitting. A plot is ultimately just a convenient way to look at numbers, and it can surface insights that the raw values hide. Note that the learning curve Ng describes varies the training set size, whereas monitoring training and validation loss during training varies the number of epochs; if you are already tracking both losses simultaneously, you are looking at much the same data, just sliced along a different axis.
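To make the distinction concrete, here is a minimal numpy sketch of the Ng-style learning curve (error vs. training set size). It uses a synthetic linear regression problem rather than the asker's TensorFlow model, so all names and data here are illustrative assumptions; the point is only the shape of the curves: with few examples the training error is low and the validation error is high (high variance), and the gap narrows as the training set grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: y = X w + noise (illustrative stand-in
# for whatever model/data the asker actually has).
n_total, n_features = 200, 5
X = rng.normal(size=(n_total, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.5 * rng.normal(size=n_total)

X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

def mse(w, Xs, ys):
    return float(np.mean((Xs @ w - ys) ** 2))

sizes = [10, 25, 50, 100, 150]
train_errs, val_errs = [], []
for m in sizes:
    # Least-squares fit on the first m training examples only.
    w, *_ = np.linalg.lstsq(X_train[:m], y_train[:m], rcond=None)
    train_errs.append(mse(w, X_train[:m], y_train[:m]))
    val_errs.append(mse(w, X_val, y_val))

for m, tr, va in zip(sizes, train_errs, val_errs):
    print(f"m={m:3d}  train MSE={tr:.3f}  val MSE={va:.3f}")
```

Plotting `sizes` against `train_errs` and `val_errs` (e.g. with matplotlib) gives the learning curve; a large persistent gap between the two curves is the high-variance signature, while two curves that converge at a high error level suggest high bias.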

Regarding Andrew Ng's framing, I suggest looking into his Deep Learning course as well, where he clarifies that in modern applications (deep learning with lots of data, which I believe is your case) bias is not simply the opposite of variance, and it is often possible to reduce both at once.