In Andrew Ng's machine learning course, it is recommended that you plot the learning curve (cost as a function of training set size) to determine whether your model has high bias or high variance.
However, I am training my model using TensorFlow, and I can see that my validation loss is increasing while my training loss is decreasing. My understanding is that this means my model is overfitting, and therefore has high variance. Is there still a reason to plot the learning curve?
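For reference, here is roughly what I understand the course's learning curve to be. This is just a self-contained NumPy sketch with synthetic data (the data, model, and sizes are made up for illustration, and it is not my actual TensorFlow code): train on the first m examples for increasing m, and record the cost on those m examples and on a fixed validation set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data (purely illustrative)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.5, size=n)

X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

sizes = list(range(d + 1, len(X_train) + 1, 10))
train_costs, val_costs = [], []
for m in sizes:
    # Fit least squares on only the first m training examples
    w, *_ = np.linalg.lstsq(X_train[:m], y_train[:m], rcond=None)
    train_costs.append(mse(w, X_train[:m], y_train[:m]))
    val_costs.append(mse(w, X_val, y_val))

# As I understand the diagnosis: high variance shows up as a large gap
# (low training cost, high validation cost) that narrows as m grows;
# high bias shows both curves plateauing at a high cost.
```

Plotting `train_costs` and `val_costs` against `sizes` would give the curve from the course.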