
I've implemented the neural network using TensorFlow. During implementation and training, I found several not-so-trivial bugs. For example, during training I got the same mini-batch loss for different steps/epochs but different accuracy.

Now the neural network seems to be ready and working properly. I haven't managed to train it well yet, but I am working on it.

Anyway, I would like to check somehow that I haven't made any computational errors there. I am thinking about generating artificial data for a "fake" classification problem with, let's say, 4 features. The classification should have a very clear, human-understandable dependency between the classification output and the 4 features. The idea is to train the NN on it and see how it performs; a sketch of what I mean is below.
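
For example, one way to build such a dataset in NumPy (the linear rule and the weights here are just an illustration, not a fixed recipe):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 4
X = rng.uniform(-1.0, 1.0, size=(n_samples, n_features)).astype(np.float32)

# Hypothetical rule: class 1 iff a fixed weighted sum of the features
# is positive. The weights are arbitrary; any simple rule would do.
w = np.array([1.0, -2.0, 0.5, 3.0], dtype=np.float32)
y = (X @ w > 0).astype(np.int64)

# A correctly implemented network should reach near-100% accuracy on
# this quickly; if it stalls near the base rate (~50% here), something
# in the computation is likely off.
print(X.shape, y.mean())  # class balance should be close to 0.5
```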

What do you think?


1 Answer


Stanford's CS231n has a couple of general tips for this, like gradient checking.
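
A gradient check compares the analytic gradients from your backprop against central-difference estimates. A minimal NumPy sketch (the toy loss in the usage example is a placeholder, not CS231n's actual code):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Central-difference estimate of the gradient of f at x."""
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        old = x[idx]
        x[idx] = old + h
        f_plus = f(x)
        x[idx] = old - h
        f_minus = f(x)
        x[idx] = old  # restore the original value
        grad[idx] = (f_plus - f_minus) / (2 * h)
    return grad

# Example: check d/dW of a tiny loss f(W) = sum(W ** 2), whose
# analytic gradient is 2 * W.
W = np.random.randn(3, 4)
analytic = 2 * W
numeric = numerical_gradient(lambda w: np.sum(w ** 2), W)
rel_err = np.max(np.abs(analytic - numeric) /
                 np.maximum(1e-8, np.abs(analytic) + np.abs(numeric)))
print(rel_err)  # a correct gradient gives a relative error well below 1e-6
```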

If you're just learning neural networks, why not run your implementation on some known data? Many courses provide error and loss curves from models with specified hyperparameters, so you can check whether your implementation's behavior differs significantly from a correct one.
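
If you don't have such curves handy, one way to produce a reference is to train a stock Keras model with fixed hyperparameters and compare its loss curve against your own implementation trained with the same settings. A rough sketch, assuming tf.keras is available (the architecture and hyperparameters here are arbitrary choices, not from any course):

```python
import tensorflow as tf

# Reference run on MNIST; train your own implementation with the same
# architecture and hyperparameters and compare the loss per epoch.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(x_train, y_train, batch_size=64, epochs=5)
print(history.history["loss"])  # the baseline curve to compare against
```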