8
votes

My program hits this only sometimes (not every run), but once it happens I can always reproduce the error by loading the last model I saved before the program crashed due to NaN. When rerunning from that checkpoint, the first training step seems fine: the model produces the loss without problems (I have printed the loss and it looks normal), but after applying gradients, the values of the embedding variables turn to NaN.

So what is the root cause of the NaN problem? I am confused about how to debug this further, since the same program with the same data and parameters will mostly run fine and only hits this problem on some runs.

Loading existing model from: /home/gezi/temp/image-caption//model.flickr.rnn2.nan/model.ckpt-18000
Train from restored model: /home/gezi/temp/image-caption//model.flickr.rnn2.nan/model.ckpt-18000
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:245] PoolAllocator: After 5235 get requests, put_count=4729 evicted_count=1000 eviction_rate=0.211461 and unsatisfied allocation rate=0.306781
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:257] Raising pool_size_limit_ from 100 to 110
2016-10-04 21:45:39 epoch:1.87 train_step:18001 duration:0.947 elapsed:0.947 train_avg_metrics:['loss:0.527']  ['loss:0.527']
2016-10-04 21:45:39 epoch:1.87 eval_step: 18001 duration:0.001 elapsed:0.948 ratio:0.001
W tensorflow/core/framework/op_kernel.cc:968] Invalid argument: Nan in summary histogram for: rnn/HistogramSummary_1
     [[Node: rnn/HistogramSummary_1 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/HistogramSummary_1/tag, rnn/image_text_sim/image_mlp/w_h/read/_309)]]
W tensorflow/core/framework/op_kernel.cc:968] Invalid argument: Nan in summary histogram for: rnn/HistogramSummary_1
     [[Node: rnn/HistogramSummary_1 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/HistogramSummary_1/tag, rnn/image_text_sim/image_mlp/w_h/read/_309)]]
W tensorflow/core/framework/op_kernel.cc:968] Invalid argument: Nan in summary histogram for: rnn/HistogramSummary_1
     [[Node: rnn/HistogramSummary_1 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/HistogramSummary_1/tag, rnn/image_text_sim/image_mlp/w_h/read/_309)]]
W tensorflow/core/framework/op_kernel.cc:968] Invalid argument: Nan in summary histogram for: rnn/HistogramSummary_1
     [[Node: rnn/HistogramSummary_1 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/HistogramSummary_1/tag, rnn/image_text_sim/image_mlp/w_h/read/_309)]]
W tensorflow/core/framework/op_kernel.cc:968] Invalid argument: Nan in summary histogram for: rnn/HistogramSummary_1
     [[Node: rnn/HistogramSummary_1 = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/HistogramSummary_1/tag, rnn/image_text_sim/image_mlp/w_h/read/_309)]]
Traceback (most recent call last):
  File "./train.py", line 308, in <module>
    tf.app.run()

6 Answers

16
votes

Sometimes, during the initial iterations of training, the model may spew out only a single prediction class. If, by random chance, that class turns out to be 0 for all the training examples, the categorical cross-entropy loss can produce a NaN value.

Make sure you introduce a small value when computing the loss, such as tf.log(predictions + 1e-8). This will help overcome the numerical instability.
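
A minimal sketch of that idea, with illustrative names (labels, predictions, and epsilon are not from the original post): adding a small epsilon keeps the log away from log(0) = -inf, which is what turns into NaN in the gradients.

import tensorflow as tf

def stable_categorical_cross_entropy(labels, predictions, epsilon=1e-8):
    # labels: one-hot [batch, num_classes]; predictions: softmax probabilities.
    # The epsilon keeps log() away from log(0). The original answer uses the
    # TF 1.x name tf.log; tf.math.log is the TF 2.x equivalent.
    return -tf.reduce_mean(
        tf.reduce_sum(labels * tf.math.log(predictions + epsilon), axis=-1))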

10
votes

Usually NaN is a sign of model instability, for example exploding gradients. It may go unnoticed; the loss would just stop shrinking. Logging weight summaries makes the problem explicit. I suggest reducing the learning rate as a first measure. If that doesn't help, post your code here; without seeing it, it's hard to suggest anything more specific.
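
A minimal sketch along those lines, using a toy Keras model (the model, loss, and clip_norm are illustrative): lower the learning rate first; gradient clipping is shown as an additional common safeguard against exploding gradients, not something this answer explicitly prescribes.

import tensorflow as tf

# Toy model standing in for the RNN in the question.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
loss_fn = tf.keras.losses.MeanSquaredError()

# First measure: a smaller learning rate.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-4)

@tf.function
def train_step(x, y, clip_norm=5.0):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    # Clip the global gradient norm to tame exploding gradients.
    grads, _ = tf.clip_by_global_norm(grads, clip_norm)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

loss = train_step(tf.random.normal([32, 8]), tf.random.normal([32, 1]))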

4
votes

I got a similar error and tried different learning rates, batch sizes, loss functions, and model architectures without any luck. But then I noticed that I could train my model just fine when not using the TensorBoard callback. It looks like "Nan in summary histogram" refers to saving the model weight histograms, which simply makes those NaNs explicit. Turning off histograms in the TensorBoard callback solved the issue for me: tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=0)
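
A short, self-contained sketch of where that callback plugs in, using toy data and a hypothetical log directory; the only point is that histogram_freq=0 keeps scalar logging but skips the weight histograms that raise "Nan in summary histogram".

import numpy as np
import tensorflow as tf

# Toy data and model, just to show where the callback goes.
x_train = np.random.rand(256, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(256, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# histogram_freq=0 disables the weight-histogram summaries.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1",
                                                histogram_freq=0)
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_cb])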

1
vote

I had a similar problem; in my case, changing the activation from tf.nn.relu to tf.nn.sigmoid fixed it. I hope this helps.
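
A minimal sketch of that swap, with a hypothetical dense layer (the shapes and layer sizes are illustrative); only the activation argument changes.

import tensorflow as tf

inputs = tf.keras.Input(shape=(128,))
# before: hidden = tf.keras.layers.Dense(64, activation=tf.nn.relu)(inputs)
hidden = tf.keras.layers.Dense(64, activation=tf.nn.sigmoid)(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(hidden)
model = tf.keras.Model(inputs, outputs)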

0
votes

I believe it has something to do with your system running out of memory. This especially seems to be the problem if you get the error after a certain number of steps.

Setting train to false in batch_norm (within your pipeline.config file) appears to overcome this problem.

It should look something like this:

batch_norm {
  decay: 0.999
  center: true
  scale: true
  epsilon: 0.001
  train: false
}

Delete the training directory (logdir) and start training again. Resuming from a recent checkpoint will result in the same error.

Hope this helped.

0
votes

If you are using tensorflow.keras.layers.Masking and one or more input features happen to be masked for all inputs in a batch, then you can get this error.

Similar to najeeb khan's case, but triggered differently.

This makes sense because when TensorFlow calls _log_weights from on_epoch_end, some weights related to the input features that were always masked are still NaN.

For me, the solution was to explicitly load the weights (via tensorflow.keras.models.load_model).
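
A short sketch of that workaround, assuming a hypothetical path to a previously saved model; the point is simply to reload the weights through tensorflow.keras.models.load_model rather than continuing with the in-memory model.

import tensorflow as tf

# Hypothetical path to a model saved earlier with model.save(...).
model = tf.keras.models.load_model("saved_models/my_model")
model.summary()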