I am running the Google Cloud Machine Learning beta and using its hyperparameter tuning (hypertune) setup with TensorFlow.
In some of the trials the loss becomes NaN, which crashes that trial's computation and in turn stops the whole hyperparameter tuning job:
Error reported to Coordinator: <class 'tensorflow.python.framework.errors.InvalidArgumentError'>,
Nan in summary histogram for: softmax_linear/HistogramSummary
[[Node: softmax_linear/HistogramSummary = HistogramSummary[T=DT_FLOAT, _device="/job:master/replica:0/task:0/cpu:0"]
(softmax_linear/HistogramSummary/tag, softmax_linear/softmax_linear)]]
Caused by op u'softmax_linear/HistogramSummary', defined at:
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
What is the canonical way of handling these NaNs? Should I protect the loss function?
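By "protect" I mean something like clipping the softmax output before taking the log, so the loss can never produce -inf or NaN. A minimal sketch of the idea, where logits and labels stand in for my actual tensors and epsilon is an arbitrary small constant:

    import tensorflow as tf

    def protected_cross_entropy(logits, labels, epsilon=1e-10):
        # Clip the softmax probabilities away from 0 and 1 so tf.log
        # never returns -inf, which is how NaNs get into the loss.
        probs = tf.clip_by_value(tf.nn.softmax(logits), epsilon, 1.0 - epsilon)
        return tf.reduce_mean(
            -tf.reduce_sum(labels * tf.log(probs), reduction_indices=[1]))

Is that the right approach, or is there a built-in mechanism for this?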
Thanks