I am following the flowers tutorial for retraining Inception on Google Cloud ML. I can run the tutorial, train, and predict just fine.
I then replaced the flowers dataset with a test dataset of my own: optical character recognition of digit images.
When training the model I receive the error:
Invalid argument: Received a label value of 13 which is outside the valid range of [0, 6). Label values: 6 3 2 7 3 7 6 6 12 6 5 2 3 6 8 8 8 8 4 6 5 13 7 4 8 12 5 2 4 12 12 8 8 8 12 6 4 2 12 4 3 8 2 6 8 12 2 8 4 6 2 4 12 5 5 7 6 2 2 3 2 8 2 5 2 8 2 7 4 12 8 4 2 4 8 2 2 8 2 8 7 6 8 3 5 5 5 8 8 2 5 3 9 8 5 8 3 2 5 4
The format of training and eval datasets looks like this:
root@e925cd9502c0:~/MeerkatReader/cloudML# head training_dataGCS.csv
gs://api-project-773889352370-ml/TrainingData/0_2.jpg,H
gs://api-project-773889352370-ml/TrainingData/0_4.jpg,One
gs://api-project-773889352370-ml/TrainingData/0_5.jpg,Five
The dict file looks like this:
$ cat cloudML/dict.txt
Eight
F
Five
Forward_slash
Four
H
Nine
One
Seven
Six
Three
Two
Zero
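Since the error implies a class-count mismatch, here is a quick sanity check I ran on the dict file itself (a debugging sketch using standard shell tools; the path matches the listing above):

```shell
# Sanity-check the dict: expect 13 entries and no duplicates.
if [ -f cloudML/dict.txt ]; then
  wc -l < cloudML/dict.txt          # number of label entries
  sort cloudML/dict.txt | uniq -d   # prints any duplicated labels (expect nothing)
fi
```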
I originally had labels like 1, 2, 3, 4 and /, but I changed them to word strings in case the special characters (especially /) were the problem. I can see a somewhat similar message here, but that one was about 0-based indexing.
What's strange about the message is that there are indeed 13 label types, yet somehow TensorFlow expects only 6 (the half-open range [0, 6) covers labels 0 through 5). My question is: what kind of formatting error might make TensorFlow think there are fewer labels than there are? I can confirm that both sides of the 80-20 train/test split contain all the label classes (though in different frequencies).
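To rule out a mismatch between the CSV labels and the dict, I also cross-checked them with a short script (my own debugging sketch, assuming the CSV and dict formats shown above; the file paths are the local copies from the listings):

```python
import csv
import os

def load_csv_labels(csv_path):
    """Return the set of distinct label strings from a CSV of (uri, label) rows."""
    with open(csv_path) as f:
        return {row[1] for row in csv.reader(f) if len(row) >= 2}

def load_dict_labels(dict_path):
    """Return dict.txt entries, one label per line, blank lines skipped."""
    with open(dict_path) as f:
        return [line.strip() for line in f if line.strip()]

if os.path.exists("training_dataGCS.csv") and os.path.exists("cloudML/dict.txt"):
    csv_labels = load_csv_labels("training_dataGCS.csv")
    dict_labels = load_dict_labels("cloudML/dict.txt")
    print("distinct labels in CSV :", len(csv_labels))
    print("entries in dict.txt    :", len(dict_labels))
    print("in CSV but not in dict :", sorted(csv_labels - set(dict_labels)))
```

Both counts come out as 13 for me, which is why the [0, 6) range in the error is so confusing.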
I'm running from a recent docker build provided by Google:
`docker run -it -p "127.0.0.1:8080:8080" --entrypoint=/bin/bash gcr.io/cloud-datalab/datalab:local-20161227`
I submit the training job using:
# Submit training job.
gcloud beta ml jobs submit training "$JOB_ID" \
--module-name trainer.task \
--package-path trainer \
--staging-bucket "$BUCKET" \
--region us-central1 \
-- \
--output_path "${GCS_PATH}/training" \
--eval_data_paths "${GCS_PATH}/preproc/eval*" \
--train_data_paths "${GCS_PATH}/preproc/train*"
Full error:
Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Received a label value of 13 which is outside the valid range of [0, 6). Label values: 6 3 2 7 3 7 6 6 12 6 5 2 3 6 8 8 8 8 4 6 5 13 7 4 8 12 5 2 4 12 12 8 8 8 12 6 4 2 12 4 3 8 2 6 8 12 2 8 4 6 2 4 12 5 5 7 6 2 2 3 2 8 2 5 2 8 2 7 4 12 8 4 2 4 8 2 2 8 2 8 7 6 8 3 5 5 5 8 8 2 5 3 9 8 5 8 3 2 5 4
     [[Node: evaluate/xentropy/xentropy = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:master/replica:0/task:0/cpu:0"](final_ops/input/Wx_plus_b/fully_connected_1/BiasAdd, inputs/Squeeze)]]
Caused by op u'evaluate/xentropy/xentropy', defined at:
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 545, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 43, in run
    sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 308, in main
    run(model, argv)
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 439, in run
    dispatch(args, model, cluster, task)
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 480, in dispatch
    Trainer(args, model, cluster, task).run_training()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 187, in run_training
    self.args.batch_size)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 278, in build_train_graph
    return self.build_graph(data_paths, batch_size, GraphMod.TRAIN)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 256, in build_graph
    loss_value = loss(logits, labels)
  File "/root/.local/lib/python2.7/site-packages/trainer/model.py", line 396, in loss
    logits, labels, name='xentropy')
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1544, in sparse_softmax_cross_entropy_with_logits
    precise_logits, labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 2376, in _sparse_softmax_cross_entropy_with_logits
    features=features, labels=labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2238, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1130, in __init__
    self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Received a label value of 13 which is outside the valid range of [0, 6). Label values: 6 3 2 7 3 7 6 6 12 6 5 2 3 6 8 8 8 8 4 6 5 13 7 4 8 12 5 2 4 12 12 8 8 8 12 6 4 2 12 4 3 8 2 6 8 12 2 8 4 6 2 4 12 5 5 7 6 2 2 3 2 8 2 5 2 8 2 7 4 12 8 4 2 4 8 2 2 8 2 8 7 6 8 3 5 5 5 8 8 2 5 3 9 8 5 8 3 2 5 4
     [[Node: evaluate/xentropy/xentropy = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:master/replica:0/task:0/cpu:0"](final_ops/input/Wx_plus_b/fully_connected_1/BiasAdd, inputs/Squeeze)]]
Everything looks okay in my bucket, and my log events are being saved.