2
votes

Batch prediction on the Google ML Engine is throwing an exception.

I have a TensorFlow model on the Google ML Engine that runs inference on an image. The input is not a normal array, though: the model only accepts the b64-encoded string of the image.

My input is a text file hosted in a bucket on the Google Cloud Platform. The file looks like this:

{"image": "b64 encoded image string", "key": "0"}
{"image": "b64 encoded image string", "key": "1"}
{"image": "b64 encoded image string", "key": "2"}

The output should be the inference results for all images, but I get the following exception:

('Exception during running the graph: Expected image (JPEG, PNG, or GIF), got unknown format starting with \'{\\"image\\": \\"/9j/4\'\n\t [[node encoder/map/while/DecodeJpeg (defined at /usr/local/lib/python2.7/dist-packages/google/cloud/ml/prediction/frameworks/tf_prediction_lib.py:210) ]]', 1)
1
What is the data format you are using when submitting your job? Is it JSON? - Bhupesh
Yes, the data format is JSON when I submit the job. - Sharif Elfouly

1 Answer

1
votes

If your model "only accepts the b64 encoded string of the image", i.e. the signature has a single input tensor whose data type is string, then ML Engine feeds each line of the input file directly into that one string tensor. In this case no JSON parsing or base64 decoding is done, so your model receives the entire line as its input, which is exactly why DecodeJpeg complains about data starting with '{"image": "/9j/4'.
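A plain-Python simulation (not ML Engine's actual code) of the two feeding modes makes the error message easy to reproduce:

```python
import base64
import json

JPEG_MAGIC = b"\xff\xd8\xff"  # the byte signature DecodeJpeg looks for

# One line of the batch input file (toy payload, not a real image).
line = json.dumps({
    "image": base64.b64encode(JPEG_MAGIC + b"rest-of-jpeg").decode("ascii"),
    "key": "0",
})

# Named-input signature: the service parses the JSON and base64-decodes
# the "image" field, so the graph sees real JPEG bytes.
parsed = json.loads(line)
image_bytes = base64.b64decode(parsed["image"])
assert image_bytes.startswith(JPEG_MAGIC)

# Single string input: the whole line is fed verbatim, so DecodeJpeg
# sees text beginning with '{"image": "/9j/4...' and fails with
# "unknown format". (b64 of the JPEG magic bytes starts with "/9j",
# matching the exception above.)
fed_to_graph = line.encode("utf-8")
assert fed_to_graph.startswith(b'{"image"')
```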

If this is the case for your model, then you can solve the exception by adding a new op to your model that handles the "key" field in your input.
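One way to do that is to export a serving signature with one string tensor per JSON field, so the prediction service parses each line as JSON and routes each field to its own tensor. A minimal sketch in the TF 1.x style (written against `tf.compat.v1` so it also runs on newer TensorFlow); the tensor names here are hypothetical, and note that ML Engine only base64-decodes a field automatically when the input alias ends in `_bytes` and the value is supplied as `{"b64": "..."}`:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # build a TF1-style graph

graph = tf.Graph()
with graph.as_default():
    # One input per JSON field, so each line's fields are fed to
    # separate tensors instead of the raw text going to one input.
    image_bytes = tf.compat.v1.placeholder(
        tf.string, shape=[None], name="image_bytes")
    key = tf.compat.v1.placeholder(tf.string, shape=[None], name="key")

    # Decode each JPEG from its raw bytes.
    images = tf.map_fn(
        lambda b: tf.image.decode_jpeg(b, channels=3),
        image_bytes,
        dtype=tf.uint8)

    # Pass the key through untouched so every prediction can be
    # matched back to its input line.
    key_out = tf.identity(key, name="key_out")
```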