
I used TensorFlow 1.6 to train an SSD-Inception-v2 model from scratch. There were no warnings or errors. Then I exported the model using the following flags:

--pipeline_config_path experiments/ssd_inception_v2/ssd_inception_v2.config
--trained_checkpoint_prefix experiments/ssd_inception_v2/train/model.ckpt-400097
--output_directory experiments/ssd_inception_v2/frozen_graphs/

After that, I uploaded the saved_model.pb to a Google Cloud Storage bucket, created a model in ml-engine, and created a version (I did use --runtime-version=1.6).

Finally, I used the gcloud command to ask for an online prediction but obtained the following error:

{
"error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="The second input must be a scalar, but it has shape [1]\n\t [[Node: map/while/decode_image/cond_jpeg/cond_png/DecodePng/Switch = Switch[T=DT_STRING, _class=["loc:/TensorArrayReadV3"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/TensorArrayReadV3, map/while/decode_image/is_jpeg)]]")"
}

The log indicates the problem arose while the model was executing.

Can you please share how you construct the contents of the file you use with the gcloud command? - rhaertel80
Sure: pastebin.com/D6nzSBkG. Thanks for the help, by the way. - Rodrigo Loza
Do you mind providing the actual contents of the file (with the image contents elided for clarity)? That will help ensure correctness - rhaertel80
Yes, here is the link to a drive folder. drive.google.com/open?id=1a4q-n6OH9yNYtOl3lcptvrRZ3i95cKM- Thanks so much for the help. - Rodrigo Loza

1 Answer


The format for a prediction request is (cf. the official docs):

{
  "instances": [
    ...
  ]
}

According to this blog post about object detection, the flag --input_type encoded_image_string_tensor produces a model with a single input named inputs, which accepts a batch of JPEG or PNG images. Those images have to be base64-encoded. So, putting it all together, the actual request should look like:

{
  "instances": [
    {
      "inputs": {
        "b64": "..."
      }
    }
  ]
}
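As a minimal sketch of how such a body could be assembled in Python (build_request_body is a hypothetical helper, not part of any library), the key detail is wrapping the base64 string in a {"b64": ...} object:

```python
import json
from base64 import b64encode

def build_request_body(image_paths):
    """Build the full REST request body for online prediction.

    Each instance wraps base64-encoded image bytes in a {"b64": ...}
    object, so the service decodes them back to binary on arrival.
    """
    instances = []
    for path in image_paths:
        with open(path, "rb") as f:
            encoded = b64encode(f.read()).decode("utf-8")
        instances.append({"inputs": {"b64": encoded}})
    return json.dumps({"instances": instances})
```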

Since there is only a single input, we can use a shorthand: instead of an object/dictionary {"inputs": {"b64": ...}}, we pass just the value of the dictionary, i.e., {"b64": ...}:

{
  "instances": [
    {
      "b64": "..."
    }
  ]
}

Note that either one of these is acceptable if there is exactly one input to the model.

Even though the above are the formats of the requests the service accepts, the gcloud command-line tool does not expect the full body of the request. It expects only the actual instances, i.e., the content between the [] in the JSON, separated by newlines. That means your file should look like this:

{"b64": "..."}

or this

{"inputs": {"b64": "..."}}

If you want to send more than one image, include one instance per line in your file.

Try something like the following code to produce the output:

import json
from base64 import b64encode

json_data = []
for image in images:
    with open(image, "rb") as open_file:
        byte_content = open_file.read()
    # Convert the raw bytes to base64, then decode to a UTF-8 string
    base64_string = b64encode(byte_content).decode("utf-8")
    # One instance per image, using the shorthand format
    json_data.append(json.dumps({"b64": base64_string}))

# Write one instance per line
with open(predict_instance_json, "w") as fp:
    fp.write('\n'.join(json_data))
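To sanity-check the file before sending it, you can round-trip it: read each line back, parse the JSON, and base64-decode the image bytes. This is just a sketch (read_instances is a hypothetical helper); it accepts both the shorthand and the named-input form:

```python
import json
from base64 import b64decode

def read_instances(path):
    """Read a newline-delimited instances file (one JSON object per
    line, as the gcloud tool expects) and decode each image to bytes."""
    images = []
    with open(path) as fp:
        for line in fp:
            instance = json.loads(line)
            # Either shorthand {"b64": ...} or named {"inputs": {"b64": ...}}
            payload = instance.get("inputs", instance)
            images.append(b64decode(payload["b64"]))
    return images
```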