
I am sending a base64-encoded image via an AJAX POST to a model hosted on Google Cloud ML Engine. I am getting an error telling me that my serving_input_receiver_fn() is failing to decode the image and transform it back into a JPEG.
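For reference, the request body is built roughly like this on the client side. This is a sketch following the Cloud ML Engine online-prediction convention (binary inputs wrapped in {"b64": ...} under an input alias ending in _bytes); the file name and alias are placeholders:

    import base64
    import json

    with open("example.jpg", "rb") as f:  # hypothetical file
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    # ML Engine expects binary inputs wrapped in {"b64": ...}
    # under an input alias that ends in "_bytes".
    payload = json.dumps({"instances": [{"image_bytes": {"b64": image_b64}}]})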

Error:

Prediction failed: Error during model execution:
AbortionError(code=StatusCode.INVALID_ARGUMENT,
details="Expected image (JPEG, PNG, or GIF), got unknown format
starting with 'u\253Z\212f\240{\370\351z\006\332\261\356\270\377'
[[{{node map/while/DecodeJpeg}} = DecodeJpeg[_output_shapes=[[?,?,3]],
acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true,
ratio=1, try_recover_truncated=false,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/TensorArrayReadV3)]]")

Here is my serving_input_receiver_fn(), broken down step by step (the full code follows the list):

  1. The first step, I believe, is to handle the incoming base64-encoded string and decode it (see the note on web-safe base64 after this list). This is done with:

    image = tensorflow.io.decode_base64(image_str_tensor)

  2. The next step, I believe, is to open the bytes, but this is where I don't know how to handle the decoded base64 string with TensorFlow code and need help.

In a Python Flask app this can be done with:

    image = Image.open(io.BytesIO(decoded))

  3. Then pass the bytes through to be decoded by tf.image.decode_jpeg?

    image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
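One note on step 1: tf.io.decode_base64 expects *web-safe* base64 (the variant that uses - and _ instead of + and /), so a string produced with the standard alphabet can fail to decode. A minimal client-side sketch (the file name is a placeholder):

    import base64

    with open("example.jpg", "rb") as f:
        # tf.io.decode_base64 only understands the web-safe alphabet,
        # so encode with urlsafe_b64encode rather than b64encode.
        image_b64 = base64.urlsafe_b64encode(f.read()).decode("ascii")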

Full serving_input_receiver_fn() code:

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tensorflow.io.decode_base64(image_str_tensor)
        image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        image = tensorflow.expand_dims(image, 0)
        image = tensorflow.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tensorflow.squeeze(image, axis=[0])
        image = tensorflow.cast(image, dtype=tensorflow.uint8)
        return image

How do I decode my base64 string back into a JPEG and then convert the JPEG into a tensor?

Can you show your image_str_tensor? AFAIK the TF Serving HTTP API can take the image as a base64 string in JSON. - johnjohnlys
Thanks for pointing out image_str_tensor, johnjohnlys -- something that has been bothering me. It just shows up in this answer: (stackoverflow.com/questions/51432589/…) and in this one: (github.com/mhwilder/tf-keras-gcloud-deployment/blob/master/…) -- as you can see, I just copied from the first answer, and it appears image_str_tensor just passes through without being declared. - Pysnek313
I already have my image encoded and sent with json.stringify via AJAX POST. The problem is decoding it and converting it back to a JPEG once it reaches the serving input receiver function. - Pysnek313
To send an image (binary) input as part of an HTTP payload, ML Engine requires you to send it base64-encoded. ML Engine will decode it before sending it to your graph, so you don't need a decode in your graph. - Bhupesh

1 Answer


Here is a sample serving input function for processing base64-encoded images:

import os
import shutil
import tensorflow as tf

HEIGHT = 224
WIDTH = 224
CHANNELS = 3
IMAGE_SHAPE = (HEIGHT, WIDTH)
version = 'v1'

def image_preprocessing(image):
    # NOTE: assumed implementation -- the original answer referenced
    # image_preprocessing() without defining it. This mirrors the
    # resize/cast steps from the question.
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(image, IMAGE_SHAPE, align_corners=False)
    image = tf.squeeze(image, axis=[0])
    return tf.cast(image, dtype=tf.uint8)

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        # ML Engine strips the {"b64": ...} wrapper before the graph runs,
        # so the tensor already holds raw JPEG bytes; decode them directly.
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

# `estimator` here is the trained tf.estimator.Estimator being exported.
export_path = os.path.join('/tmp/models/json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
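To call the exported model, the request has to put the base64 string under the 'image_bytes' alias declared in the receiver above. A sketch using the Google API Python client (project, model, and file names are placeholders):

    import base64
    from googleapiclient import discovery

    # Hypothetical resource name; substitute your own project/model/version.
    name = 'projects/my-project/models/my-model/versions/v1'

    with open('example.jpg', 'rb') as f:
        image_b64 = base64.b64encode(f.read()).decode('ascii')

    service = discovery.build('ml', 'v1')
    response = service.projects().predict(
        name=name,
        # The {"b64": ...} wrapper tells ML Engine to base64-decode the
        # string before it reaches the 'image_bytes' input in the graph.
        body={'instances': [{'image_bytes': {'b64': image_b64}}]}
    ).execute()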