2 votes

I have a TensorFlow model which takes two 2D arrays as input.

This is how I trained the model.

x.shape == (100, 250)
y.shape == (100, 10)

model.fit([x,y], y_train)

Now I'm using the TensorFlow Serving API to deploy it into production. When I try to make an API request for predictions, I get an error:

"{'error': 'instances is a plain list, but expecting list of objects as multiple input tensors required as per tensorinfo_map'}"

import json
import requests

headers = {"content-type": "application/json"}

# both x and y are 2D NumPy arrays
data1 = json.dumps({"signature_name": "serving_default",
                    "instances": [x.tolist(), y.tolist()]})

json_response = requests.post('http://44.287.13.8:9000/v1/models/context_model/versions/1:predict',
                              data=data1, headers=headers)

pred = json.loads(json_response.text)

print(pred)
{'error': 'instances is a plain list, but expecting list of objects as multiple input tensors required as per tensorinfo_map'}
Can you share your complete code so that we can help you? Also, share your SignatureDef by running the command !saved_model_cli show --dir PathOfModel --tag_set serve --signature_def predict. Thanks! - Tensorflow Support
Did you find a solution to this? - bonobo
@bonobo If I have multiple inputs, then while training I pass the inputs as a dict with keys as input names and values as input values. That way, when sending input to TensorFlow Serving, I can pass a dict with the multiple inputs under the appropriate keys. - user_12
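For example, here is a minimal sketch of that approach, assuming a Keras functional model; the input names, layer sizes, and dummy data below are illustrative and not taken from the question:

import numpy as np
import tensorflow as tf

# Dummy data with the same shapes as in the question
x = np.random.rand(100, 250).astype("float32")
y = np.random.rand(100, 10).astype("float32")
y_train = np.random.rand(100, 10).astype("float32")

# Two named inputs; explicit names make the serving payload keys predictable
in_x = tf.keras.Input(shape=(250,), name="input_x")
in_y = tf.keras.Input(shape=(10,), name="input_y")
merged = tf.keras.layers.Concatenate()([in_x, in_y])
out = tf.keras.layers.Dense(10, activation="softmax")(merged)

model = tf.keras.Model(inputs=[in_x, in_y], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Train with a dict keyed by the input names instead of a plain list
model.fit({"input_x": x, "input_y": y}, y_train)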

2 Answers

0 votes

I had quite a similar problem. I wanted to test my saved model in combination with the serving API. First I wrote the following POST command to get the prediction:

curl -g -d "{"""instances""": [[[158, 194, 8102, 5294, 15.404460999999998, 47.241882000000004, 1]]]}" -X POST http://localhost:8501/v1/models/my_model:predict

With this input I got the same error as you. As mentioned at https://www.tensorflow.org/tfx/serving/api_rest#predict_api, you have to serve the input data in the following way:

curl -d "{"""instances""": [{"""NAME1W1""": [158], """NAME1W2""": [194], """ZIP""": [""8102""], """STREETW""": [5294], """LONGITUDE""": 15.404460999999998, """LATITUDE""": 47.241882000000004, """ASG""": [1] }]}" -X POST http://localhost:8501/v1/models/my_model:predict

Don't get confused by the """; I had to use them because I am using curl on Windows and not on Linux.
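For comparison, the same corrected request can be sent from Python with requests, which avoids the Windows quote-escaping entirely. This is just a sketch reusing the feature names and values from the curl command above:

import json
import requests

# One object per example, keyed by the model's input names (taken from the curl command above)
payload = {
    "instances": [{
        "NAME1W1": [158],
        "NAME1W2": [194],
        "ZIP": ["8102"],
        "STREETW": [5294],
        "LONGITUDE": 15.404460999999998,
        "LATITUDE": 47.241882000000004,
        "ASG": [1],
    }]
}

response = requests.post("http://localhost:8501/v1/models/my_model:predict",
                         data=json.dumps(payload),
                         headers={"content-type": "application/json"})
print(response.json())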

0 votes

When you have multiple inputs, you need to know the input names from the SavedModel's SignatureDef and pass a list of dicts to TensorFlow Serving.

First, make sure that you have the correct input names by running this command in a terminal:

saved_model_cli show --dir /home/your_user_name/path/to/your/saved_model/1 --all

In my case, I got the following as part of the output. What is important is that the input names are input_1 and input_2 (again, in my case).

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: serving_default_input_1:0
    inputs['input_2'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: serving_default_input_2:0

Also, as you can see from the shapes, each input expects 2 values per example.

Finally, I can send an HTTP request for a prediction with input_1 set to [1., 0.] and input_2 set to [0.36708057, 0.66139287]:

import requests
import json

data = json.dumps({"signature_name":"serving_default","instances":[{'input_1':[1., 0.],'input_2':[0.36708057, 0.66139287]}]})
headers = {"content-type":"application/json"}
json_response = requests.post('http://localhost:8501/v1/models/test:predict',data=data,headers=headers)
print(json.loads(json_response.text))

The output was {'predictions':[[0.315578]]}
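If you need predictions for several examples in a single request, the "instances" list simply holds one dict per example, using the same input names. A short sketch based on the code above (the second example's values are made up):

import json
import requests

# Each dict in "instances" is one example; keys must match the SignatureDef input names
data = json.dumps({
    "signature_name": "serving_default",
    "instances": [
        {"input_1": [1., 0.], "input_2": [0.36708057, 0.66139287]},
        {"input_1": [0., 1.], "input_2": [0.5, 0.5]},
    ]
})
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/test:predict',
                              data=data, headers=headers)
print(json.loads(json_response.text))  # one prediction per instance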