
I'm trying to serve a model trained following this tutorial, using our own CSV dataset in which each row is composed of 4 int values (the last being the label), remotely via TensorFlow Serving:

I'm running TensorFlow Serving remotely using Docker, and my dev environment is Windows with Python 3.6.

I export my model using the following code, similar to the example given here:

# Feature spec for the three input ints; the label column is not served.
feature_spec = {'firstInt': tf.FixedLenFeature([1], tf.int64),
                'secondInt': tf.FixedLenFeature([1], tf.int64),
                'thirdInt': tf.FixedLenFeature([1], tf.int64)}

def serving_input_receiver_fn():
    # Placeholder that will receive serialized tf.Example protos at serving time.
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[None],
                                           name='input_example_tensor')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

classifier.export_savedmodel(
    '.\\SaveLC\\save_dir',
    serving_input_receiver_fn=serving_input_receiver_fn)
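Since the serving input receiver above parses serialized tf.Example protos, I understand each instance in a REST request would need to be a base64-encoded serialized Example rather than a plain list of numbers. A client-side sketch of what I think the request body should look like (the `serialized` bytes below are a stand-in for a real `tf.train.Example(...).SerializeToString()` built from firstInt/secondInt/thirdInt):

```python
import base64
import json

# Stand-in for the bytes of a serialized tf.train.Example proto;
# in practice this would come from example.SerializeToString().
serialized = b"stand-in bytes"

# TensorFlow Serving's REST API accepts binary input values as
# {"b64": "..."} objects; the key "examples" matches the input
# name defined in receiver_tensors above.
payload = json.dumps({
    "instances": [
        {"examples": {"b64": base64.b64encode(serialized).decode("utf-8")}}
    ]
})
```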

I'm honestly not sure what to expect as a result, but in this guide, the half_plus_two model returns a predictions array like this

"predictions": [
        2.5,
        3,
        4.5
    ]

when it's sent a POST request like this

{"instances": [1.0, 2.0, 5.0]}

So I imagine something similar should be returned, but instead I'm told the servable does not even exist. Notably, it does this with the other models provided in the guide as well.

"error": "Servable not found for request: Latest(save_dir_new)"

What is this servable, and how is it meant to be exported, given that my current method doesn't work?

Thanks


1 Answer


It looks like the path to which you saved the model, .\\SaveLC\\save_dir, and the path you use when starting the TensorFlow Serving container and opening the REST API port might not be the same.

The command for starting the TensorFlow Serving container and opening the REST API port is:

docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &

The path before the colon (in this example, $TESTDATA/saved_model_half_plus_two_cpu) should be the same as the path to which you saved your model, and it should be an absolute path when you execute the command from a terminal or Command Prompt.

One more important point about the model path: you should not include the version number or timestamp directory in the command above; TensorFlow Serving looks inside the mounted directory and loads the latest numeric version subdirectory on its own.
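For instance, assuming export_savedmodel wrote a timestamped version directory under SaveLC/save_dir (the 1556012345 name below is invented for illustration), the on-disk layout and a matching command would look roughly like this; the key point is to mount the directory *above* the version folder:

```shell
# Hypothetical SavedModel layout produced by export_savedmodel; the
# numeric version directory name is made up for this sketch.
mkdir -p SaveLC/save_dir/1556012345/variables
touch SaveLC/save_dir/1556012345/saved_model.pb

# Serve it by mounting the directory above the version folder;
# TensorFlow Serving picks the latest numeric subdirectory itself:
#
#   docker run -t --rm -p 8501:8501 \
#       -v "$(pwd)/SaveLC/save_dir:/models/save_dir_new" \
#       -e MODEL_NAME=save_dir_new \
#       tensorflow/serving &

ls SaveLC/save_dir
```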