I have created a model and saved it at the path "/usr/local/google/home/abc/Jupyter_Notebooks/export".
Then I copied it into a TensorFlow Serving Docker container, committed the container, and ran inference against that model, getting the results shown below.
The commands I ran in the terminal to achieve this are shown below:
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/export sb:/models/export
sudo docker commit --change "ENV MODEL_NAME export" sb rak_iris_container
sudo docker kill sb
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/export,target=/models/export -e MODEL_NAME=export -t tensorflow/serving &
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/export/1554294699 --all
curl -d '{"examples":[{"SepalLength":[5.1],"SepalWidth":[3.3],"PetalLength":[1.7],"PetalWidth":[0.5]}]}' \
-X POST http://localhost:8501/v1/models/export:classify
The output of the above inference is:
{
    "results": [[["0", 0.998091], ["1", 0.00190929], ["2", 1.46236e-08]]]
}
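For completeness, the same classify request can also be sent from Python instead of curl. This is a minimal sketch, assuming the requests library is installed and the server started by the docker run command above is still listening on port 8501:

import json
import requests

# Same payload as the curl command: one example with the four Iris features
payload = {
    "examples": [
        {"SepalLength": [5.1], "SepalWidth": [3.3],
         "PetalLength": [1.7], "PetalWidth": [0.5]}
    ]
}

# POST to the classify endpoint of the model named "export"
response = requests.post(
    "http://localhost:8501/v1/models/export:classify",
    data=json.dumps(payload))
print(response.json())  # should print the same "results" structure as above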
The model was saved using the code below:
feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
export_dir = classifier.export_savedmodel('export', serving_input_receiver_fn)
print('Exported to {}'.format(export_dir))
The output of the above code is:
Exported to b'/usr/local/google/home/abc/Jupyter_Notebooks/export/1554980806'
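The export snippet above references my_feature_columns and classifier without defining them. For context, here is a minimal sketch of how they might be set up for this Iris model; the feature names match the keys in the classify request, but the hidden-unit sizes and the training step are assumptions, not the original code:

import tensorflow as tf

# Hypothetical feature columns matching the keys sent in the classify request
my_feature_columns = [
    tf.feature_column.numeric_column(key)
    for key in ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
]

# Hypothetical DNN classifier; the actual architecture is not shown in the question
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    hidden_units=[10, 10],
    n_classes=3)

# classifier.train(...) would be run before the export code above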