0 votes

I have trained my ssdlite_mobilenet_v3 model in TensorFlow and exported it as frozen_inference_graph.pb. I am able to run inference with it. Now I would like to convert it to OpenVINO Inference Engine files (.xml and .bin), but I encounter the errors below. I include my command line, and you may also download my model files in sample_model_inference.zip here. Could anyone help me find out what's missing, or how to fix it? Thanks a lot.

Command line:

mo_tf.py --input_model ../sample_model_inference/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ../sample_model_inference/pipeline.config

Error messages:

(openvino) paul@tensor:~/tf1.15/models/research/object_detection/samples/sample_model_ir$ mo_tf.py --input_model ../sample_model_inference/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ../sample_model_inference/pipeline.config
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/paul/tf1.15/models/research/object_detection/samples/sample_model_ir/../sample_model_inference/frozen_inference_graph.pb
	- Path for generated IR: 	/home/paul/tf1.15/models/research/object_detection/samples/sample_model_ir/.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/home/paul/tf1.15/models/research/object_detection/samples/sample_model_ir/../sample_model_inference/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	2020.1.0-61-gd349c3ba4a
/home/paul/openvino/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:493: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/paul/openvino/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:494: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/paul/openvino/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:495: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/paul/openvino/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:496: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/paul/openvino/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:497: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/paul/openvino/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:502: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ]  0
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function .
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ]  Your model looks like TensorFlow Object Detection API Model. Check if all parameters are specified:
	--tensorflow_use_custom_operations_config
	--tensorflow_object_detection_api_pipeline_config
	--input_shape (optional)
	--reverse_input_channels (if you convert a model to use with the Inference Engine sample applications)
Detailed information about conversion of this model can be found at https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (): Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

1
I am encountering the same issues. I will let you know if I succeed. (Usama Ahmed)

1 Answer

1 vote

You should use the transformations config file that matches the version of the TensorFlow Object Detection API your model was exported with (e.g. 1.14 or 1.15), rather than the generic ssd_v2_support.json.

For example

mo_tf.py --input_model ../sample_model_inference/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config ../sample_model_inference/pipeline.config

OR

mo_tf.py --input_model ../sample_model_inference/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.15.json --tensorflow_object_detection_api_pipeline_config ../sample_model_inference/pipeline.config
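
If you are not sure which Object Detection API version your frozen graph was exported with, one way to check is to list the node names under the Postprocessor scope and compare them against the patterns in the support JSON files. Below is a minimal sketch, assuming TF 1.x (or tf.compat.v1 under TF 2.x) and the file path from the question:

# Minimal sketch: print the ops under the Postprocessor scope of a frozen
# graph, so you can see which node names exist (e.g. "Postprocessor/Cast_1")
# and pick the matching ssd_support_api_*.json.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("../sample_model_inference/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.name.startswith("Postprocessor/"):
        print(node.op, node.name)

Your log stops at "Postprocessor/Cast_1", which is the naming used by newer Object Detection API exports, so ssd_support_api_v1.15.json is the likely match; the older ssd_v2_support.json expects the earlier node names, which is why the 'ObjectDetectionAPISSDPostprocessorReplacement' match fails.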