
I'm getting the following error when I try to convert my trained TensorFlow model to IR:

```
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:  None
    - Path for generated IR:    /home/ec2-user/Notebooks/.
    - IR output name:   saved_model
    - Log level:    ERROR
    - Batch:    Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     [1,512,512,3]
    - Mean values:  Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:  FP32
    - Enable fusing:    True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:   None
    - Reverse input channels:   False
TensorFlow specific parameters:
    - Input model in text protobuf format:  False
    - Path to model dump for TensorBoard:   None
    - List of shared libraries with TensorFlow custom layers implementation:    None
    - Update the configuration file with input/output node names:   None
    - Use configuration file used to generate the model with Object Detection API:  None
    - Use the config file:  None
Model Optimizer version:    2021.1.0-1237-bece22ac675-releases/2021/1
2021-01-06 09:55:53.886652: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2021/data_processing/dl_streamer/lib:/opt/intel/openvino_2021/data_processing/gstreamer/lib:/opt/intel/openvino_2021/opencv/lib:/opt/intel/openvino_2021/deployment_tools/ngraph/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl_unite/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
2021-01-06 09:55:53.886685: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-01-06 09:55:57.571164: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2021/data_processing/dl_streamer/lib:/opt/intel/openvino_2021/data_processing/gstreamer/lib:/opt/intel/openvino_2021/opencv/lib:/opt/intel/openvino_2021/deployment_tools/ngraph/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl_unite/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
2021-01-06 09:55:57.571198: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2021-01-06 09:55:57.571216: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ip-172-31-65-233.ec2.internal): /proc/driver/nvidia/version does not exist
2021-01-06 09:55:57.571389: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-01-06 09:55:57.607790: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2999980000 Hz
2021-01-06 09:55:57.608150: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3d4d800 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-01-06 09:55:57.608169: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-01-06 09:56:15.612182: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-01-06 09:56:15.613446: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-01-06 09:56:16.010348: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] Optimization results for grappler item: graph_to_optimize
2021-01-06 09:56:16.010388: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: Graph size after: 5850 nodes (5149), 13416 edges (12708), time = 182.03ms.
2021-01-06 09:56:16.010397: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: Graph size after: 5850 nodes (0), 13416 edges (0), time = 88.009ms.
2021-01-06 09:56:16.010404: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] Optimization results for grappler item: __inference_Preprocessor_ResizeToRange_cond_false_12695_58123
2021-01-06 09:56:16.010409: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: function_optimizer did nothing. time = 0.002ms.
2021-01-06 09:56:16.010416: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: function_optimizer did nothing. time = 0ms.
2021-01-06 09:56:16.010422: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] Optimization results for grappler item: __inference_Preprocessor_ResizeToRange_cond_true_12694_56958
2021-01-06 09:56:16.010428: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: function_optimizer did nothing. time = 0.002ms.
2021-01-06 09:56:16.010433: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818] function_optimizer: function_optimizer did nothing. time = 0.001ms.
[ ERROR ]  Cannot infer shapes or values for node "StatefulPartitionedCall/Preprocessor/ResizeToRange/cond".
[ ERROR ]  Function __inference_Preprocessor_ResizeToRange_cond_true_12694_56958 is not defined.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f1fbc2c8050>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/Preprocessor/ResizeToRange/cond" node.
 For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)
```

Debug log file: [Debug_Log][1]

Model Folder: [Model][2]

[1]: https://drive.google.com/file/d/1VFlAW7C0RhmL-T1HrKwBMrvQzJNvCql7/view?usp=sharing
[2]: https://drive.google.com/drive/folders/1kkbp9fAXXsiDeq583Z0tV95zIfbI9_-N?usp=sharing


1 Answer


First and foremost, please note that OpenVINO supports only the following operating systems:

Ubuntu 18.04.x long-term support (LTS), 64-bit

CentOS 7.6, 64-bit (for target only)

Yocto Project v3.0, 64-bit (for target only and requires modifications)

Windows 10

Raspbian* Buster, 32-bit

Raspbian* Stretch, 32-bit

macOS

Based on your OS, make sure that you have set up OpenVINO correctly and that you are able to run the sample applications as described here.
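For instance, on a default Linux installation of OpenVINO 2021.x, the bundled demo script is a quick sanity check of the whole toolchain (the path below assumes the standard install location; adjust it if you installed elsewhere):

```shell
# Sanity-check the OpenVINO installation (default Linux install path).
# The demo downloads SqueezeNet, converts it to IR with Model Optimizer,
# and runs the classification sample on CPU.
cd /opt/intel/openvino_2021/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh
```

If this script completes and prints classification results, the toolkit and its dependencies are installed correctly.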

Once that is ready, cross-check your TensorFlow topology and see whether it's listed here.

Models and topologies listed there are supported by OpenVINO.

Once you are sure your model is supported, you may proceed to the next step according to the guide above.
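As a sketch, a conversion command for a TF2 SavedModel in OpenVINO 2021.1 might look like the following; the SavedModel path is a placeholder, and `--input_shape`/`--output_dir` match the values in your log:

```shell
# Hypothetical Model Optimizer invocation for a TF2 SavedModel
# (replace /path/to/saved_model with the directory containing saved_model.pb).
python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo_tf.py \
    --saved_model_dir /path/to/saved_model \
    --input_shape [1,512,512,3] \
    --output_dir /home/ec2-user/Notebooks/
```

For models built with the TensorFlow Object Detection API, the conversion guide also describes extra configuration files that Model Optimizer may require; follow the steps for your specific topology.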

NOTE: Please ensure that you have run the setupvars script and can see the initialization message each time you open a new terminal.
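On Linux with a default install, that means sourcing the script below in every new shell (or adding it to your `~/.bashrc`):

```shell
# Set up the OpenVINO environment variables for this shell session;
# on success it prints "[setupvars.sh] OpenVINO environment initialized".
source /opt/intel/openvino_2021/bin/setupvars.sh
```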