
I'm trying to get Keith Ito's Tacotron model running on Intel OpenVINO with an NCS. The Model Optimizer fails to convert the frozen model to IR format.

After asking in the Intel forum, I was told the 2018 R5 release didn't support GRU, so I changed the cells to LSTM; the model still trains and runs fine in TensorFlow. I also updated OpenVINO to the 2019 R1 release, but the optimizer still threw errors. The model has two main input nodes: inputs [N, T_in] and input_lengths [N], where N is the batch size, T_in is the number of steps in the input time series, and the values are character IDs; the default shapes are [1, ?] and [1]. The problem is the [1, ?] shape, since the Model Optimizer doesn't allow dynamic shapes. I tried various fixed values, and it always throws some error.
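
Since the optimizer needs static shapes, the natural workaround is to pad every input to a fixed length before feeding it. A minimal sketch of what I mean (MAX_LEN and pad_sequence are hypothetical names, not from the repo):

import numpy as np
import tensorflow as tf

MAX_LEN = 128  # assumed fixed T_in; must cover the longest input sentence

# Fixed-shape placeholder replacing the original [1, ?] "inputs" node
inputs = tf.placeholder(tf.int32, shape=[1, MAX_LEN], name="inputs")
input_lengths = tf.placeholder(tf.int32, shape=[1], name="input_lengths")

def pad_sequence(char_ids, pad_id=0):
    # Right-pad the character-ID sequence so every input has the same static length
    padded = np.full((1, MAX_LEN), pad_id, dtype=np.int32)
    padded[0, :len(char_ids)] = char_ids
    return padded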

I tried frozen graphs with the output node "model/griffinlim/Squeeze", which is the final decoder output, and also with "model/inference/dense/BiasAdd" as mentioned in https://github.com/keithito/tacotron/issues/95#issuecomment-362854371, which is the input to the Griffin-Lim vocoder, so that I can do the Spectrogram2Wav part outside the model and reduce its complexity.
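
Doing the Spectrogram2Wav step on the host should be straightforward with librosa's Griffin-Lim. A rough sketch (the hparams here are assumed to match the repo's defaults, e.g. ref_level_db=20, min_level_db=-100, power=1.5; adjust to your training config):

import numpy as np
import librosa

def spectrogram_to_wav(linear_db, n_iter=60, hop_length=250, win_length=1000,
                       ref_level_db=20, min_level_db=-100, power=1.5):
    # Undo the [0, 1] dB normalization applied during training (assumed hparams)
    denorm = np.clip(linear_db, 0, 1) * -min_level_db + min_level_db
    magnitude = np.power(10.0, (denorm + ref_level_db) * 0.05)
    # Network output is [frames, num_freq]; librosa expects [num_freq, frames]
    return librosa.griffinlim(magnitude.T ** power, n_iter=n_iter,
                              hop_length=hop_length, win_length=win_length)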

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model "D:\Programming\LSTM\logs-tacotron\freezeinf.pb" --freeze_placeholder_with_value "input_lengths->[1]" --input inputs --input_shape [1,128] --output model/inference/dense/BiasAdd
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      D:\Programming\Thesis\LSTM\logs-tacotron\freezeinf.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       freezeinf
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         inputs
        - Output layers:        model/inference/dense/BiasAdd
        - Input shapes:         [1,128]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.1.0-341-gc9b66a2
[ ERROR ]  Shape [  1  -1 128] is not fully defined for output 0 of "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_eltwise_ext.<locals>.<lambda> at 0x000001F00598FE18>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "model/inference/post_cbhg/conv_bank/conv1d_8/batch_normalization/batchnorm/mul_1" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

I also tried different methods for freezing the graph.

METHOD 1: Using freeze_graph.py provided with TensorFlow, after dumping the graph with:

tf.train.write_graph(self.session.graph.as_graph_def(), "models/", "graph.pb", as_text=True)

followed by:

python freeze_graph.py --input_graph .\models\graph.pb  --output_node_names "model/griffinlim/Squeeze" --output_graph .\logs-tacotron\freezeinf.pb --input_checkpoint .\logs-tacotron\model.ckpt-33000 --input_binary=false

METHOD 2: Using the following code after loading the model:

import tensorflow as tf
from tensorflow.python.framework import graph_io

frozen = tf.graph_util.convert_variables_to_constants(
    self.session, self.session.graph_def,
    ["model/inference/dense/BiasAdd"])  # or ["model/griffinlim/Squeeze"]
graph_io.write_graph(frozen, "models/", "freezeinf.pb", as_text=False)
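
A variant of method 2 that also pins the input shape (a sketch; MAX_LEN is assumed and matches the --input_shape [1,128] above) remaps the dynamic placeholders onto fixed ones while re-importing the frozen GraphDef, which would also replace the --freeze_placeholder_with_value step:

import tensorflow as tf
from tensorflow.python.framework import graph_io

MAX_LEN = 128  # assumed fixed input length

with tf.Graph().as_default() as fixed_graph:
    fixed_inputs = tf.placeholder(tf.int32, shape=[1, MAX_LEN], name="inputs")
    fixed_lengths = tf.constant([MAX_LEN], dtype=tf.int32, name="input_lengths")
    # Re-import the frozen graph so shape inference runs with static shapes
    tf.import_graph_def(frozen, name="",
                        input_map={"inputs:0": fixed_inputs,
                                   "input_lengths:0": fixed_lengths})
    graph_io.write_graph(fixed_graph.as_graph_def(), "models/",
                         "freezeinf_fixed.pb", as_text=False)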

I expected the BatchNormalization and Dropout layers to be removed by freezing, but judging from the errors they still exist.
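
In case it helps, folding them out with the TF 1.x Graph Transform Tool on the frozen GraphDef from METHOD 2 is the next thing I'd try; a sketch I haven't verified on this model (node names follow the mo_tf.py command above):

from tensorflow.tools.graph_transforms import TransformGraph
from tensorflow.python.framework import graph_io

transforms = [
    "strip_unused_nodes",                  # drop branches not needed for inference
    "fold_constants(ignore_errors=true)",  # pre-compute constant subgraphs
    "fold_batch_norms",                    # merge batchnorm mul/add into preceding weights
    "fold_old_batch_norms",
]
optimized = TransformGraph(frozen, ["inputs"],
                           ["model/inference/dense/BiasAdd"], transforms)
graph_io.write_graph(optimized, "models/", "freezeinf_opt.pb", as_text=False)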

Environment

OS: Windows 10 Pro

Python 3.6.5

TensorFlow 1.12.0

OpenVINO 2019 R1 release

Can anyone help with the above Model Optimizer problems?

Have you tried the hint from the logs: "Use --input_shape with positive integers to override model input shapes"? – kuszi
Yes, I tried that. But there are a lot of nodes (really, a lot) in the convolution bank that have variable shapes, and it is simply not feasible to provide input shapes for all of them. I expected the Model Optimizer to infer those shapes from the input shape I provided, but it simply doesn't. The DeepSpeech model guide provided by Intel covers a model with a similar structure, yet somehow that frozen graph has fixed shapes for its nodes when I check in TensorBoard. Is there any way to freeze my graph with fixed shapes and no BatchNorm and Dropout layers? – Sujeendran Menon

1 Answer


OpenVINO does not support this model yet. We will keep you updated when it does.