1 vote

I just finished the TensorFlow for Poets 2: TFLite tutorial (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#0). By the end of it I was able to run the MobileNet it provides on my phone.

Now I am trying to replace the tutorial's MobileNet with one I have trained from scratch using the MobileNet from the TensorFlow repository. However, when I try to use TOCO, I get a segmentation fault. If I use optimize_for_inference first, I get warnings like

WARNING:tensorflow:Incorrect shape for mean, found (0,), expected (32,), for node MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNorm

WARNING:tensorflow:Didn't find expected Conv2D input to 'MobilenetV1/Conv2d_1_depthwise/BatchNorm/FusedBatchNorm'

I compared the graph of the .pb file from the tutorial with the one from the repository and noticed a difference in how batch norm is represented: in the tutorial model it is simply a convolution followed by an addition, while the repository model uses a FusedBatchNorm operator. I also tried setting fused=False, but then I get this error:

/opt/conda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.

from ._conv import register_converters as _register_converters

2018-05-09 11:51:38.419786: W tensorflow/contrib/lite/toco/toco_cmdline_flags.cc:178] --input_type is deprecated. It was an ambiguous flag that set both --input_data_types and --inference_input_type. If you are trying to complement the input file with information about the type of input arrays, use --input_data_type. If you are trying to control the quantization/dequantization of real-numbers input arrays in the output file, use --inference_input_type.

2018-05-09 11:51:38.781372: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1057] Converting unsupported operation: SquaredDifference

(the line above is repeated 13 times with successive timestamps)

2018-05-09 11:51:38.832089: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 656 operators, 874 arrays (0 quantized)

2018-05-09 11:51:39.037810: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 222 operators, 435 arrays (0 quantized)

2018-05-09 11:51:39.041366: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 222 operators, 435 arrays (0 quantized)

2018-05-09 11:51:39.044092: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:313] Total transient array allocated size: 4333824 bytes, theoretical optimal value: 4333696 bytes.

2018-05-09 11:51:39.045179: F tensorflow/contrib/lite/toco/tflite/export.cc:303] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops. Here is a list of operators for which you will need custom implementations: Mean, RSQRT, SquaredDifference, Squeeze.

Aborted (core dumped)

I assume it is possible to convert the FusedBatchNorm operator into a convolution/addition combination using TOCO or another script. Is that true? If so, where can I find such a conversion script?
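For what it's worth, the convolution/addition form is exactly what folding a frozen batch norm produces: since the moving mean and variance are constants after freezing, the normalization collapses into a rescale of the convolution weights plus a bias. A minimal NumPy sketch of that algebra (scalar weight and made-up values, purely illustrative):

```python
import numpy as np

# Illustrative values only -- a single-channel "1x1 convolution".
np.random.seed(0)
x = np.random.randn(8)           # activations
w = 0.7                          # conv weight
gamma, beta = 1.3, -0.2          # batch-norm scale and shift
mean, var, eps = 0.5, 2.0, 1e-3  # frozen moving statistics

# Batch norm applied after the convolution:
bn_out = gamma * (x * w - mean) / np.sqrt(var + eps) + beta

# Folded form: scale the weight, add a bias -- no BatchNorm op left.
scale = gamma / np.sqrt(var + eps)
w_folded = w * scale
b_folded = beta - mean * scale
folded_out = x * w_folded + b_folded

assert np.allclose(bn_out, folded_out)
```

This is the transformation TOCO performs internally when it folds batch norms, which is why the tutorial's converted graph shows only a convolution followed by an addition.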

Batch norm representation of MobileNet from the tutorial: image1

Batch norm representation of MobileNet from the repository: image2


2 Answers

0 votes

TOCO should automatically fold batch norms whether they are fused or unfused. This is the overall flow:

  1. Train the model
  2. Build the eval model and freeze it with the training checkpoint, OR export a SavedModel
  3. Provide the frozen graph or SavedModel to tflite_convert
  4. Run inference

You may encounter unsupported ops. To avoid that, make sure you are providing the correct inputs and outputs to the conversion: if you pass an output array past the final prediction tensor (or before the preprocessing graph has been stripped), training-only ops like SquaredDifference, Mean, and RSQRT end up in the converted graph.
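For step 3, a typical tflite_convert invocation on a frozen graph looks like the following. The file paths, input shape, and array names here are placeholders, not values from your graph; substitute the actual input/output tensor names of your trained MobileNet:

```shell
# Placeholder paths and tensor names -- adjust to your own frozen graph.
tflite_convert \
  --graph_def_file=frozen_mobilenet.pb \
  --output_file=mobilenet.tflite \
  --input_arrays=input \
  --input_shapes=1,224,224,3 \
  --output_arrays=MobilenetV1/Predictions/Reshape_1
```

Picking the prediction tensor (rather than some later or earlier node) as --output_arrays is usually what keeps the unsupported training ops out of the conversion.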

Hope that helps!

-1 votes

TensorFlow Lite automatically fuses BatchNorm layers, because in a frozen graph the batch-norm parameters are constants. Better to try a TF2 nightly build.