I have a model stored as an HDF5 file, which I export to a protobuf (PB) file using tf.saved_model.save, like this:
from tensorflow import keras
import tensorflow as tf
model = keras.models.load_model("model.hdf5")
tf.saved_model.save(model, './output_dir/')
This works fine, and the result is a saved_model.pb file that I can later open in other software with no issues.
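For reference, a quick way to sanity-check the export (just a sketch, reusing the ./output_dir/ path from above) is to load it back with TF 2.x and list its signatures:
import tensorflow as tf
# Load the exported SavedModel back and print its serving signatures.
loaded = tf.saved_model.load('./output_dir/')
print(list(loaded.signatures.keys()))  # e.g. ['serving_default']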
However, when I try to import this PB file using TensorFlow 1.x, my code fails. Since PB is supposed to be a universal format, this confuses me.
The code I use to read the PB file is this:
import tensorflow as tf
curr_graph = tf.Graph()
curr_sess = tf.InteractiveSession(graph=curr_graph)
f = tf.gfile.GFile('./output_dir/saved_model.pb', 'rb')  # the exported PB file
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
f.close()
This is the exception I get:
Traceback (most recent call last):
  File "read_pb.py", line 14, in <module>
    graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
I have a different model stored as a PB file on which the reading code works fine.
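For context, the kind of flow that works for that other file looks roughly like this (a sketch; I'm assuming that file is a frozen GraphDef, and frozen_model.pb is just a placeholder name):
import tensorflow as tf
# Read a frozen GraphDef PB and import it into a fresh graph.
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')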
What's going on?
***** EDIT 1 *****
While using Andrea Angeli's code below, I've encountered the following error:
Encountered Error: NodeDef mentions attr 'exponential_avg_factor' not in Op<name=FusedBatchNormV3; signature=x:T, scale:U, offset:U, mean:U, variance:U -> y:T, batch_mean:U, batch_variance:U, reserve_space_1:U, reserve_space_2:U, reserve_space_3:U; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT]; attr=U:type,allowed=[DT_FLOAT]; attr=epsilon:float,default=0.0001; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=is_training:bool,default=true>; NodeDef: {node u-mobilenetv2/bn_Conv1/FusedBatchNormV3}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)
Is there a workaround for this?
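One idea (just a sketch, and I'm not sure whether dropping the attribute is numerically safe) would be to strip the unknown attribute from the GraphDef before importing it in TF 1.x, assuming graph_def is the GraphDef being imported:
# Remove the attr that the TF1 runtime does not recognize.
for node in graph_def.node:
    if node.op == 'FusedBatchNormV3' and 'exponential_avg_factor' in node.attr:
        del node.attr['exponential_avg_factor']
tf.import_graph_def(graph_def, name='')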