
I am trying to reuse, with TensorFlow.js, models created by TensorFlow. To understand how the converter works, I have tried to convert the MobileNetV2 model:

tensorflowjs_converter --input_format=tf_hub --output_format=tensorflowjs   'https://tfhub.dev/google/imagenet/mobilenet_v2_050_224/classification/2' ./web_model

That seems to work. Then I tried to use this newly converted model in the MobileNet demo by changing the way the model is loaded:

// const model = await mobilenet.load({version, alpha});
// replaced by
const model = await mobilenet.load({ modelUrl: './web_model/model.json', version, alpha, inputRange: [0, 1], fromTFHub: true });

// Classify the image.
const predictions = await model.classify(img);
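For context, classify() returns the top matches with their probabilities. The top-k selection it performs over the model's output can be sketched in plain JavaScript (the labels and probabilities below are made up for illustration):

```javascript
// Sketch of the top-k step classify() performs internally: given per-class
// probabilities, return the k best (className, probability) pairs.
function topK(probabilities, labels, k = 3) {
  return probabilities
    .map((probability, i) => ({ className: labels[i], probability }))
    .sort((a, b) => b.probability - a.probability)
    .slice(0, k);
}

// Illustrative labels and scores, not real model output.
const labels = ['tabby cat', 'tiger cat', 'Egyptian cat', 'lynx'];
const probs = [0.10, 0.55, 0.30, 0.05];
// Highest-probability entry ('tiger cat' here) comes first.
console.log(topK(probs, labels, 3));
```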

The classify call triggers an error:

Uncaught (in promise) Error: Activation relu6 has not been implemented for the WebGL backend.
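For reference, relu6 itself is a simple activation: ReLU clipped at 6, i.e. min(max(x, 0), 6). A plain-JavaScript sketch of what the missing WebGL kernel computes elementwise:

```javascript
// relu6(x) = min(max(x, 0), 6), applied elementwise.
// This is the activation MobileNetV2 uses; the error above means the
// WebGL backend has no kernel for it in this tfjs version.
function relu6(values) {
  return values.map((x) => Math.min(Math.max(x, 0), 6));
}

console.log(relu6([-3, 0.5, 4, 10])); // → [0, 0.5, 4, 6]
```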

I have no clue how the official TensorFlow.js MobileNet model was generated :(

There is no fromTFhub property - edkeveked

3 Answers

from keras.applications import MobileNetV2
from keras.models import save_model

# Load MobileNetV2 as a feature extractor (no classification head)
model = MobileNetV2(weights='imagenet', include_top=False)

save_model(
    model,
    "mobilenet2.h5",
    overwrite=True,
)

Convert mobilenet feature extractor to js

tensorflowjs_converter --input_format keras \
                       path/to/mobilenet2.h5 \
                       path/to/tfjs_target_dir
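A sketch of loading the result in the browser, assuming the converter output (model.json plus weight shards) is served at /tfjs_target_dir. Note that a Keras-format conversion produces a Layers model, so it is loaded with tf.loadLayersModel rather than the graph-model path used for TF Hub conversions:

```javascript
import * as tf from '@tensorflow/tfjs';

// Assumption: the files from path/to/tfjs_target_dir are served over
// HTTP at /tfjs_target_dir. Keras conversions yield a Layers model,
// so use loadLayersModel, not loadGraphModel.
const model = await tf.loadLayersModel('/tfjs_target_dir/model.json');
model.summary();
```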

Support for the relu6 operator was added just a week ago. It should be available in the next TensorFlow.js release.

Please try to use the latest version once it's released.

See: https://github.com/tensorflow/tfjs/pull/2016


This issue has nothing to do with a new release. I had the same issue and went round in circles. If you train in a GPU runtime (I used a Colab GPU runtime), this issue happens. Just run fit/fit_generator in CPU mode, and your model will be ready in a happy state.