I am having trouble deploying an AutoML model created with BigQuery ML to AI Platform for online prediction.
I created the AutoML model in BigQuery using the standard SQL procedure:
CREATE OR REPLACE MODEL `model_name`
OPTIONS (
  model_type = 'automl_regressor',
  budget_hours = 2.0,
  ...
) AS
SELECT ...
This works fine and I can get predictions successfully. I now want to deploy the model for online prediction, so I exported it to a GCS bucket via the Export Model function in the BigQuery Cloud Console. This produces a directory in the bucket with the following contents:
assets/
saved_model.pb
variables/
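For reference, the same export can also be done from the command line with the bq tool instead of the console (the dataset, model name, and bucket path below are placeholders for my actual values):

```shell
# Export the BigQuery ML model to Cloud Storage as a TensorFlow SavedModel.
# 'mydataset.model_name' and the gs:// path are placeholders.
bq extract -m 'mydataset.model_name' gs://my_bucket/exported_model
```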
I then went to the AI Platform console, created a Model, and proceeded to create a Version for that model with the following pre-built container settings:
- Python Version: 3.7
- Framework: TensorFlow
- Framework Version: 2.3.1
- ML runtime version: 2.3
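For what it's worth, I believe the console settings above correspond roughly to this gcloud invocation (model name, version name, and bucket path are placeholders for my actual values):

```shell
# Create the model resource, then a version pointing at the exported SavedModel.
# 'my_model', 'v1', and the gs:// path are placeholders.
gcloud ai-platform models create my_model --region=global
gcloud ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my_bucket/exported_model \
  --runtime-version=2.3 \
  --framework=tensorflow \
  --python-version=3.7
```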
I set the Cloud Storage path to the directory in the bucket containing the files listed above and proceeded to create the Version for my Model. After some time, Version creation fails with this error:
Create Version failed. Bad model detected with error: "Failed to load model: Loading servable: {name: default version: 1} failed: Not found: Op type not registered 'DecodeProtoSparseV2' in binary running on localhost. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.\n\n (Error code: 0)"
I'm kind of stumped here, as I thought this was the way to use an ML model generated from BigQuery. Is there anything wrong with these steps? Is it even possible to deploy such a model for online prediction currently? If not, is there a way to convert the model so that it can be deployed? Any help would be appreciated!