
I have trained a model using TensorFlow and need to use Google AI Platform to serve predictions.

AI Platform requires the model to be in the 'SavedModel' format before I can upload it to the cloud and serve predictions.

How do I convert the model to the specified 'SavedModel' format?

Also, are there any end-to-end tutorials that walk through this process?

When you define your model, you have to define an exporter; most of the time it points to a storage bucket. Do you have this? - guillaume blaquiere
If you need more specific help, please post the code you are using to save it now. - Lak
Can I ask you @guillaumeblaquiere where you define such an exporter? In the pipeline config file? - Patrick
With TensorFlow, you define it in your training loop. But you don't mention that you use TensorFlow. And why are you talking about a pipeline? Can you clarify your question/context with a code sample? - guillaume blaquiere

1 Answer


In a standard Estimator training loop, you should have code like this at the end:

.....
def train_and_evaluate(output_dir, hparams):
    get_train = read_dataset(hparams['train_data_path'],
                             tf.estimator.ModeKeys.TRAIN,
                             hparams['train_batch_size'])
    get_valid = read_dataset(hparams['eval_data_path'],
                             tf.estimator.ModeKeys.EVAL,
                             1000)
    # Checkpoints are written to output_dir every save_checkpoint_steps steps.
    estimator = tf.estimator.Estimator(model_fn=sequence_regressor,
                                       params=hparams,
                                       config=tf.estimator.RunConfig(
                                           save_checkpoints_steps=
                                           hparams['save_checkpoint_steps']),
                                       model_dir=output_dir)
    train_spec = tf.estimator.TrainSpec(input_fn=get_train,
                                        max_steps=hparams['train_steps'])
    # The LatestExporter writes a timestamped SavedModel (built from
    # serving_input_fn) to output_dir/export/exporter/ after each evaluation.
    exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
    eval_spec = tf.estimator.EvalSpec(input_fn=get_valid,
                                      steps=None,
                                      exporters=exporter,
                                      start_delay_secs=hparams['eval_delay_secs'],
                                      throttle_secs=hparams['min_eval_frequency'])
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
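
The serving_input_fn passed to the exporter defines the inputs your deployed model will accept. If you don't have one yet, here is a minimal sketch, assuming a single float feature named 'values' (the name and shape are placeholders; match them to your model's features):

import tensorflow as tf  # TensorFlow 1.x

def serving_input_fn():
    # Placeholders for the raw tensors the deployed model will receive.
    # 'values' and its shape are assumptions; adapt them to your model.
    feature_placeholders = {
        'values': tf.placeholder(tf.float32, [None, None])
    }
    features = feature_placeholders.copy()
    return tf.estimator.export.ServingInputReceiver(features,
                                                    feature_placeholders)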

especially these two parts:

estimator = tf.estimator.Estimator(model_fn=sequence_regressor,
                                   params=hparams,
                                   config=tf.estimator.RunConfig(
                                       save_checkpoints_steps=
                                       hparams['save_checkpoint_steps']),
                                   model_dir=output_dir)

and

exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)

The RunConfig specifies after how many steps (save_checkpoints_steps) a checkpoint is written to output_dir. The SavedModel itself is produced by the exporter: each time an evaluation runs, the LatestExporter uses serving_input_fn to write a timestamped SavedModel under output_dir/export/exporter/, and that is the directory you point AI Platform at.
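
Once training finishes, you can sanity-check the export with saved_model_cli (it ships with TensorFlow) and then deploy it from a Cloud Storage bucket. The model name, version, bucket path, timestamp, and runtime version below are placeholders to adapt to your setup:

saved_model_cli show --dir ${OUTPUT_DIR}/export/exporter/${TIMESTAMP} --all

gcloud ai-platform models create my_model --regions us-central1
gcloud ai-platform versions create v1 \
    --model my_model \
    --origin gs://my-bucket/model/export/exporter/${TIMESTAMP} \
    --runtime-version 1.15 \
    --python-version 3.7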

Do you have something like this in your code?