
The documentation on how to use SageMaker estimators is scattered, sometimes obsolete, and sometimes incorrect. Is there a one-stop location that gives a comprehensive view of how to use the SageMaker SDK Estimator to train and save models?


Answer

There is no single resource from AWS that provides a comprehensive view of how to use the SageMaker SDK Estimator to train and save models.

Alternative Overview Diagram

Here is a diagram and a brief explanation giving an overview of how the SageMaker Estimator runs a training job.

  1. SageMaker sets up a docker container for a training job where:

    • Environment variables are set as described in SageMaker Docker Container Environment Variables.
    • Training data is set up under /opt/ml/input/data.
    • Training script code is set up under /opt/ml/code.
    • The /opt/ml/model and /opt/ml/output directories are set up to store training outputs.
/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json  <--- From Estimator hyperparameter arg
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>        <--- From Estimator fit method inputs arg
│           └── <input data>
├── code
│   └── <code files>              <--- From Estimator source_dir arg
├── model
│   └── <model files>             <--- Location to save the trained model artifacts
└── output
    └── failure                   <--- Training job failure logs
  2. The SageMaker Estimator fit(inputs) method executes the training script. The Estimator hyperparameters are passed to the script as command-line arguments, and the fit method inputs are made available as data channels.

  3. The training script saves the model artifacts under /opt/ml/model once the training is completed.

  4. SageMaker archives the artifacts under /opt/ml/model into model.tar.gz and saves it to the S3 location specified by the output_path Estimator parameter.

  5. You can set the Estimator metric_definitions parameter to extract model metrics from the training logs and monitor the training progress in the SageMaker console (see the sketch below).

(Overview diagram: how the SageMaker Estimator runs a training job.)
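
A minimal sketch of this flow with the SageMaker Python SDK (v2) might look like the following. The bucket, role ARN, script names, framework versions, and metric regex are placeholders, not values taken from any AWS document.

    # Illustrative sketch: define an Estimator and run a training job with fit().
    from sagemaker.tensorflow import TensorFlow

    estimator = TensorFlow(
        entry_point="train.py",                                # training script
        source_dir="src",                                      # copied to /opt/ml/code
        role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder IAM role
        instance_count=1,
        instance_type="ml.m5.xlarge",
        framework_version="2.8",                               # illustrative versions
        py_version="py39",
        hyperparameters={"epochs": 10, "batch-size": 64},      # -> hyperparameters.json
        output_path="s3://my-bucket/output",                   # where model.tar.gz is saved
        metric_definitions=[                                   # regexes applied to training logs
            {"Name": "train:loss", "Regex": "loss: ([0-9\\.]+)"},
        ],
    )

    # Each key of inputs becomes a channel under /opt/ml/input/data/<channel_name>.
    estimator.fit(inputs={"training": "s3://my-bucket/data/train"})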

I believe AWS needs to stop mass-producing verbose, redundant, scattered, and obsolete documents. AWS needs to understand that a picture is worth a thousand words.

Center the documentation around diagrams and piece the parts together in context, with a clear objective to achieve.


Problem

AWS documentation needs serious re-design and re-structuring. Just to understand how to train and save a model, we are forced to go through dozens of scattered, fragmented, verbose, redundant documents, which are often obsolete, incomplete, and sometimes incorrect.

It is well-summarized in Why I think GCP is better than AWS:

It’s not that AWS is harder to use than GCP, it’s that it is needlessly hard; a disjointed, sprawl of infrastructure primitives with poor cohesion between them.

A challenge is nice, a confusing mess is not, and the problem with AWS is that a large part of your working hours will be spent untangling their documentation and weeding through features and products to find what you want, rather than focusing on cool interesting challenges.

The SageMaker team, in particular, keeps changing implementations without updating the documents. The roll-out was also inconsistent; for example, SDK version 2 was rolled out to SageMaker notebook instances without any announcement, making the AWS examples on GitHub incompatible, whereas SageMaker Studio still had SDK 1, so code worked in Studio but not in a notebook instance.

It is mind-boggling, even insane, that we have to go through so many documents to understand how to use the SageMaker SDK Estimator for training.

Documents for Model Training

This document gives a 20,000-foot overview of how SageMaker training works but does not give any clue what to do.

This document gives an overview of how SageMaker training works. However, it is not up-to-date, as it is based on SageMaker Containers, which is obsolete.

WARNING: This package has been deprecated. Please use the SageMaker Training Toolkit for model training and the SageMaker Inference Toolkit for model serving.

This document lays out the steps for training.

The Amazon SageMaker Python SDK provides framework estimators and generic estimators to train your model while orchestrating the machine learning (ML) lifecycle accessing the SageMaker features for training and the AWS infrastructures

To train a model by using the SageMaker Python SDK, you:

  • Prepare a training script
  • Create an estimator
  • Call the fit method of the estimator
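
A hedged, minimal skeleton of the "prepare a training script" step might look like this; the argument names and the commented-out training logic are illustrative assumptions, not taken from the documents above.

    # Illustrative training-script skeleton for script mode.
    import argparse
    import os

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        # Hyperparameters set on the Estimator arrive as command-line arguments (as strings).
        parser.add_argument("--epochs", type=int, default=10)
        # SageMaker injects these locations as environment variables.
        parser.add_argument("--model-dir", type=str,
                            default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
        parser.add_argument("--train", type=str,
                            default=os.environ.get("SM_CHANNEL_TRAINING", "/opt/ml/input/data/training"))
        args = parser.parse_args()

        # ... load data from args.train and train the model here ...

        # Save the trained model under /opt/ml/model so SageMaker archives it to S3.
        os.makedirs(args.model_dir, exist_ok=True)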

Finally, this document gives concrete steps and ideas. However, it is still missing comprehensive details about the environment variables, the directory structure in the SageMaker docker container, the S3 location for uploading code and placing data, the S3 location where the trained model is saved, etc.

This document focuses on the TensorFlow Estimator implementation steps. Use the Training a TensorFlow Model on MNIST GitHub example alongside it to follow the actual implementation.

Documents for Passing Parameters and Data Locations

This section explains how SageMaker makes training information, such as training data, hyperparameters, and other configuration information, available to your Docker container.

This document finally gives an idea of how parameters and data are passed around but, again, it is not comprehensive.

This documentation is marked as deprecated but is the only document that explains the SageMaker environment variables.

IMPORTANT ENVIRONMENT VARIABLES

  • SM_MODEL_DIR
  • SM_CHANNELS
  • SM_CHANNEL_{channel_name}
  • SM_HPS
  • SM_HP_{hyperparameter_name}
  • SM_CURRENT_HOST
  • SM_HOSTS
  • SM_NUM_GPUS

List of environment variables provided by SageMaker Containers

  • SM_NUM_CPUS
  • SM_LOG_LEVEL
  • SM_NETWORK_INTERFACE_NAME
  • SM_USER_ARGS
  • SM_INPUT_DIR
  • SM_INPUT_CONFIG_DIR
  • SM_OUTPUT_DATA_DIR
  • SM_RESOURCE_CONFIG
  • SM_INPUT_DATA_CONFIG
  • SM_TRAINING_ENV
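
As a hedged illustration (the parsing below is a typical pattern, not quoted from the documentation), a training script can consume these variables like this:

    # Illustrative snippet: reading SageMaker-provided environment variables.
    import json
    import os

    model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
    channels = json.loads(os.environ.get("SM_CHANNELS", "[]"))        # e.g. ["training"]
    hyperparameters = json.loads(os.environ.get("SM_HPS", "{}"))      # values arrive as strings
    hosts = json.loads(os.environ.get("SM_HOSTS", '["algo-1"]'))      # hosts in the training cluster
    current_host = os.environ.get("SM_CURRENT_HOST", "algo-1")
    num_gpus = int(os.environ.get("SM_NUM_GPUS", "0"))

    print(f"channels={channels} hps={hyperparameters} host={current_host}/{hosts} gpus={num_gpus}")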

Documents for SageMaker Docker Container Directory Structure

/opt/ml
├── input
│   ├── config
│   │   ├── hyperparameters.json
│   │   └── resourceConfig.json
│   └── data
│       └── <channel_name>
│           └── <input data>
├── model
│   └── <model files>
└── output
    └── failure

This document explains the directory structure and purpose of each directory.

The input

  • /opt/ml/input/config contains information to control how your program runs. hyperparameters.json is a JSON-formatted dictionary of hyperparameter names to values. These values will always be strings, so you may need to convert them. resourceConfig.json is a JSON-formatted file that describes the network layout used for distributed training. Since scikit-learn doesn’t support distributed training, we’ll ignore it here.
  • /opt/ml/input/data/<channel_name>/ (for File mode) contains the input data for that channel. The channels are created based on the call to CreateTrainingJob but it’s generally important that channels match what the algorithm expects. The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure.
  • /opt/ml/input/data/<channel_name>_<epoch_number> (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch.
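
As an illustrative sketch (the channel name "training" and the hyperparameter names are assumptions), a script can read these files and directories directly:

    # Illustrative sketch: reading the config files and channel data under /opt/ml.
    import json
    import os

    prefix = "/opt/ml"

    # Hyperparameter values are always strings, so convert them as needed.
    with open(os.path.join(prefix, "input/config/hyperparameters.json")) as f:
        hyperparameters = json.load(f)
    epochs = int(hyperparameters.get("epochs", "10"))

    # File-mode input data for a channel named "training".
    training_dir = os.path.join(prefix, "input/data/training")
    training_files = sorted(os.path.join(training_dir, name) for name in os.listdir(training_dir))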

The output

  • /opt/ml/model/ is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the DescribeTrainingJob result.
  • /opt/ml/output is a directory where the algorithm can write a file failure that describes why the job failed. The contents of this file will be returned in the FailureReason field of the DescribeTrainingJob result. For jobs that succeed, there is no reason to write this file as it will be ignored.
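
A hedged sketch of how a training script might use the failure file (the train() entry point is hypothetical):

    # Illustrative sketch: write /opt/ml/output/failure so the reason shows up
    # in the FailureReason field of the DescribeTrainingJob result.
    import sys
    import traceback

    try:
        train()  # hypothetical training entry point
    except Exception:
        with open("/opt/ml/output/failure", "w") as f:
            f.write(traceback.format_exc())
        sys.exit(1)  # a non-zero exit code marks the training job as failed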

However, this document is not up-to-date, as it is based on SageMaker Containers, which is obsolete.

Documents for Model Saving

The information on where the trained model is saved and in what format is fundamentally missing. The training script needs to save the model under /opt/ml/model, and the format and sub-directory structure depend on the framework, e.g. TensorFlow or PyTorch. This is because SageMaker deployment uses framework-dependent model serving, e.g. TensorFlow Serving for the TensorFlow framework.

This is not clearly documented and causes confusion. The developer needs to know which format to use and under which sub-directory to save the model.

To use TensorFlow Estimator training and deployment:

Because we’re using TensorFlow Serving for deployment, our training script saves the model in TensorFlow’s SavedModel format.

    # Save the model
    # A version number is needed for the serving container
    # to load the model
    version = "00000000"
    ckpt_dir = os.path.join(args.model_dir, version)
    if not os.path.exists(ckpt_dir):
        os.makedirs(ckpt_dir)
    model.save(ckpt_dir)

The code saves the model in /opt/ml/model/00000000 because this is for TensorFlow Serving.

The save-path follows a convention used by TensorFlow Serving where the last path component (1/ here) is a version number for your model - it allows tools like Tensorflow Serving to reason about the relative freshness.

To load our trained model into TensorFlow Serving we first need to save it in SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable" we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
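
For comparison, a commonly used pattern for the PyTorch framework is sketched below; the stand-in model, the model.pth file name, and the assumption that a matching model_fn in the inference code loads it are mine, not from the document.

    # Illustrative sketch: saving a PyTorch model under SM_MODEL_DIR.
    import os
    import torch

    model = torch.nn.Linear(10, 1)  # stand-in for the trained model
    model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
    torch.save(model.state_dict(), os.path.join(model_dir, "model.pth"))
    # The inference side then provides a model_fn that rebuilds the model and loads this state_dict.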

Documents for API

Basically, the SageMaker SDK Estimator implements the CreateTrainingJob API for the training part. Hence, it is better to understand how the API is designed and what parameters need to be defined; otherwise, working with Estimators is like walking in the dark.
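
For reference, a hedged sketch of the same kind of training job expressed directly against the CreateTrainingJob API via boto3 is shown below; the job name, role ARN, image URI, and S3 paths are placeholders.

    # Illustrative sketch: the low-level CreateTrainingJob call that the Estimator wraps.
    import boto3

    sagemaker_client = boto3.client("sagemaker")
    sagemaker_client.create_training_job(
        TrainingJobName="my-training-job",                         # placeholder name
        RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",    # placeholder IAM role
        AlgorithmSpecification={
            "TrainingImage": "<training container image URI>",     # framework or custom image
            "TrainingInputMode": "File",
        },
        HyperParameters={"epochs": "10"},                          # values are strings
        InputDataConfig=[{
            "ChannelName": "training",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://my-bucket/data/train",
                    "S3DataDistributionType": "FullyReplicated",
                },
            },
        }],
        OutputDataConfig={"S3OutputPath": "s3://my-bucket/output"},
        ResourceConfig={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )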