I am trying to create my own custom SageMaker Framework that runs a custom Python script to train an ML model using the entry_point parameter.
Following the Python SDK documentation (https://sagemaker.readthedocs.io/en/stable/estimators.html), I wrote the simplest code I could to run a training job, just to see how it behaves and how the SageMaker Framework class works.
My problem is that I don't know how to properly build my Docker container so that it runs the entry_point script.
I added a train.py script to the container that only logs the folder and file paths as well as the variables in the container's environment.
I was able to run the training job, but I couldn't find any reference to the entry_point script, either in the environment variables or in the files inside the container (see the diagnostic sketch after the code listings below).
Here is the code I used:
- Custom SageMaker Framework Class:
from sagemaker.estimator import Framework


class Doc2VecEstimator(Framework):

    def create_model(self, **kwargs):
        pass
- train.py:
import argparse
import os
from datetime import datetime


def log(*_args):
    print('[log-{}]'.format(datetime.now().isoformat()), *_args)


def listdir_rec(path):
    ls = os.listdir(path)
    print(path, ls)
    for ls_path in ls:
        if os.path.isdir(os.path.join(path, ls_path)):
            listdir_rec(os.path.join(path, ls_path))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--epochs', type=int, default=5)
    parser.add_argument('--debug_size', type=int, default=None)

    # # I commented out the lines below since I haven't configured the environment variables in my container.
    # # SageMaker-specific arguments. Defaults are set in the environment variables.
    # parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    # parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    # parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])

    args, _ = parser.parse_known_args()

    log('Received arguments {}'.format(args))
    log(os.environ)
    listdir_rec('.')
- Dockerfile:
FROM ubuntu:18.04

RUN apt-get -y update \
    && apt-get install -y --no-install-recommends \
        wget \
        python3 \
        python3-pip \
        nginx \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*

RUN pip3 install --upgrade pip setuptools \
    && pip3 install \
        numpy \
        scipy \
        scikit-learn \
        pandas \
        flask \
        gevent \
        gunicorn \
        joblib \
        pyAthena \
        pandarallel \
        nltk \
        gensim \
    && rm -rf /root/.cache

ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE

COPY train.py /train.py

ENTRYPOINT ["python3", "-u", "train.py"]
- Training Job Execution Script:
framework = Doc2VecEstimator(
    image_name=image,
    entry_point='train_doc2vec_model.py',
    output_path='s3://{bucket_prefix}'.format(bucket_prefix=bucket_prefix),
    train_instance_count=1,
    train_instance_type='ml.m5.xlarge',
    train_volume_size=5,
    role=role,
    sagemaker_session=sagemaker_session,
    base_job_name='gensim-doc2vec-train-100-epochs-test',
    hyperparameters={
        'epochs': '100',
        'debug_size': '100',
    },
)

framework.fit(s3_input_data_path, wait=True)
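For completeness, here is an extra diagnostic I was planning to add to train.py. It is only a sketch: it assumes the standard /opt/ml/input/config layout that SageMaker mounts into training containers, and hyperparameters.json is where I would expect any reference to the entry_point script to show up, if the Framework class passes one at all.
import json
import os

# Hypothetical extra check for train.py (not part of the code above): dump the
# training job config files that SageMaker mounts into the container, assuming
# the standard /opt/ml/input/config layout. If the Framework passes any
# reference to the entry_point script, it should appear in hyperparameters.json.
config_dir = '/opt/ml/input/config'
for name in ('hyperparameters.json', 'inputdataconfig.json', 'resourceconfig.json'):
    path = os.path.join(config_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            print(name, json.load(f))
    else:
        print(name, 'not found')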
I haven't found a way to make the training job run train_doc2vec_model.py. So how do I create my own custom Framework class/container?
Thanks!