
No bias here, but I find it hard to find anything in the AWS documentation; Microsoft Azure is much easier for me.

Here is what I have now:

  • A binary classification app built entirely in Python, with xgboost as the ML model. The xgboost model uses a set of optimized hyperparameters obtained from SageMaker.
  • A SageMaker notebook that launches hyperparameter tuning jobs for xgboost. I then manually copy and paste the resulting hyperparameters into the xgboost model in the Python app to do prediction.

As you can see, the way I do it is far from ideal. What I want to do now is add a piece of code to the Python app that initiates the hyperparameter tuning job in SageMaker automatically and returns the best model as well. That way the hyperparameter tuning is automated and I no longer need to copy and paste.

However, I haven't been able to do that yet. I followed this documentation to install the SageMaker Python SDK. I also have the following code that does XGBoost hyperparameter tuning in a SageMaker notebook:

import time

import boto3
import pandas as pd
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.tuner import ContinuousParameter, IntegerParameter, HyperparameterTuner

# bucket, prefix, region, role and smclient (a boto3 SageMaker client)
# are defined at module level elsewhere in the notebook.

def train_xgb_sagemaker(df_train, df_test):
    # The built-in XGBoost container expects CSV with the label in the first
    # column and no header or index.
    pd.concat([df_train['show_status'], df_train.drop(['show_status'], axis=1)],
              axis=1).to_csv('train.csv', index=False, header=False)
    pd.concat([df_test['show_status'], df_test.drop(['show_status'], axis=1)],
              axis=1).to_csv('validation.csv', index=False, header=False)

    # Upload the CSVs under the prefixes referenced by the input channels below.
    s3 = boto3.Session().resource('s3')
    s3.Bucket(bucket).upload_file('train.csv', '{}/train/train.csv'.format(prefix))
    s3.Bucket(bucket).upload_file('validation.csv', '{}/validation/validation.csv'.format(prefix))

    s3_input_train = sagemaker.s3_input(
        s3_data='s3://{}/{}/train/'.format(bucket, prefix), content_type='csv')
    s3_input_validation = sagemaker.s3_input(
        s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')

    print('train_path: ', s3_input_train)
    print('validation_path: ', s3_input_validation)

    # Hyperparameter tuning of XGBoost in SageMaker
    sess = sagemaker.Session()

    container = get_image_uri(region, 'xgboost', '0.90-1')  # repo version is a string
    xgb = sagemaker.estimator.Estimator(container,
                                        role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(bucket, prefix),
                                        sagemaker_session=sess)

    # Static hyperparameters shared by every training job in the tuning run.
    xgb.set_hyperparameters(eval_metric='auc',
                            objective='binary:logistic',
                            num_round=100,
                            rate_drop=0.3,
                            tweedie_variance_power=1.4)

    # Ranges the tuner is allowed to search over.
    hyperparameter_ranges = {'eta': ContinuousParameter(0, 1),
                             'min_child_weight': ContinuousParameter(1, 10),
                             'alpha': ContinuousParameter(0, 2),
                             'max_depth': IntegerParameter(1, 10),
                             'num_round': IntegerParameter(1, 300)}

    objective_metric_name = 'validation:auc'

    tuner = HyperparameterTuner(xgb,
                                objective_metric_name,
                                hyperparameter_ranges,
                                max_jobs=20,
                                max_parallel_jobs=3)

    tuner.fit({'train': s3_input_train, 'validation': s3_input_validation},
              include_cls_metadata=False)

    # Poll until the tuning job finishes instead of sleeping for a fixed time.
    status = smclient.describe_hyper_parameter_tuning_job(
        HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
    while status == 'InProgress':
        time.sleep(60)
        status = smclient.describe_hyper_parameter_tuning_job(
            HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
    print('Tuning job finished with status: ', status)

    # The best model artifact ends up at
    # 's3://{}/{}/output/{}/output/model.tar.gz'.format(bucket, prefix, tuner.best_training_job())
    return tuner.best_training_job()
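
For reference, what I picture on the app side is roughly the following: call the function above, then pull the winning hyperparameters back into the local xgboost model instead of copy-pasting them. This is only a sketch of the intent, not working code, and the mapping of the SageMaker hyperparameter names onto XGBClassifier arguments is just illustrative:

import boto3
import xgboost

def build_app_model(df_train, df_test):
    # Launch tuning on SageMaker and get the name of the best training job.
    best_job_name = train_xgb_sagemaker(df_train, df_test)

    # DescribeTrainingJob returns that job's hyperparameters as strings.
    sm = boto3.client('sagemaker')
    best = sm.describe_training_job(TrainingJobName=best_job_name)['HyperParameters']

    # Feed the tuned values into the local xgboost model used by the app.
    model = xgboost.XGBClassifier(
        learning_rate=float(best['eta']),
        max_depth=int(best['max_depth']),
        min_child_weight=float(best['min_child_weight']),
        reg_alpha=float(best['alpha']),
        n_estimators=int(best['num_round']),
        objective='binary:logistic')
    model.fit(df_train.drop(['show_status'], axis=1), df_train['show_status'])
    return model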

So the question is: how do I embed this piece of code into my Python app so that I can do everything in one place? Thanks very much for any hints, as I've been stuck on this problem for days!


1 Answer


There is actually a Python SDK call to deploy the best-performing model of a hyperparameter tuning job:

tuner.deploy()

Find the related documentation here
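
For example, something along these lines should give you a predictor you can call straight from your app. The instance count/type and the CSV serialization setup are just an illustration, using the same SDK v1 style as the code in your question:

# Deploy the best model found by the tuning job to a real-time endpoint.
predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

# The built-in XGBoost container expects CSV rows without the label column.
from sagemaker.predictor import csv_serializer
predictor.content_type = 'text/csv'
predictor.serializer = csv_serializer

scores = predictor.predict(df_test.drop(['show_status'], axis=1).values)

# Delete the endpoint when you no longer need it, to stop incurring charges.
predictor.delete_endpoint()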