There is a sample in the GitHub project hosted for the tutorial you mentioned.
It is for Object Detection, but the call is the same for Classification; the difference is in the content of the result (here you have bounding_box items because object detection predicts regions in the image):
import os
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

# ENDPOINT and IMAGES_FOLDER are module-level constants defined earlier in the sample.
def predict_project(prediction_key, project, iteration):
    predictor = CustomVisionPredictionClient(prediction_key, endpoint=ENDPOINT)

    # Open the sample image and get back the prediction results.
    with open(os.path.join(IMAGES_FOLDER, "Test", "test_od_image.jpg"), mode="rb") as test_data:
        results = predictor.predict_image(project.id, test_data, iteration.id)

    # Display the results.
    for prediction in results.predictions:
        print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100),
              prediction.bounding_box.left, prediction.bounding_box.top,
              prediction.bounding_box.width, prediction.bounding_box.height)
See source here: https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/blob/master/samples/vision/custom_vision_object_detection_sample.py#L122
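
For a Classification project the call would look the same, only the result handling changes since the predictions carry no bounding_box. Here is a minimal sketch, assuming the same ENDPOINT, IMAGES_FOLDER and SDK version as the sample above (the function name and "test_image.jpg" are just placeholders):

import os
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

def predict_classification_project(prediction_key, project, iteration):
    predictor = CustomVisionPredictionClient(prediction_key, endpoint=ENDPOINT)

    # Open the sample image and get back the prediction results.
    with open(os.path.join(IMAGES_FOLDER, "Test", "test_image.jpg"), mode="rb") as test_data:
        results = predictor.predict_image(project.id, test_data, iteration.id)

    # Classification results only expose a tag name and a probability.
    for prediction in results.predictions:
        print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))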