
I've been developing an API to serve a machine learning model using Azure Machine Learning (AML) webservice deployments on a Kubernetes target, as outlined here: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#prepare-to-deploy

My particular use case requires more than simple scoring of data through the model. The API needs multiple endpoints to perform different actions related to the model: for example, an endpoint to upload data, one to delete data, one to list existing data, one to score previously uploaded data, one to change preprocessing parameters, and so on.

I can build all of this logic, but I am struggling with the fact that AML web services only expose one endpoint (the service URI ending in "/score"). Is there a way to add more endpoints to an AML service? For example, I would like users to be able to POST, GET, DELETE, and PUT "/data", GET "/predictions", POST, GET, DELETE, and PUT "/parameters", and so on.
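
For context, the entry script behind an AML webservice deployment only defines an init() and a run() function, and everything POSTed to the single scoring URI is dispatched into run(). A rough sketch of a typical score.py (the model name and payload shape here are just placeholders for a scikit-learn style model):

```python
import json

import joblib
from azureml.core.model import Model


def init():
    # Called once when the scoring container starts; load the registered model.
    global model
    model_path = Model.get_model_path("my-model")  # placeholder model name
    model = joblib.load(model_path)


def run(raw_data):
    # Every request to the single scoring URI ("…/score") lands here.
    data = json.loads(raw_data)["data"]
    predictions = model.predict(data)
    return {"predictions": predictions.tolist()}
```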

Is there a way to do this in AML, or is this not the right tool for what I am trying to accomplish? Is there a better solution within Azure that is more suited to my needs?

Thank you!


2 Answers


Azure ML allows controlled rollout/traffic splitting across deployments, but it doesn't directly support the multi-endpoint API design you're describing.

I might need to know more about your use case to make a recommendation. Are you looking at implementing incremental learning? What is the motivation for separate endpoints?

-Andon


What you're describing is a stateful web application, which is more than a REST scoring service. For example, you need logic to maintain IDs for uploaded data: if there are two POST /data calls with different payloads, a later DELETE /data has to operate on the right one. That is well beyond the scope of a single, performance-optimized machine learning scoring service.

I would recommend creating a separate server that holds all of this logic and reaches the Azure Machine Learning service only when you actually need a prediction. You could also build a cache into your service so that Azure ML is called only when new data comes in or the local cache has expired. That will also save you some money on Azure :-)
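
A minimal sketch of that wrapper, using Flask and requests; the scoring URI, key, and the in-memory stores are placeholders for whatever storage and auth you actually use:

```python
import uuid

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

AML_SCORING_URI = "https://<your-aks-service>/score"  # placeholder
AML_KEY = "<your-service-key>"                        # placeholder

data_store = {}        # uploaded datasets, keyed by id (in-memory for brevity)
prediction_cache = {}  # cached AML responses, keyed by dataset id


@app.route("/data", methods=["POST"])
def upload_data():
    data_id = str(uuid.uuid4())
    data_store[data_id] = request.get_json()
    return jsonify({"id": data_id}), 201


@app.route("/data", methods=["GET"])
def list_data():
    return jsonify(list(data_store.keys()))


@app.route("/data/<data_id>", methods=["DELETE"])
def delete_data(data_id):
    data_store.pop(data_id, None)
    prediction_cache.pop(data_id, None)
    return "", 204


@app.route("/predictions/<data_id>", methods=["GET"])
def get_predictions(data_id):
    if data_id not in data_store:
        return jsonify({"error": "unknown data id"}), 404
    # Only call the AML service when there is no cached result for this dataset.
    if data_id not in prediction_cache:
        headers = {
            "Authorization": f"Bearer {AML_KEY}",
            "Content-Type": "application/json",
        }
        resp = requests.post(AML_SCORING_URI,
                             json=data_store[data_id],
                             headers=headers)
        resp.raise_for_status()
        prediction_cache[data_id] = resp.json()
    return jsonify(prediction_cache[data_id])


if __name__ == "__main__":
    app.run(port=5000)
```

With this split, the AML deployment stays a plain, stateless /score endpoint, and the wrapper owns the IDs, parameters, and caching.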