We are currently working on operationalizing an Azure Machine Learning Studio experiment.
In our most recent iteration, one WebJob accepts a queue message, gathers the data used to train the model, and calls the training experiment's web service, which writes a trained model to a blob location.
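For reference, here is a minimal Python sketch of what that training call does, assuming the training experiment is consumed through the classic Batch Execution Service (BES) API (our WebJob itself is not Python). The endpoint URL, API key, connection string, blob paths, and the `input1`/`output1` names are placeholders that depend on how the experiment was published:

```python
import requests

# Placeholders -- take the real values from the training web service's API help page.
JOBS_URL = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/jobs"
API_KEY = "<training-service-api-key>"
STORAGE = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"

headers = {"Authorization": "Bearer " + API_KEY, "Content-Type": "application/json"}

job = {
    "Inputs": {
        # The input name must match what the training experiment exposes.
        "input1": {
            "ConnectionString": STORAGE,
            "RelativeLocation": "training-data/latest.csv",
        }
    },
    "Outputs": {
        # The trained-model output; RelativeLocation is where the .ilearner gets written.
        "output1": {
            "ConnectionString": STORAGE,
            "RelativeLocation": "models/trained-model-20160712.ilearner",
        }
    },
    "GlobalParameters": {},
}

# BES jobs are created and then started in two separate calls.
resp = requests.post(JOBS_URL + "?api-version=2.0", json=job, headers=headers)
resp.raise_for_status()
job_id = resp.json()  # the job id comes back as a JSON string
requests.post(
    "{}/{}/start?api-version=2.0".format(JOBS_URL, job_id), headers=headers
).raise_for_status()
```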
A second WebJob accepts a queue message, pulls the data to be scored by the predictive experiment, looks up the blob path of the trained .ilearner model, and then calls that predictive experiment's web service. The data used to make the predictions is passed in as an input, and the storage account name, account key, and .ilearner path are passed in as global parameters, supplied as Dictionary entries whose keys match the web service parameter names the data scientist provided.
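To make the parameter passing concrete, the predictive call looks roughly like the Python sketch below (again, our WebJob itself is not Python). The endpoint URL, API key, column names, and especially the global parameter keys ("Account Name", "Account Key", "Path to blob") are placeholders; the real keys are whatever web service parameter names the data scientist exposed on the modules that load the data and the trained model:

```python
import requests

# Placeholders -- take the real values from the predictive web service's API help page.
RRS_URL = ("https://<region>.services.azureml.net/workspaces/<workspace-id>"
           "/services/<service-id>/execute?api-version=2.0&details=true")
API_KEY = "<predictive-service-api-key>"

payload = {
    "Inputs": {
        # The input name and column schema must match the predictive experiment.
        "input1": {
            "ColumnNames": ["feature1", "feature2"],
            "Values": [["1.0", "2.0"]],
        }
    },
    "GlobalParameters": {
        # Keys here must match the exposed web service parameter names; the values
        # point the experiment at the freshly trained .ilearner.
        "Account Name": "<storage-account-name>",
        "Account Key": "<storage-account-key>",
        "Path to blob": "models/trained-model-20160712.ilearner",
    },
}

headers = {"Authorization": "Bearer " + API_KEY, "Content-Type": "application/json"}
resp = requests.post(RRS_URL, json=payload, headers=headers)
resp.raise_for_status()
print(resp.json())
```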
Everything appears to work correctly, except that in some cases the predictive experiment fails, and the error message makes it clear that the wrong .ilearner file is being used.
When a nonexistent blob path is passed to the experiment's web service, the error message says there is no such blob, so the web service is at least validating that the .ilearner exists.
The data scientist can run the experiment locally, but he has to rename the .ilearner file when he exports it through PowerShell. Ensuring that each trained model has a unique file name did not resolve the issue.
When I view the files in Azure Storage Explorer, the last-modified dates show that everything is being updated as expected. It's almost as if a cached copy of the .ilearner exists somewhere and isn't being overwritten properly.