Currently we run resource-intensive processing as a Cloud Service with a web role. The service package contains only the frequently changing .NET assemblies. There's also a dependency on a C++ DCOM server, which is about one gigabyte of code and data in total. That DCOM server is packed into an archive and put into blob storage. When a role instance starts, its OnStart() downloads the archive, unpacks it into the local filesystem, and registers the DCOM server; the .NET code then consumes the DCOM server.
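For context, the OnStart() logic looks roughly like this (a minimal sketch; the container/blob names, local resource name, and server binary are illustrative assumptions, not our actual code):

```csharp
public override bool OnStart()
{
    // Download the archived DCOM server (~1 GB) from blob storage.
    var blobClient = CloudStorageAccount
        .Parse(RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"))
        .CreateCloudBlobClient();
    var blob = blobClient
        .GetContainerReference("deployment")              // assumed container name
        .GetBlockBlobReference("dcom-server.zip");        // assumed blob name

    string localRoot = RoleEnvironment.GetLocalResource("DcomServer").RootPath;
    string archivePath = Path.Combine(localRoot, "dcom-server.zip");
    blob.DownloadToFile(archivePath, FileMode.Create);    // the slow part: ~1 GB transfer

    // Unpack and register the C++ DCOM server so the .NET code can consume it.
    // (An out-of-proc server would use "Server.exe /RegServer" instead of regsvr32.)
    ZipFile.ExtractToDirectory(archivePath, localRoot);
    Process.Start("regsvr32.exe", "/s " + Path.Combine(localRoot, "DcomServer.dll"))
           .WaitForExit();

    return base.OnStart();
}
```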
It works, but scaling is quite slow: it takes about two to five minutes between the moment a scale-out request is sent to the Azure Management Service and the moment OnStart() begins to run (and then OnStart() itself takes about one more minute). I've heard that Hyper-V containers are nearly magic for scaling out, in that they scale almost instantly. I've also heard that Azure Service Fabric uses containers to host service instances. So I assume Azure Service Fabric could resolve our slow-scaling issue.
The question is: what can be done about that long-running code in OnStart()? Having each service instance run it defeats the point of containers: they would scale out fast, then get stuck in that initialization code.
Can Azure Service Fabric do something like two-phase initialization, where something equivalent to OnStart() first runs in a single instance while the service is being deployed, and then a fully initialized instance is quickly "cloned" to scale the service out on demand? Can it do anything better for the scenario described?