I’m trying to understand how scaling works with Azure Functions. We’ve been testing with an app that generates 88 messages in a storage queue, which triggers our function. The function is written in C#. It downloads a file and performs some processing on it (eventually it will post the result back, but we aren’t doing that yet for testing purposes). Each message takes about 30 seconds to process (88 × ~30 s ≈ 2,600 seconds of total processing). For testing purposes we loop this 10 times.
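For reference, here’s a stripped-down sketch of the function (run.csx style); the real download and processing code differs, and the names here are illustrative:

```csharp
// run.csx -- simplified sketch, not our exact code.
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

private static readonly HttpClient client = new HttpClient();

public static async Task Run(string queueMessage, TraceWriter log)
{
    // The queue message carries the location of the file to fetch.
    byte[] file = await client.GetByteArrayAsync(queueMessage);

    // Roughly 30 seconds of memory-intensive processing per message.
    Process(file);

    log.Info($"Processed message ({file.Length} bytes)");
}

private static void Process(byte[] file)
{
    // processing details omitted
}
```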
Our ideal scenario would be that, after some warm-up, Azure would automatically scale the function out to handle the messages as expediently as possible, using some algorithm that accounts for spin-up time and so on, or simply scaling out to the number of messages in the backlog, with some sort of cap.
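For what it’s worth, our understanding is that per-instance concurrency for queue triggers is controlled in host.json; something like the below is what we’ve been looking at tuning (the values shown are, as far as we know, the defaults):

```json
{
  "queues": {
    "batchSize": 16,
    "newBatchThreshold": 8,
    "maxDequeueCount": 5
  }
}
```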
Is this how it is supposed to work? We have never been able to get above 7 ‘consumption units’, and it generally takes about 45 minutes to process the queue of messages.
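If I’ve done the math right (assuming the 10 loops mean ~880 messages per test run): 880 × ~30 s ≈ 26,400 s of work, and finishing in 45 minutes (2,700 s of wall time) implies only about 26,400 / 2,700 ≈ 10 messages in flight at any given moment, which seems low for 7 instances each supposedly processing a batch of messages in parallel.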
A couple of other questions re scalability… Our function is a memory-intensive operation; how is memory ‘shared’ across scaled instances of a function? I ask because we are seeing some out-of-memory errors that we don’t normally see. We’ve configured the function for the maximum memory (1,536 MB), and about 2.5% of the operations are failing with an out-of-memory error.
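One thing we’re considering in the meantime is streaming the download to a temp file instead of buffering the whole payload in memory, so concurrent executions on one instance don’t each hold a full copy of the file. A sketch of the change (the HttpClient usage is illustrative, not our exact code):

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

private static readonly HttpClient client = new HttpClient();

// Stream the download straight to a temp file rather than into a byte[],
// so memory per in-flight message stays small and roughly constant.
public static async Task<string> DownloadToTempFileAsync(string fileUrl)
{
    string tempPath = Path.GetTempFileName();
    using (Stream download = await client.GetStreamAsync(fileUrl))
    using (FileStream file = File.Create(tempPath))
    {
        await download.CopyToAsync(file);
    }
    return tempPath; // caller processes the file, then deletes it
}
```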
Thanks in advance; we’re really looking to make this work, as it would allow us to move a lot of our workload off of dedicated Windows VMs on EC2 and onto Azure Functions.