
There are a number of Azure Functions concurrency questions here on SO, but none of them addresses this specifically.

I'm trying to wrap my head around the scaling behavior of an HTTP-triggered Function App (the runtime is Node.js, if that matters). I'm coming from the AWS Lambda world, where each concurrent invocation results in a new Lambda instance:

If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency.

What confuses me about the Azure Function app is this piece from their documentation:

A single function app only scales out to a maximum of 200 instances. A single instance may process more than one message or request at a time though, so there isn't a set limit on number of concurrent executions.

Under which circumstances will a single FA instance process more than one request at a time?

1 Answer


That line probably refers to the following information from the host.json documentation:

The host.json file in the function app allows for configuration of host runtime and trigger behaviors. In addition to batching behaviors, you can manage concurrency for a number of triggers. Often adjusting the values in these options can help each instance scale appropriately for the demands of the invoked functions.

Settings in the host.json file apply across all functions within the app, within a single instance of the function. For example, if you had a function app with two HTTP functions and maxConcurrentRequests set to 25, a request to either HTTP trigger would count towards the shared 25 concurrent requests. When that function app is scaled to 10 instances, the two functions effectively allow 250 concurrent requests (10 instances * 25 concurrent requests per instance).
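For illustration, a minimal host.json sketch that sets this per-instance HTTP throttle might look like the following (this assumes the v2+ host.json schema; on the v1.x runtime the setting sits under a top-level "http" section instead, and the value 25 is only an example):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 25
    }
  }
}
```

With a setting like this, each instance accepts up to 25 in-flight HTTP requests at once, which is exactly the "more than one request at a time" behavior the quoted sentence describes; additional requests are queued until a slot frees up or the platform scales out to more instances.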

Other host configuration options are found in the host.json configuration article.