
I have a current throughput rate of ~11 records/sec (1 event contains 1 record) for my event-triggered Azure Function app. It takes data from Event Hub and posts it to an API endpoint. I have been tweaking the host.json settings (maxBatchSize, prefetchCount; see the snippet after the list below), but they haven't increased the rate.

  • The app runs on .NET, so I can't even try to increase workers manually using FUNCTIONS_WORKER_PROCESS_COUNT.
  • The API endpoint has a throttle of 100 records/sec.
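
For reference, these are the host.json values being tweaked; a minimal sketch assuming the Functions v2+ schema with Event Hubs extension 3.x/4.x (in extension 5.x the equivalent property is maxEventBatchSize directly under extensions.eventHubs):

```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "eventProcessorOptions": {
        "maxBatchSize": 100,
        "prefetchCount": 200
      }
    }
  }
}
```

These are the values used in tests 1 and 2 below; test 3 used 64/128.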

Below are some test results:

| Test | Domain | App Service Plan | Instances | maxBatchSize | prefetchCount | Avg CPU Usage (%) | Data Transferred (records) | Total Time Taken | Throughput Rate (records/sec) |
|------|--------|------------------|-----------|--------------|---------------|-------------------|----------------------------|------------------|-------------------------------|
| 1 | Products | S1 | 1 | 100 | 200 | 100 | 3281 | 0:18:31 | 2.95 |
| 2 | Products | P2V2 | 1 | 100 | 200 | 11.5 | 34264 | 0:48:57 | 11.67 |
| 3 | SalesOrder | P1V2 | 2 | 64 | 128 | 11 (both instances) | 33816 | 0:49:25 | 11.41 |

Appreciate your help here.

Have you tried the (Premium) Consumption tier? – silent
No, because CPU usage is only 11 for the premium instances here, so I didn't feel the need to. Can you please explain how the (Premium) Consumption tier would help? – Gautam
There are only so many parallel functions that can run on one App Service Plan instance. On the Consumption plan it can scale out over many instances at any time. I was just wondering why you didn't include the Consumption tier in your tests. – silent
Right, I got your point about parallel instances. However, I did manually scale out to 3 instances and the throughput rate still remained the same. That is why I didn't move it from the App Service plan to the Consumption plan. – Gautam
Is there any way other than parallel instances to increase the throughput rate here, by tweaking host.json settings? I was under the impression I could increase it that way. – Gautam

1 Answer


One way to increase concurrency on a single instance is to change maxConcurrentRequests. For example, if you had a function app with two HTTP functions and maxConcurrentRequests set to 25, a request to either HTTP trigger would count towards the shared 25 concurrent requests. When that function app is scaled to 10 instances, the ten instances together effectively allow 250 concurrent requests (10 instances * 25 concurrent requests per instance).
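
A minimal sketch of where that setting lives, assuming the Functions v2+ host.json schema (note that maxConcurrentRequests applies to HTTP triggers; for the Event Hub trigger in the question, the analogous per-instance knobs are the eventHubs batch settings shown above):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 25
    }
  }
}
```

With this value each instance processes at most 25 HTTP requests at a time, so 10 instances give roughly 250 concurrent requests overall.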

The second way is to scale out the function app instances themselves. Below is a table of the scaling options provided by Azure Functions:

(missing image: table comparing how the Azure Functions hosting plans scale)
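
On a Dedicated (App Service) plan the instance count is something you set yourself (or via autoscale rules), while the Consumption and Premium plans scale out automatically based on event load. A rough sketch of pinning the instance count in an ARM template, assuming a PremiumV2 plan; the names and API version are illustrative:

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2022-03-01",
  "name": "my-plan",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "P1v2",
    "tier": "PremiumV2",
    "capacity": 3
  }
}
```

The capacity field corresponds to the manual scale-out to 3 instances mentioned in the comments.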