0 votes

I have data going from my system to an Azure IoT Hub. I timestamp the data packet when I send it. Then I have an Azure Function that is triggered by the IoT Hub. In the function I receive the message, read the timestamp, and record how long the data took to reach the function. I also have another program running on my system that listens for data on the IoT Hub and records that time too. Most of the time, the latency measured in the Azure Function is in milliseconds, but sometimes I see a large delay before the function is triggered (I conclude it is the trigger that is slow, because the program that reads from the IoT Hub shows the data reached the hub quickly, with no delay).
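The measurement described above can be sketched as follows. This is a minimal, hypothetical illustration, not the poster's actual code: the packet shape, the `sentAtMs` field name, and the helper names `make_packet` / `measure_latency_ms` are all assumptions, and the real sender/receiver would go through the Azure IoT device and Functions SDKs rather than plain JSON helpers.

```python
import json
import time


def make_packet(payload):
    # Hypothetical sender side: embed a send timestamp (epoch milliseconds)
    # in the packet before it goes to the IoT Hub.
    return json.dumps({"body": payload, "sentAtMs": int(time.time() * 1000)})


def measure_latency_ms(packet_json):
    # Hypothetical receiver side (e.g. inside the triggered function):
    # compare the embedded send timestamp with the current time.
    packet = json.loads(packet_json)
    return int(time.time() * 1000) - packet["sentAtMs"]
```

Note this measures end-to-end delivery (device clock to function clock), so clock skew between the sender and the function host is folded into the number.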

Would anybody know the reasons why an Azure Function might trigger late?

1
what is a large time delay, minutes? Do you have an example invocation id you could share (along with a timestamp of around when it happened)? – Chris Anderson-MSFT

1 Answer

0 votes

Is this the same question that was asked here? https://github.com/Azure/Azure-Functions/issues/711

I'll copy/paste my answer for others to see:

Based on what I see in the logs and your description, I think the latency is caused by a cold start of your function app process. If a function app goes idle for approximately 20 minutes, it is unloaded from memory, and the next trigger initiates a cold start. Basically, the following sequence of events takes place:

  1. The function app goes idle and is unloaded (this happened about 5 minutes before the trigger you mentioned).
  2. You send the new event.
  3. The event eventually gets noticed by our scale controller, which polls for events on a 10-second interval.
  4. Our scale controller initiates a cold-start of your function app. This can add a few more seconds depending on the content of your function app (it was about 6 seconds in this case).

So unfortunately this is a known behavior with the consumption plan. You can read up on this issue here: https://blogs.msdn.microsoft.com/appserviceteam/2018/02/07/understanding-serverless-cold-start/. The blog post also discusses some ways you can work around this if it's problematic for your scenario.
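One commonly suggested workaround (discussed in posts like the one linked above) is to keep the app warm by hitting a cheap endpoint on it more often than the idle timeout. The sketch below is a hypothetical external pinger, not an official mechanism: the interval constants and the `keep_warm` helper are assumptions, and the ~20-minute idle window is the approximate figure from the answer above, not a guaranteed value.

```python
import time
import urllib.request

# Approximate consumption-plan idle window (from the answer above).
IDLE_TIMEOUT_S = 20 * 60
# Ping well inside the window so the app is never unloaded.
PING_INTERVAL_S = IDLE_TIMEOUT_S // 2


def keep_warm(url, once=False):
    # Hypothetical keep-warm loop: periodically hit an inexpensive HTTP
    # endpoint on the function app (e.g. a trivial HTTP-triggered function).
    while True:
        urllib.request.urlopen(url, timeout=10).read()
        if once:
            return
        time.sleep(PING_INTERVAL_S)
```

A timer-triggered function inside the same app achieves the same effect without an external process; either way, note that keeping the app warm trades away some of the pay-per-use benefit of the consumption plan.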