According to the diagram below from https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed-processor, at least four partition key ranges are distributed between two hosts. What I'm struggling to understand in this diagram is the distinction between a host and a consumer. In the context of Azure Functions, would it be true to say that a host is a Function app, whereas a consumer is an active/warm instance?

[Diagram from the linked article: partition key ranges distributed between two hosts]

I'd like to create a setup with N Function apps, each with 0-200 active instances (depending on workload), all reading the Change Feed at the same time. If I use a CosmosDBTrigger with the same connection string and lease container in each app, is this taken care of automatically, or do I need a manual implementation?
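For reference, a minimal sketch of the trigger I mean (database, container, and app setting names are placeholders), using the v3 Functions extension:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedListener
{
    // The same code is deployed to each of the N Function apps, all
    // pointing at the same monitored container and lease container.
    [FunctionName("ChangeFeedListener")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "MyDatabase",        // placeholder
            collectionName: "Monitored",       // placeholder
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",    // shared by all N apps
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"Processing {changes.Count} changes");
    }
}
```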

If you're using the trigger binding, do you really have to bother with load balancing? Won't the Azure Functions runtime take care of that automatically (spawn the required number of function instances based on traffic)? Are you using the Consumption plan or one of the Premium plans? – Kashyap

Consumption. As for load balancing, ideally the runtime should take care of this, but that's what I'm trying to find out. My unique case involves running multiple Function apps (each with the same source code). There is a limit of 200 on the maximum number of instances per app; running multiple apps avoids that limit, and it will also help with rolling out prod updates incrementally. – user246392

Now I see your pain spread across all these posts. :-) – Kashyap

I wish MS understood me as fast as you did. :) – user246392

Very curious case. I don't really have an answer for you, but have you already done a PoC to see if this works? This is like multiple consumers (Function Apps) consuming change messages from the same Feed, isn't it? Does that even work? Or are you building your own layer between the Feed and the Functions to split the Feed into multiple feeds (one per Function App)? – Kashyap

1 Answer


The documentation you linked is mainly for the Change Feed Processor, but the Azure Functions binding actually runs the Change Feed Processor underneath.

When using the CFP directly, this is perhaps easier to understand because you are in control of the instances and their distribution, but I'll try to map it to Functions.

The document mentions a deployment unit concept:

A single change feed processor deployment unit consists of one or more instances with the same processorName and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.

For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
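To make the deployment unit concrete in CFP terms, here is a minimal sketch (container names and the business flow are placeholders), assuming the .NET SDK v3 ChangeFeedProcessorBuilder API; every instance running this code with the same processorName and lease container belongs to the same deployment unit:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class DeploymentUnitSample
{
    public static ChangeFeedProcessor Build(Container monitored, Container leases)
    {
        return monitored
            // Same processorName + same lease container = one deployment unit.
            .GetChangeFeedProcessorBuilder<dynamic>("businessFlowA", HandleChangesAsync)
            // Unique per instance; leases get balanced across instance names.
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(leases)
            .Build();
    }

    private static Task HandleChangesAsync(
        IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
    {
        // Placeholder for this deployment unit's business flow.
        return Task.CompletedTask;
    }
}
```

A second deployment unit would use a different processorName (or a different lease container) and would independently receive every change.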

The deployment unit in Functions is the Function App. One Function App can span many instances, so each instance/host within that Function App deployment will act as an available host/consumer.

Further down, the article talks about dynamic scaling, and what it says is basically that, within a deployment unit (Function App), the leases get evenly distributed. So if you have 20 leases and 10 Function App instances, each instance will own 2 leases and process them independently of the other instances.

One important note in that article: scaling gives you a larger CPU pool, but not necessarily higher parallelism.

As the documentation mentions, even on a single instance, the CFP reads and processes each lease it owns on an independent Task. The problem is that all this parallel processing shares the same CPU, so adding more instances helps if you currently see a CPU/thread bottleneck on the instance.

Now, in your example, you want to have N Function Apps, each one, I assume, doing something different: basically, microservice deployments that would all trigger on any change but do a different task or fire a different business flow.

This other article covers that. Basically, you can either have each Function App use a separate lease collection (keeping the monitored collection the same), or share the lease collection but use a different LeaseCollectionPrefix for each Function App deployment, as sketched below. If the number of Function Apps sharing the lease collection is high, check the RU usage on the lease collection, as you might need to increase it (there is a note about this in the article).
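As a sketch of the shared-lease-collection option (again, names are placeholders), each Function App deploys its own trigger with a unique prefix:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BusinessFlowA
{
    [FunctionName("BusinessFlowA")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "MyDatabase",        // placeholder
            collectionName: "Monitored",       // same monitored collection
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",    // shared lease collection
            LeaseCollectionPrefix = "flowA",   // unique per Function App
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"FlowA processing {changes.Count} changes");
    }
}
```

A second Function App would deploy the same trigger with LeaseCollectionPrefix = "flowB"; each prefix acts as an independent deployment unit, so both apps receive every change while sharing one lease collection.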