
I know that it is possible to create a Docker container based on the Azure Functions runtime. An example of this process is described in this article.
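For context, the way I built such a container was roughly this (a minimal sketch using the Azure Functions Core Tools; the project name, Python worker and registry below are just placeholders):

```
# Scaffold a Functions project plus a Dockerfile based on the official
# Azure Functions runtime base images (mcr.microsoft.com/azure-functions/...)
func init MyFunctionsProject --worker-runtime python --docker
cd MyFunctionsProject

# Build and push the image to a registry of your choice
docker build -t myregistry.azurecr.io/myfunctionsapp:v1 .
docker push myregistry.azurecr.io/myfunctionsapp:v1
```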

The benefit is that Azure Functions can be used anywhere - I could deploy the container to AWS if I wanted.

But here's where it becomes unclear to me: When you create a new Functions app in the Azure portal, there's a switch labeled “Publish” that lets you select either “Code” or “Docker Container”.

If I select "Docker Container", I can configure a Docker image to be used. This is documented in Microsoft's docs.
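If I understand the docs correctly, wiring a Function App to such an image looks roughly like this with the Azure CLI (the resource names are placeholders, and the exact flag names may differ across CLI versions):

```
# Create a Linux Premium plan and a Function App that runs a custom container
# (resource group, plan, app, storage account and image names are hypothetical)
az functionapp plan create --resource-group my-rg --name my-plan --sku EP1 --is-linux

az functionapp create --resource-group my-rg --plan my-plan \
  --name my-containerized-func \
  --storage-account mystorageacct \
  --deployment-container-image-name myregistry.azurecr.io/myfunctionsapp:v1
```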

My questions are:

  1. Why would I want to deploy a Docker container that contains the Functions runtime into a Functions App, instead of just deploying it to Azure Container Instances?
  2. How does the container approach affect scaling? Who is responsible for scheduling and executing the functions? The runtime in the container, or the Functions runtime on Azure?

1 Answer

  1. There are a couple of advantages to using a Docker container:

    • No sandbox (the sandbox applies to Windows plans only), so none of its limitations
    • Guaranteed to work, since the same image is used for tests, staging and production (not that a normal deployment won't work, but there can be surprises, like a different version of Node.js, for example)
    • For languages like Python, there are cases where external dependencies need to be built (C++ libraries, etc.), and with containers you can guarantee that everything works as expected, since the container has already been tested (and only built once)
    • Auto scale. When deployed to Azure Container Instances, your function app won't have the same dynamic scaling as when it is deployed to a Function App.
  2. There are two main things to understand: invocations and instances.

    • Each instance, i.e., the Functions runtime and in this case a container, can handle many invocations, depending on the CPU/RAM it has
    • The number of instances is scaled up by Azure based on the events coming in, like HTTP requests, queue messages, etc.
    • When running in Kubernetes, scaling is handled by solutions like KEDA / Knative / HPA or similar (a rough sketch of this follows below). Also, using AKS Virtual Nodes (note the limitations), you can scale without having to add "real" nodes to your AKS cluster.
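As a rough illustration of that last point, the Core Tools can install KEDA and generate the Kubernetes resources (including the ScaledObject that drives event-based scaling) for a containerized function app. The names below are placeholders, and the exact commands may vary by Core Tools version:

```
# Install KEDA into the cluster (it handles event-driven scale out/in of function pods)
func kubernetes install --namespace keda

# Generate and apply a Deployment plus a KEDA ScaledObject for the containerized function app
func kubernetes deploy --name my-containerized-func \
  --image-name myregistry.azurecr.io/myfunctionsapp:v1
```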