
I am building a function app that is triggered by a queue message, reads some input files from Blob Storage, combines them, and writes a new file back to Blob Storage.

Each time the function runs, I see a very high number of file transactions, resulting in unexpected costs. The costs are attributed to "File Write/Read/Protocol Operation Units".

The function has a Queue Trigger binding, three input bindings pointing to Blob Storage, and an output binding pointing to Blob Storage.
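For reference, the bindings are declared in the function's function.json roughly like this (queue name, container names, and blob paths below are placeholders, not my actual values; only one of the three input bindings is shown):

```json
{
  "disabled": false,
  "bindings": [
    {
      "name": "msg",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "jobs",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "inputA",
      "type": "blob",
      "direction": "in",
      "path": "input/{queueTrigger}-a.bin",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "output/{queueTrigger}.bin",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```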

The Function App is running Python (which I know is experimental).

When looking at the metrics of my storage account, I see spikes of up to 50k file transactions each time I run my function. Testing with an empty function triggered by a queue message, I still get 5k file transactions. Normally the function writes its output to the output binding location (which, for Python functions, is a temporary file on the Function App's file share that is then copied back to Blob Storage, I presume).

In the related question Expensive use of storage account from Azure Functions, the high storage costs are suspected to be related to logging. In my case, logging is not enabled in the host.json file, and I've also disabled logging on the storage account. This hasn't resolved the issue.
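Concretely, my host.json disables file logging like this (assuming the Functions 1.x host.json schema, where this setting lives under `logger`):

```json
{
  "logger": {
    "fileLoggingMode": "never"
  }
}
```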

Are these values normal for an output file of 60 KB and input files of around 2 MB? Is this related to the Python implementation, or is it to be expected for all languages? Can I avoid this?

1
Have you solved the problem? I'm having the same issue and couldn't find any solution. I'm using a v2 function app and JavaScript.

1 Answer


The Python implementation in V1 Functions creates inefficiencies that can lead to significant file usage on the function app's file share; this is a known shortcoming. A Python implementation for Functions V2 is in progress and will not have this problem.
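Until then, one possible workaround is to keep only the queue trigger binding and read/write the blobs yourself with the storage SDK, so the blob content never passes through the temp files on the function app's file share. This is a hedged sketch, not the official fix: the container and blob names are placeholders, `combine` is a stand-in for whatever combining logic you actually run, and it assumes the legacy `azure-storage-blob` (<= 2.x) package that was current for V1 apps:

```python
def combine(parts):
    """Placeholder combine step: concatenate the input blobs' contents."""
    return b"".join(parts)


def run(connection_string):
    # Deferred import so the pure combine() logic above has no SDK dependency.
    # Requires the legacy azure-storage-blob (<= 2.x) package.
    from azure.storage.blob import BlockBlobService

    svc = BlockBlobService(connection_string=connection_string)

    # Hypothetical container and blob names; substitute your own.
    inputs = [
        svc.get_blob_to_bytes("input", name).content
        for name in ("part-a.bin", "part-b.bin", "part-c.bin")
    ]
    svc.create_blob_from_bytes("output", "combined.bin", combine(inputs))
```

The trade-off is that you lose the declarative input/output bindings and must manage the connection string (e.g. via an app setting) yourself, but the blob reads and writes then count only as blob transactions, not file-share operations.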