I am building a Function App that is triggered by a queue message, reads some input files from Blob Storage, combines them, and writes a new file back to Blob Storage.
Each time the function runs, I see a very high number of file transactions, resulting in unexpected costs. The costs show up under "File Write/Read/Protocol Operation Units".
The function has a Queue trigger binding, three input bindings pointing to Blob Storage, and an output binding pointing to Blob Storage.
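The bindings are set up in function.json more or less like this (the binding, container and file names here are illustrative placeholders, not my real ones):

```json
{
  "disabled": false,
  "bindings": [
    {
      "name": "inputMessage",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "input-queue",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "inputBlobA",
      "type": "blob",
      "direction": "in",
      "path": "input-container/file-a.json",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "inputBlobB",
      "type": "blob",
      "direction": "in",
      "path": "input-container/file-b.json",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "inputBlobC",
      "type": "blob",
      "direction": "in",
      "path": "input-container/file-c.json",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "output-container/result.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```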
The Function App is running on Python (which I know is experimental).
When looking at the metrics of my storage account, I see spikes of up to 50k file transactions each time I run the function. Testing with an empty function triggered by a queue message, I still get around 5k file transactions. Normally the function writes its output to the output binding location (which, for a Python function, is a temporary file on the Function App's storage that is then copied back to Blob Storage, I presume).
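My understanding of how the experimental Python worker handles this is that every binding is materialised as a temporary file on the Function App's file share, with the path exposed through an environment variable named after the binding, so the function body is essentially doing this (using the illustrative binding names from above; this is my understanding, not verified):

```python
import os

# As I understand it, the experimental Python worker writes each binding to a
# temporary file on the Function App's file share and exposes its path through
# an environment variable named after the binding.
message = open(os.environ['inputMessage']).read()

# Read the three blob inputs from their temp files.
parts = [open(os.environ[name], 'rb').read()
         for name in ('inputBlobA', 'inputBlobB', 'inputBlobC')]

result = b''.join(parts)  # placeholder for the real combining logic

# Write the result to the output binding's temp file; the runtime then copies
# it back to Blob Storage.
with open(os.environ['outputBlob'], 'wb') as out:
    out.write(result)
```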
In the related question Expensive use of storage account from Azure Functions, the high storage costs are suspected to be related to logging. In my case logging is not enabled in the host.json file, and I've also disabled logging on the storage account. This hasn't resolved the issue.
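To be explicit, the host.json setting I'm referring to is the v1 tracing section, which as far as I know is what controls file logging to the storage account:

```json
{
  "tracing": {
    "consoleLevel": "off",
    "fileLoggingMode": "never"
  }
}
```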
Are these values normal for an output file of 60 KB and an input file of around 2 MB? Is this related to the Python implementation, or is it to be expected for all languages? Can I avoid it?