I have two Durable Functions apps running against the same storage account - one uses the default task hub name, while the other's hub name is specified in host.json.
Each app has a function named "RunOrchestrator", and it seems that when new jobs are added to MyUtilityExecutorHub, their data ends up in the DurableFunctionsHubInstances table - that is, in the backing tables of the other app (DurableFunctionsHub is the default hub name).
This is what the host.json file looks like for the second function app:
{
    "version": "2.0",
    "extensions": {
        "durableTask": {
            "hubName": "MyUtilityExecutorHub"
        }
    }
}
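For reference, I also tried to rule out the literal value being the problem. Durable Functions supports resolving the hub name from an app setting using the `%...%` binding-expression syntax, so a variant like the following should behave identically (the app setting name `TaskHubName` here is just an example I picked):

```json
{
    "version": "2.0",
    "extensions": {
        "durableTask": {
            "hubName": "%TaskHubName%"
        }
    }
}
```

With this variant, the hub name comes from an application setting named `TaskHubName` rather than being hard-coded, which makes it easy to give each deployment slot or environment its own hub.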
Viewing the deployed files in Kudu confirms the second app's host.json is exactly as shown above, so why are the jobs going to the wrong backing storage tables?
Edit: The easy fix in our scenario, so we never have to deal with this again, is to give each function app its own storage account, but I'd still like to get to the bottom of it!