
I have 2 Durable Functions running on the same storage account - one uses the default hub name, while the other has its hub name specified in host.json.

Each Durable Function has a function named "RunOrchestrator", and it seems that when new jobs are added to MyUtilityExecutorHub, their data is being stored in the DurableFunctionsHubInstances table of the other function.

This is what the host.json file looks like for the second function.

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "MyUtilityExecutorHub"
    }
  }
}

The host.json for the second function is as above when viewed in Kudu, so why are the jobs going to the wrong backing storage tables?

Edit: The easy fix in our scenario, for the sake of never having to deal with this again, is to use a separate storage account per function, but I'd like to get to the bottom of it!

Sounds strange. Have you tried specifying the name via an app setting (%hubnamesetting%)? Or could you try specifying it on the OrchestrationClient ([OrchestrationClient(TaskHub = "%MyTaskHub%")]) just to be sure it isn't something with the host.json settings? – Sebastian Achatz
Didn't know you could do it that way. We've mitigated it now with separate storage accounts, but if I get some time I'll put together a test harness to replicate it. Thanks for the alternatives! – David C
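
For reference, a minimal sketch of the attribute approach suggested above, assuming Durable Functions 1.x (where the client binding is OrchestrationClient; in 2.x the equivalent attribute is DurableClient) and a hypothetical HTTP-triggered starter function. The TaskHub value is resolved from an app setting named MyTaskHub via the %...% syntax:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HttpStart
{
    [FunctionName("HttpStart")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        // Pin the task hub on the client binding; %MyTaskHub% is resolved from app settings.
        [OrchestrationClient(TaskHub = "%MyTaskHub%")] DurableOrchestrationClient starter,
        ILogger log)
    {
        // Start RunOrchestrator in the task hub named by the MyTaskHub app setting,
        // independent of whatever hubName host.json resolves to.
        string instanceId = await starter.StartNewAsync("RunOrchestrator", null);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}

Pinning the hub on the client binding like this helps rule out host.json as the culprit, since new instances are created in the hub named by the binding rather than the one the host configuration resolves to.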

1 Answer


From the documentation:

The name is what differentiates one task hub from another when there are multiple task hubs in a shared storage account. If you have multiple function apps sharing a shared storage account, you must explicitly configure different names for each task hub in the host.json files. Otherwise the multiple function apps will compete with each other for messages, which could result in undefined behavior, including orchestrations getting unexpectedly "stuck" in the Pending or Running state.
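
In other words, every function app that shares the storage account must be given a distinct hubName. One way to do that, as also suggested in the comments, is to resolve the name from an application setting so each app (or deployment slot) supplies its own value without editing host.json. A minimal sketch, assuming an app setting named TaskHubName is defined in each function app:

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "%TaskHubName%"
    }
  }
}

With this in place, one app could set TaskHubName to, say, FirstAppHub and the other to MyUtilityExecutorHub, so each app's orchestration state lands in its own set of tables and queues.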