2
votes

I have an Azure Worker Role set up to get information from a few external web services, parse it into several different entity types, and store these entities in an Azure Table Storage. Crucially, most if not all of these entities are each inserted into their own partition in the table.

I am using a class that extends TableServiceContext, with calls to AddObject(EntityToBeInserted) to attach the new entities to the TableServiceContext as they are created. Currently, I then call TableServiceContext.SaveChangesWithRetries(SaveChangesOptions.None) to save these entities to the table in their respective partitions. This all works fine.

My question is: what happens when it doesn't work fine? I can force one or more of the entities to fail to save by making their row and partition keys non-unique, but the exception I catch does not indicate which of the entities failed, only that there was an error in one of them.

How should I save entities to Table Storage from a Worker Role where each entity goes to its own partition (assume there are 2-30 entities being inserted per save call), so that if one or more of these inserts fails I can at least know which one it was? These operations are very time-sensitive, so unfortunately I cannot rely on long-running retry options to wait for the relevant storage nodes to become available again.

Thank you, Alex

1
The answer is that you can't do batch inserts across partitions anyway, so the right method is to track the keys (PartitionKey and RowKey) of the entities you submit for processing, and verify in the async save callback that the transaction completed for each one. — user483679

1 Answer

0
votes

The answer is that you can't do batch inserts across table storage partitions, so you should track the keys (PartitionKey and RowKey) of the entities you insert, and verify each one's success in the callback of the asynchronous table storage insert call.
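
A minimal sketch of this idea, assuming the Storage Client 1.x API (Microsoft.WindowsAzure.StorageClient) and using a fresh context per entity so each failure maps to exactly one (PartitionKey, RowKey) pair; the class name and method are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Services.Client;
using Microsoft.WindowsAzure.StorageClient;

public static class PerEntitySaver
{
    // Saves each entity in its own request so a failure identifies exactly
    // one (PartitionKey, RowKey) pair; returns the keys of failed inserts.
    public static List<Tuple<string, string>> SaveIndividually(
        CloudTableClient tableClient,
        string tableName,
        IEnumerable<TableServiceEntity> entities)
    {
        var failed = new List<Tuple<string, string>>();
        foreach (var entity in entities)
        {
            // A fresh context per entity keeps failures isolated from
            // the other pending inserts.
            var context = tableClient.GetDataServiceContext();
            context.AddObject(tableName, entity);
            try
            {
                context.SaveChangesWithRetries(SaveChangesOptions.None);
            }
            catch (DataServiceRequestException)
            {
                // Conflict (non-unique keys), timeout, etc. —
                // record which entity failed so the caller can react.
                failed.Add(Tuple.Create(entity.PartitionKey, entity.RowKey));
            }
        }
        return failed;
    }
}
```

Since the entities land in different partitions anyway, nothing is lost by issuing separate requests; the per-entity try/catch is what makes each failure attributable.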