2 votes

According to the new scalability targets for Azure, each partition in table storage is limited to 2000 entities/second.

Through parallelism, I have been able to reach batch-insert rates of up to 16,000 entities/second.

For example, on my XtraLarge web role (8 CPUs, 8 cores), I am inserting 6,400 entities (64 separate batch inserts of 100 entities each) across 64 parallel tasks.
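
For illustration, the pattern looks roughly like the sketch below (written here with the Python azure-data-tables SDK; the connection string, table name, and partition key are placeholders, not my actual values):

```python
# Rough sketch: 64 concurrent batch inserts of 100 entities each (6,400 total),
# all against a single PartitionKey. Connection string, table name and keys
# are placeholders.
from concurrent.futures import ThreadPoolExecutor

from azure.data.tables import TableClient

CONN_STR = "<storage-account-connection-string>"   # placeholder
TABLE_NAME = "loadtest"                            # placeholder
PARTITION_KEY = "p1"                               # every entity goes to the same partition

def insert_batch(batch_index: int) -> None:
    """Submit one entity-group transaction of 100 inserts to the same partition."""
    client = TableClient.from_connection_string(CONN_STR, table_name=TABLE_NAME)
    operations = [
        ("create", {
            "PartitionKey": PARTITION_KEY,
            "RowKey": f"{batch_index:02d}-{i:03d}",  # unique within the partition
            "Payload": "x" * 100,                    # small entity, far below the 1 MB maximum
        })
        for i in range(100)                          # 100 operations = batch limit
    ]
    client.submit_transaction(operations)

# 64 parallel tasks, one batch each.
with ThreadPoolExecutor(max_workers=64) as pool:
    list(pool.map(insert_batch, range(64)))
```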

How is this possible? Is 2000 entities/second just the minimum performance expected from a partition?

Was each batch using the same PartitionKey? - Herve Roggero
@Herve: Yes. I was writing to only one partition. - Dave New

1 Answer

4 votes

They are scalability targets, not limits. As you point out, it is the minimum expected performance, not the maximum. I imagine you are finding that at that particular time, on that particular network and hardware, there is little contention for resources from other Azure customers. Also, note the size of your entities: Azure seems to perform well with small entities, and the published targets are for the maximum entity size (1 MB).

Be warned: although the unexpected performance may be useful, don't plan on it always being there. Also make sure that you code for failure (500s and 503s) regardless of how often you hit the limits; otherwise, when the storage service is underperforming, your app may begin to fail.
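
As a rough illustration (a minimal sketch assuming the Python azure-data-tables SDK; the function name, parameters, and back-off policy are mine, not from any particular codebase), handling 500s and 503s with a retry wrapper might look like this:

```python
# Minimal sketch: retry a batch submission on server-side 500/503 responses
# with exponential back-off. Names and the back-off policy are illustrative.
import time

from azure.core.exceptions import HttpResponseError

def submit_with_retry(client, operations, max_attempts=5):
    """Submit an entity-group transaction, backing off on 500/503 errors."""
    for attempt in range(max_attempts):
        try:
            return client.submit_transaction(operations)
        except HttpResponseError as err:
            retryable = err.status_code in (500, 503)
            if retryable and attempt < max_attempts - 1:
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s before the next attempt
            else:
                raise
```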