7
votes

In the documentation it states that Azure Table Storage partitions have a throughput target of 500 operations/second.

If my data is partitioned correctly, would parallel operations on each of these partitions have no effect on each other?

For example, if I had to do expensive full table scans on partition A (maxing out at 500 entities/second), would the performance of any operation occurring on partition B be affected?

Storage accounts have a limit of 5,000 operations/second. Does this essentially mean that I can max out 10 partitions before they start to affect each other's performance?


2 Answers

12
votes

As a general rule, you want to avoid table scans whenever possible. They are very expensive operations (especially if you have a lot of partitions), not so much because they stress the table, but because they have very high aggregate latency (explained below). That said, sometimes there is simply no avoiding it.

We have updated the storage architecture and raised a bunch of the target limits.

http://blogs.msdn.com/b/windowsazure/archive/2012/11/02/windows-azure-s-flat-network-storage-and-2012-scalability-targets.aspx

Each storage account now targets 20,000 operations/second, and each partition 2,000 operations/second.

How partitions interact is a little subtle and depends on how they are being used (which can change over time).

Azure storage has two stages: one set of servers handles partition ranges, and the other handles the actual storage (i.e. the 3 copies). When a table is cold, all of the partitions may be serviced by one server. As partitions are put under sustained stress, the system will begin to automatically spread the workload (i.e. shard) across additional servers. The shards are made on partition boundaries.

For low/medium stress, you may never hit the threshold to shard, or only a minimal number of times. The access pattern also has some impact (if you are appending only, sharding won't help). Random access across all partitions scales by far the best. While the system is rebalancing, you will get 503 responses for a few seconds, and then operations will return to normal.

If you do a table scan, you will actually make multiple round trips to the table. When a query reaches the end of a partition, the response is returned with any data found (or no data if the criteria were not met) and a continuation token. The query is then resubmitted with the token, again and again, until you reach the bottom of the table. This is abstracted by the SDK, but if you made direct REST calls you would see it.
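The loop behind a full scan can be sketched as follows. This is a minimal sketch: `query_segment` is a hypothetical stand-in for the real segmented REST query (the Table service signals the next segment via continuation headers); here it just pages through a local list so the round-trip control flow is visible.

```python
# Fake table: 3 partitions x 4 rows, standing in for real entities.
FAKE_TABLE = [{"PartitionKey": f"P{p}", "RowKey": str(r)}
              for p in range(3) for r in range(4)]

def query_segment(token):
    """Stand-in for one segmented query: returns up to 5 entities
    starting at `token`, plus the next continuation token (or None)."""
    start = token or 0
    page = FAKE_TABLE[start:start + 5]
    next_token = start + 5 if start + 5 < len(FAKE_TABLE) else None
    return page, next_token

def full_scan():
    """Drive the continuation-token loop until the table is exhausted."""
    entities, token = [], None
    while True:
        page, token = query_segment(token)  # each iteration = one round trip
        entities.extend(page)
        if token is None:
            break
    return entities

print(len(full_scan()))  # 12 entities, gathered over 3 round trips
```

The SDK hides this loop, but each iteration is still a separate HTTP request, which is where the aggregate latency of a scan comes from.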

From a table performance perspective, the scan only impacts the partition it is currently scanning.

To speed up a broad query that hits multiple partitions, you could break it up into multiple parallel requests (e.g. one thread per partition) and then coalesce the results in the client. Really it depends on how much data you are getting back, how big the table is, etc.
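That fan-out pattern might look like this sketch, where `fetch_partition` is a hypothetical stand-in for a PartitionKey-filtered query against the table:

```python
from concurrent.futures import ThreadPoolExecutor

# Simulated store: partition key -> entities. In the real service each
# fetch would be a query filtered on PartitionKey.
STORE = {"A": [1, 2, 3], "B": [4, 5], "C": [6]}

def fetch_partition(pk):
    """Stand-in for querying a single partition (one round trip each)."""
    return STORE[pk]

def parallel_query(partition_keys):
    """Fan out one thread per partition, then coalesce in the client."""
    with ThreadPoolExecutor(max_workers=len(partition_keys)) as pool:
        results = pool.map(fetch_partition, partition_keys)
    return [entity for part in results for entity in part]

print(sorted(parallel_query(["A", "B", "C"])))  # [1, 2, 3, 4, 5, 6]
```

Because each thread stays inside one partition, the requests don't contend with each other on the server side, which is what makes the fan-out worthwhile.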

6
votes

Your observations are correct: the performance of each partition is independent. But the performance of table storage is also largely affected by the bandwidth of the VM. If you look at the Azure pricing, there is a column for 'I/O performance', and extra small and small machines have 'Low' and 'Moderate' I/O. So if a machine can only move data at 10 MB/s, the performance of the table storage is largely irrelevant, bearing in mind that virtualised storage (as part of the OS) will also use up this bandwidth.

The storage account limit of 5,000 operations/second means that when you start hitting that level you may get timeouts on some operations. Make sure that you architect for any number of storage accounts; if done correctly up front, it is easy to work around that performance ceiling.

If you think that you may be putting table storage under load, make sure that you code with enough diagnostics to find where problems are, and add transient fault handling to allow for retries.
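A minimal retry sketch along those lines, assuming a hypothetical `flaky` operation that fails twice with a transient error (such as the 503 'Server Busy' mentioned above) before succeeding:

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable storage error (e.g. HTTP 503 Server Busy)."""

def retry(op, attempts=4, base_delay=0.01):
    """Run `op`, retrying with exponential backoff on transient failures."""
    for i in range(attempts):
        try:
            return op()
        except TransientError:
            if i == attempts - 1:
                raise  # out of attempts, surface the error to the caller
            time.sleep(base_delay * 2 ** i)  # back off: 0.01s, 0.02s, 0.04s

calls = {"n": 0}
def flaky():
    """Hypothetical operation: fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("503 Server Busy")
    return "ok"

result = retry(flaky)
print(result, "after", calls["n"], "calls")  # ok after 3 calls
```

The Azure SDKs ship their own retry policies, so in practice you would configure those rather than hand-roll the loop; the sketch just shows the shape of the pattern.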