1 vote

The facts:

  • 1 Azure SQL S0 instance
  • a few tables, one of them containing ~8.6 million rows and one PK

Running a COUNT query on this table takes nearly 30 minutes (!) to complete.
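As clarified in the comments below, the query is a plain count without any WHERE clause, i.e. something like the following (the table name here is a placeholder):

    SELECT COUNT(*) FROM dbo.MyLargeTable;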

Upscaling the instance from S0 to S1 reduces the query time to 13 minutes:

[Screenshot: Result in SQL Server Management Studio after returning the count on S1]

Looking into the Azure Portal (new version), the resource usage monitor shows the following:

[Screenshot: Azure DB resource utilization showing both queries on S0 and S1]

Questions:

  1. Does anyone else consider even 13 minutes ridiculous for a simple COUNT(*)?
  2. Does the second screenshot mean that during the 100% period my instance isn't responding to other requests?
  3. Why are my metrics limited to 100% on both S0 and S1? (See the section "Which Service Tier is Right for My Database?", which states: "These values can be above 100% (a big improvement over the values in the preview that were limited to a maximum of 100).") I'd expect the S0 to be at 150% or so if the quoted statement is true.

I'm interested in other people's experiences with databases of more than 1,000 records or so. At the moment, I don't see how an S*-tier Azure SQL database for 22–55 € per month could fit into an upscaling strategy.

Are you using a "where" condition? Is your query a "select count(*) from table" or a "select count(*) from table where condition"? – Fabrizio Accatino
No, it's just a count(*) without where. – Alexander Schmidt
Indexes and statistics are up to date? – Fabrizio Accatino
Yes, they are. But does this really matter when doing a COUNT(*)? I wouldn't mind if we were talking about seconds, but minutes??? – Alexander Schmidt

1 Answer

2 votes

Azure SQL Database editions provide increasing levels of DTUs from Basic -> Standard -> Premium (CPU, IO, memory and other resources – see https://msdn.microsoft.com/en-us/library/azure/dn741336.aspx). Once your query reaches the DTU limit (100%) in any of these resource dimensions, it will continue to receive resources at that level (but no more), and that may increase the latency of the request. It looks like in your scenario the query is hitting the DTU limit (10 DTUs for S0 and 20 for S1). You can see the individual resource usage percentages (CPU, Data IO or Log IO) by adding these metrics to the same graph, or by querying the DMV sys.dm_db_resource_stats.
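For example, a query like the following, run against the user database itself, shows which resource dimension is saturated (the DMV keeps roughly one hour of history, one row per 15-second interval):

    -- Recent resource consumption, newest first; values are percentages
    -- of the current service tier's limits.
    SELECT TOP (20)
           end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;

If avg_data_io_percent sits at 100% while the count runs, the query is IO-bound at the tier's limit.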

Here is a blog post with more information on appropriately sizing your database's performance level: http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/

To your specific questions:

1) As you have 8.6 million rows, the database needs to scan the index entries to get the count back, so it may be hitting the edition's IO limit here.
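As an aside (not part of the original answer): if an exact, transactionally consistent count is not required, the row count can be read from partition metadata instead of scanning the whole index, which returns in milliseconds regardless of table size:

    -- Approximate row count from metadata (index_id 0 = heap, 1 = clustered index).
    -- The table name is a placeholder.
    SELECT SUM(p.rows) AS approx_row_count
    FROM sys.partitions AS p
    WHERE p.object_id = OBJECT_ID(N'dbo.MyLargeTable')
      AND p.index_id IN (0, 1);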

2) If you have multiple concurrent queries running against your DB, they will be scheduled so that no single request starves the others. However, latencies may increase further for all queries, since you will be hitting the available resource limits.

3) For the older Web/Business editions, you may see metric values going beyond 100% (they are normalized to the limits of an S2 level) because those editions have no specific limits and run in a resource-shared environment alongside other customers' loads. For the new editions, the metrics will never exceed 100%, because the system guarantees you resources up to 100% of that edition's limits, but no more. This provides a predictable, guaranteed amount of resources for your DB, unlike the Web/Business editions, where you may get very little or a lot more at different times depending on the other customer workloads running on the same machine.

Hope this helps. -- Srini