4 votes

I have been having trouble with a database for the last month or so... (it was fine in November). It's on the S0 Standard tier - not even the lowest tier. (Fixed in Update 5.)

Select statements are causing my database to throttle (and even time out). To make sure it wasn't just a problem with my database, I've:

  1. Copied the database... same problem on both (unless the tier size was increased).
  2. Deleted the database and created it again (a blank database) from Entity Framework code-first.

The second one proved more interesting. Now my database has 'no' data, and it still peaks the DTU and makes things unresponsive.

(screenshot: DTU graph pegged at 100%)

Firstly ... is this normal?

I do have more complicated databases at work that use about 10% of the DTU at most at the same level (S0), so I'm perplexed. This is just one user, one database, currently empty, and I can make it unresponsive.

Update 2: From the copy (the one with ~10,000 records), I upgraded it to Standard S2 (potentially 5x more powerful than S0). No problems. Downgraded it to S0 again and:

SET STATISTICS IO ON
SET STATISTICS TIME ON
select * from Competitions -- 6 records here...

SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

(6 row(s) affected)

Table 'Competitions'. Scan count 1, logical reads 3, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 407 ms, elapsed time = 21291 ms.

Am I misunderstanding Azure databases - do they need to keep warming up? If I run the same query again it is immediate. If I close the connection and do it again, it's back to ~20 seconds.
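In case it helps anyone reproduce the diagnosis: the resource breakdown can be pulled from sys.dm_db_resource_stats (one row per 15-second window, covering roughly the last hour), which shows whether CPU, data IO, or log IO is the dimension hitting the cap. A minimal sketch:

-- Recent resource usage for this database; each row covers a 15-second window.
-- Whichever avg_* column sits near 100 is the resource being throttled.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM   sys.dm_db_resource_stats
ORDER BY end_time DESC;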

Update 3: At S1 level, the same query above runs for the first time in ~1 second.

Update 4: S0 level again... first query:

(6 row(s) affected)

Table 'Competitions'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 16 ms, elapsed time = 35 ms.

Nothing is changing on these databases apart from the tier. After roaming around on one of my live sites (different database, schema and data) on S0... it peaked at 14.58% (it's a stats site).
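For reference, a tier change like this can also be issued in T-SQL rather than the portal (the database name below is just a placeholder; the statement is typically run while connected to the master database):

-- Move the database between Standard service objectives (S0/S1/S2).
ALTER DATABASE [MyDatabase]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');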

It's not my best investigation, but I'm tired :D I can give more updates if anyone is curious.

**Update 5: fixed, sort of**

(screenshot: DTU percentage chart - early 100% spikes, later ~1-4% peaks)

The first few 100% spikes were the same table. After updating the schema and removing a geography field (the data in that column was always null), it has moved to the later, smaller peaks of ~1-4%, and the result time is back in the very low milliseconds.
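For anyone checking their own schema for the same issue, a quick way to list any remaining spatial columns (a sketch using the standard catalog views):

-- List tables/columns still using a spatial type.
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS type_name
FROM   sys.columns c
JOIN   sys.tables  t  ON t.object_id = c.object_id
JOIN   sys.types   ty ON ty.user_type_id = c.user_type_id
WHERE  ty.name IN ('geography', 'geometry');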

Thanks for the help, Matt

What load did you put on the database? I can max out any database using no tables at all by sending SELECT NULL in a loop. – usr
From the numbers (CPU time = 407 ms, elapsed time = 21291 ms) it looks like an S0 gets about 2% of a CPU core. It might be throttled to that amount. A P3 is 80x more powerful, which would give it 1.6 cores. Sounds realistic to me. – usr
Thanks usr... load? No connections other than myself and a single direct SQL query against a single table (the example above containing 6 records, but the same occurred with 0 records). I'm trying to be efficient rather than killing it. ;) I have another database on the same server which was built by Orchard CMS (contains a few tenants) and which doesn't use anywhere close to the same DTU (it has never peaked 20% at the same tier). (Orchard isn't the most efficient database design either.) – Matthew
So you are saying that a single query (select * from Competitions) on an empty table causes maxed out DTUs and the output CPU time = 407 ms, elapsed time = 21291 ms? – usr
Or, reproduce the problem in that particular DB, then don't change anything and involve support. You could also try to change small things like rebuilding all indexes to see if that makes the problem go away. – usr

1 Answer

3 votes

The cause of the crippling 100% DTU was a GEOGRAPHY field: http://msdn.microsoft.com/en-gb/library/cc280766.aspx

Removing this from my queries fixed the problem. Removing it from my EF models will hopefully make sure it never comes back.
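For example, instead of letting a SELECT * (or the EF-generated equivalent) pull the geography column back, selecting only the columns actually needed avoided the spike. A sketch with hypothetical column names:

-- Project everything except the geography column (column names are illustrative).
SELECT Id, Name, StartDate
FROM   Competitions;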

I do want to use the geography field in Azure (eventually, and probably not for a few months), so if anyone knows why it was causing an unexpected amount of DTU to be spent on a (currently always null) column, that would be very useful for future knowledge.