3 votes

We have an application which gives every impression of being bottlenecked on poor database response. Neither the database nor the application server is heavily loaded, but the application is spending considerable time making queries via ExecuteReader.

We are measuring two things: batch and statement duration in SQL Server Profiler, and the time between invoking ExecuteReader and disposing it. Consumption of the data is trivial, i.e. simply reading the rows into a list of dictionaries.
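For reference, the timing we capture is essentially the following (a simplified sketch, not our actual data-access code; the class and method names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Diagnostics;

static class QueryTimer
{
    // Times the interval from ExecuteReader() through disposal of the reader,
    // which is the figure we compare against the Duration column in Profiler.
    public static List<Dictionary<string, object>> ReadAll(SqlCommand command, out TimeSpan elapsed)
    {
        var rows = new List<Dictionary<string, object>>();
        var sw = Stopwatch.StartNew();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                var row = new Dictionary<string, object>();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[reader.GetName(i)] = reader.GetValue(i);
                rows.Add(row);
            }
        }
        sw.Stop();
        elapsed = sw.Elapsed;
        return rows;
    }
}
```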

Duration recorded by the Profiler is often considerably less than that recorded by the application. These queries return only a few rows (usually one) so the time taken to consume the resultset should be tiny. But we've seen instances of a 150ms query (as recorded by the profiler) being recorded as over 1700ms by the application.

Environment:

  • It's not our infrastructure.
  • It's live, so we can't attach a profiler/debugger.
  • The application server at least is virtualised. The DB is a cluster.
  • SQL Server 2008 R2 on Windows Server 2008 R2.

An obvious culprit is network latency, but ping indicates that latency from app to DB server is consistently less than 1ms.

The connection is reused for multiple queries, so Open()-time overhead should already have been accounted for separately. I would expect this to rule out authentication delays during ExecuteReader, but perhaps I'm wrong about that?

What other things should we investigate? What other things might ExecuteReader wait upon?

"The connection is reused for multiple queries" - that has me worrying - how or why is the connection being reused? – Damien_The_Unbeliever
Multiple queries in the same NHibernate session, specifically. – Alex Davidson
The connections are never shared between threads and we have MARS turned off (queries are only ever run consecutively). Some parts of the application depend upon temp tables, which are lost when the connection is closed, so retaining the connection across multiple queries is sometimes required. It's rare for any connection to live for more than a few seconds, though. – Alex Davidson
@Damien_The_Unbeliever It should simply be a SqlConnection that is opened, and then multiple SqlCommands are executed using that connection without closing it between each query. This is fine as it is the same thing as multiple batches via GO in SSMS. – Solomon Rutzky

1 Answer

2 votes

we've seen instances of a 150ms query (as recorded by the profiler) being recorded as over 1700ms by the application.

How, exactly, is the app recording the duration? Using Stopwatch, or something of that nature? Or from the client statistics reported by SqlConnection.RetrieveStatistics()?

Have you checked the wait info for the request running the query? The sys.dm_exec_requests DMV has four fields that might be useful here: wait_type, wait_time, last_wait_type, and wait_resource. It is possible that even though the query produces its results quickly inside SQL Server, the server still needs to devote a thread to sending those results back to the caller.
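A sketch of the kind of check I mean (run it while the slow call is in flight; @spid is whatever session_id / SPID the Profiler trace shows for that query):

```sql
-- ASYNC_NETWORK_IO here means SQL Server has the results ready and is
-- waiting on the client/network to consume them.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,      -- milliseconds spent in the current wait
       r.last_wait_type,
       r.wait_resource
FROM sys.dm_exec_requests AS r
WHERE r.session_id = @spid;
```

If wait_type sits on ASYNC_NETWORK_IO for long stretches, the server-side work is effectively done and the missing time is in getting the rows to, or consumed by, the client.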

Also, some things to try just to help narrow down possibilities:

  • turn off connection pooling. In the ConnectionString, use Pooling=false;. This will still allow for multiple queries on the same connection (hence won't interfere with the use of temporary tables). Yes, it will slightly increase overall time as it will require authentication for each connection, but it will rule out something getting "stuck".

  • if possible (and not already doing so), specify CommandBehavior.SequentialAccess in the call to ExecuteReader(). This prevents local caching of the entire row when calling SqlDataReader.Read() which helps when there are large string and/or binary values that you aren't even retrieving, or if there are lots of columns and you are only grabbing a few.
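Both suggestions together look roughly like this (a sketch only; the server, database, table, and column names are placeholders, not from your setup):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class Sketch
{
    static void Main()
    {
        // Pooling=false forces a fresh physical connection per Open(),
        // at the cost of re-authenticating each time.
        string cs = "Data Source=DbCluster;Initial Catalog=AppDb;" +
                    "Integrated Security=SSPI;Pooling=false;";
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "SELECT Id, Name FROM dbo.Widgets WHERE Id = @id;", conn))
            {
                cmd.Parameters.AddWithValue("@id", 1);
                // SequentialAccess streams the row instead of buffering it all;
                // columns must then be read in increasing ordinal order.
                using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    while (reader.Read())
                    {
                        int id = reader.GetInt32(0);
                        string name = reader.GetString(1); // never go back to a lower ordinal
                    }
                }
            }
        }
    }
}
```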