31 votes

Is it possible to get a breakdown of CPU utilization by database?

I'm ideally looking for a Task Manager type interface for SQL server, but instead of looking at the CPU utilization of each PID (like taskmgr) or each SPID (like spwho2k5), I want to view the total CPU utilization of each database. Assume a single SQL instance.

I realize that tools could be written to collect this data and report on it, but I'm wondering if there is any tool that lets me see a live view of which databases are contributing most to the sqlservr.exe CPU load.


8 Answers

89 votes

Sort of. Check this query out:

SELECT total_worker_time/execution_count AS AvgCPU      -- total_worker_time is in microseconds
    , total_worker_time AS TotalCPU
    , total_elapsed_time/execution_count AS AvgDuration
    , total_elapsed_time AS TotalDuration
    , (total_logical_reads + total_physical_reads)/execution_count AS AvgReads
    , (total_logical_reads + total_physical_reads) AS TotalReads
    , execution_count
    -- extract just this statement's text from the batch
    , SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS txt
    , qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY AvgCPU DESC

This will get you the queries in the plan cache, ordered by average CPU per execution. You can run it periodically, say from a SQL Agent job, and insert the results into a table so the data persists across restarts.
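
For example, a minimal sketch of that idea (the table name here is just a placeholder) is a logging table plus an INSERT that a SQL Agent job step runs on a schedule:

-- Hypothetical history table; create it once.
CREATE TABLE dbo.QueryStatsHistory (
    capture_time    DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    total_cpu       BIGINT        NOT NULL,
    total_duration  BIGINT        NOT NULL,
    total_reads     BIGINT        NOT NULL,
    execution_count BIGINT        NOT NULL,
    query_text      NVARCHAR(MAX) NULL
);

-- Job step body: snapshot the plan cache so the numbers survive a restart.
INSERT INTO dbo.QueryStatsHistory (total_cpu, total_duration, total_reads, execution_count, query_text)
SELECT qs.total_worker_time,
       qs.total_elapsed_time,
       qs.total_logical_reads + qs.total_physical_reads,
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;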

When you read the results, you'll probably realize why we can't correlate that data directly back to an individual database. First, a single query can hide its true parent database with tricks like this:

USE msdb
DECLARE @StringToExecute VARCHAR(1000)
SET @StringToExecute = 'SELECT * FROM AdventureWorks.dbo.ErrorLog'
EXEC(@StringToExecute)  -- note the parentheses: this executes the string as dynamic SQL

The query would be executed in MSDB, but it would pull results from AdventureWorks. Where should we assign the CPU consumption?

It gets worse when you:

  • Join between multiple databases (see the sketch after this list)
  • Run a transaction that touches multiple databases, so the locking effort spans all of them
  • Run SQL Agent jobs in MSDB that "work" in MSDB, but back up individual databases
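
For instance, a single cross-database join (the database and table names below are just placeholders) burns CPU that neither database can claim outright:

-- Which database should "own" this CPU: SalesDB or WarehouseDB?
SELECT o.OrderID, p.ProductName
FROM SalesDB.dbo.Orders AS o
INNER JOIN WarehouseDB.dbo.Products AS p
    ON p.ProductID = o.ProductID;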

It goes on and on. That's why it makes sense to performance tune at the query level instead of the database level.

In SQL Server 2008 R2, Microsoft introduced performance management and application management features that let you package a single database into a distributable, deployable DAC pack, and they're promising features to make it easier to manage the performance of individual databases and their applications. It still doesn't do what you're looking for, though.

For more of those, check out the T-SQL repository at Toad World's SQL Server wiki (formerly at SQLServerPedia).

Updated on 1/29 to include total numbers instead of just averages.

15 votes

SQL Server (starting with 2000) installs performance counters, viewable from Performance Monitor (Perfmon).

One of the counter categories (from a SQL Server 2005 install) is SQLServer:Databases, with one instance per database.

The available counters don't include a CPU % utilization counter or anything similar, but there are some rate counters you could use to get a rough estimate. For example, if you have two databases and you measure 20 transactions/sec on database A and 80 transactions/sec on database B, you could estimate that A contributes roughly 20% of the total CPU and B the other 80%.

There are some flaws here, since this assumes all the work being done is CPU-bound, which of course with databases it isn't. But I believe it's a start.
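
If you'd rather read those counters from T-SQL than from Perfmon, sys.dm_os_performance_counters exposes the same data. A rough sketch of the 20%/80% estimate described above (note the counter value is cumulative since the last restart, so strictly you should sample it twice and work with the difference):

-- Share of Transactions/sec per database, as a crude proxy for CPU share.
SELECT instance_name AS database_name,
       cntr_value    AS transactions,
       cntr_value * 100.0 / NULLIF(SUM(cntr_value) OVER (), 0) AS pct_of_total
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Transactions/sec'
  AND instance_name <> '_Total'
ORDER BY pct_of_total DESC;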

6 votes

Here's a query that will show the actual databases causing high load. It relies on the plan cache, which may be flushed frequently in low-memory scenarios (making the query less useful).

-- Top 10 plans by total CPU, tied back to a database via each plan's "dbid" attribute.
select dbs.name, cacheobjtype, total_cpu_time, total_execution_count
from
    (select top 10
        sum(qs.total_worker_time) as total_cpu_time,
        sum(qs.execution_count) as total_execution_count,
        count(*) as number_of_statements,
        qs.plan_handle
    from
        sys.dm_exec_query_stats qs
    group by qs.plan_handle
    order by sum(qs.total_worker_time) desc
    ) a
inner join
    (select plan_handle, pvt.dbid, cacheobjtype
    from (
        select plan_handle, epa.attribute, epa.value, cacheobjtype
        from sys.dm_exec_cached_plans
            outer apply sys.dm_exec_plan_attributes(plan_handle) as epa
        /* where cacheobjtype = 'Compiled Plan' and objtype = 'adhoc' */
        ) as ecpa
    pivot (max(ecpa.value) for ecpa.attribute in ([dbid], [sql_handle])) as pvt
    ) b on a.plan_handle = b.plan_handle
inner join sys.databases dbs on b.dbid = dbs.database_id

1 vote

I think the answer to your question is no.

The issue is that one activity on a machine can cause load on multiple databases. If I have a process that is reading from a config DB, logging to a logging DB, and moving transactions in and out of various DBs based on type, how do I partition the CPU usage?

You could divide CPU utilization by the transaction load, but that is again a rough metric that may mislead you. How would you divide transaction log shipping from one DB to another, for instance? Is the CPU load in the reading or the writing?

You're better off looking at the transaction rate for a machine and the CPU load it causes. You could also profile stored procedures and see if any of them are taking an inordinate amount of time; however, this won't get you the answer you want.
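
For the stored procedure angle, one possible starting point (SQL Server 2008 and later) is sys.dm_exec_procedure_stats, which already rolls CPU up per cached procedure and carries a database_id:

-- Cached procedures ordered by total CPU (total_worker_time is in microseconds).
SELECT DB_NAME(ps.database_id)                   AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.execution_count,
       ps.total_worker_time                      AS total_cpu,
       ps.total_worker_time / ps.execution_count AS avg_cpu
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_worker_time DESC;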

1 vote

With all of the above in mind: starting with SQL Server 2012 (maybe 2008?), there is a database_id column in sys.dm_exec_sessions.
That makes it easy to total CPU per database for the currently connected sessions; once a session disconnects, its numbers are gone.

-- cpu_time is in milliseconds; session_id > 50 skips most system sessions
select session_id, cpu_time, program_name, login_name, database_id
  from sys.dm_exec_sessions
 where session_id > 50;

-- total CPU per database, in seconds, for currently connected sessions
select sum(cpu_time)/1000 as cpu_seconds, database_id
  from sys.dm_exec_sessions
 group by database_id
 order by cpu_seconds desc;
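
If you want database names rather than IDs, a small variant of the second query (same DMV, just joined to sys.databases and limited to user sessions) could be:

-- CPU per database by name, user sessions only
select d.name as database_name,
       sum(s.cpu_time)/1000 as cpu_seconds
  from sys.dm_exec_sessions as s
  join sys.databases as d
    on d.database_id = s.database_id
 where s.is_user_process = 1
 group by d.name
 order by cpu_seconds desc;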

0 votes

Take a look at SQL Sentry. It does all you need and more.

Regards, Lieven

0 votes

Have you looked at SQL profiler?

Take the standard "T-SQL" or "Stored Procedure" template and tweak the fields to group by database ID (I think you have to use the number; you don't get the database name, but it's easy to look it up with exec sp_databases).

Run this for a while and you'll get total CPU / disk IO / wait stats, etc. That can give you the proportion of CPU used by each database.
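
For example, if you save the trace to a table and include the DatabaseID and CPU columns, the per-database share falls out of a simple aggregate (dbo.ProfilerTrace is a placeholder name for the saved trace table):

-- Aggregate a saved Profiler trace per database.
SELECT DB_NAME(DatabaseID) AS database_name,
       SUM(CPU)            AS total_cpu_ms,
       SUM(Reads)          AS total_reads,
       SUM(Writes)         AS total_writes,
       SUM(Duration)       AS total_duration
FROM dbo.ProfilerTrace
GROUP BY DatabaseID
ORDER BY total_cpu_ms DESC;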

If you log the PerfMon counters to a SQL database at the same time, and do the same with the SQL Profiler trace, you may be able to correlate the two.

Even so, it should give you enough of a clue as to which DB is worth looking at in more detail. Then, do the same again with just that database ID and look for the most expensive SQL / Stored Procedures.

0 votes

Please check this query:

-- CPU by database / schema / object, from cached plans that resolve to an object.
SELECT
    DB_NAME(st.dbid)                          AS DatabaseName
    ,OBJECT_SCHEMA_NAME(st.objectid, st.dbid) AS SchemaName
    ,cp.objtype                               AS ObjectType
    ,OBJECT_NAME(st.objectid, st.dbid)        AS Objects
    ,MAX(cp.usecounts)                        AS Total_Execution_count
    ,SUM(qs.total_worker_time)                AS Total_CPU_Time
    ,SUM(qs.total_worker_time) / (MAX(cp.usecounts) * 1.0) AS Avg_CPU_Time
FROM sys.dm_exec_cached_plans cp
INNER JOIN sys.dm_exec_query_stats qs
    ON cp.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE DB_NAME(st.dbid) IS NOT NULL
GROUP BY DB_NAME(st.dbid), OBJECT_SCHEMA_NAME(st.objectid, st.dbid), cp.objtype, OBJECT_NAME(st.objectid, st.dbid)
ORDER BY SUM(qs.total_worker_time) DESC