chulheekim
Starting Member
46 Posts |
Posted - 2010-08-31 : 15:24:05
|
I'm getting a lot of complaints about slow response times from the web users. I've been blaming the SQL jobs that run every 5 minutes to bring in data from external systems. To prove it, I ran a trace with the SQL jobs on and another with them off. Unfortunately, I don't see much difference in Duration, Reads, Writes, or CPU time. I'm not sure whether these columns are the right place to look for a comparison. Is it because a query takes exactly the same amount of resources no matter how busy or idle the SQL Server is? I thought at least the Duration for the requests from the web should get shorter. Or is a trace only useful for finding costly queries, not for comparing performance? |
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-08-31 : 16:10:46
|
You need to look for blocking while the jobs are running, and also at high CPU utilization, disk I/O, and page life expectancy. Use PerfMon to review those counters, and sp_who2 (or sysprocesses) to check for blocking.
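For example, something along these lines will show blocked sessions while the jobs are active (a minimal sketch using the older sysprocesses view; on SQL 2005 and later, sys.dm_exec_requests gives the same information):

-- Sessions that are currently blocked, and which spid is blocking them
SELECT spid, blocked AS blocking_spid, lastwaittype, waittime, cmd, DB_NAME(dbid) AS dbname
FROM master..sysprocesses
WHERE blocked <> 0;

-- Or simply run sp_who2 and scan the BlkBy column
EXEC sp_who2; |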
|
|
chulheekim
Starting Member
46 Posts |
Posted - 2010-09-01 : 10:21:15
|
Is there any other event I should capture? I'm only capturing SP:StmtCompleted, SQL:StmtCompleted, and RPC:Completed. I'll run two traces again today with the SQL jobs on and off. I need other events that will give different numbers. |
|
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-09-01 : 13:40:23
|
If you are testing to see whether the problem is the jobs running every 5 minutes, then I don't think the trace is going to give you what you're after (depending on what those jobs do). The CPU time and reads/writes won't be significantly different between running a query with or without the job(s) running. You've locked onto the right items to capture, but they won't tell you whether the problem is the jobs or not.

Still, you can look at the trace results and try to optimize the most expensive queries. I'll usually measure cost as (reads * executions) or (reads + writes) * executions, then as (cpu * executions), etc.

But again, if the jobs are causing the slowdown, you need to look at the PerfMon counters and check for blocking. Perhaps you're experiencing memory, CPU, or I/O pressure. Profiler won't answer those questions for you.
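If the trace was saved to a file, a query along these lines can do that ranking (a sketch only: the file path is hypothetical and grouping on the raw statement text is crude, but it surfaces the heavy hitters):

-- Rank statements by total reads, i.e. average reads * executions.
-- Swap the ORDER BY to SUM(CPU) or SUM(Reads + Writes) for the other cost measures.
SELECT TOP 25
       CAST(TextData AS NVARCHAR(4000)) AS QueryText,
       COUNT(*)            AS Executions,
       SUM(Reads)          AS TotalReads,
       SUM(Reads + Writes) AS TotalIO,
       SUM(CPU)            AS TotalCPU
FROM sys.fn_trace_gettable(N'C:\traces\webload.trc', DEFAULT)
WHERE TextData IS NOT NULL
GROUP BY CAST(TextData AS NVARCHAR(4000))
ORDER BY SUM(Reads) DESC; |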
|
|
chulheekim
Starting Member
46 Posts |
Posted - 2010-09-01 : 14:31:15
|
I also ran PerfMon with the default counters and saw some lengthy disk queues. What other popular counters would you add for a performance issue? Thank you so much again. |
|
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-09-01 : 16:17:23
|
I'll start with:

- % Processor Time
- Avg. Disk Queue Length (and/or Avg. Disk Read Queue Length)
- Page Life Expectancy (PLE)
- Total Server Memory
- Target Server Memory
- Buffer Cache Hit Ratio

Two quick notes on these:

(1) I'll watch PLE over time. You should see some very large numbers here. If it is dropping over time, then you may be experiencing memory pressure.

(2) Cache hit ratio should always be over 99%. But just because it is doesn't mean you have no memory pressure (PLE is a better measure); if it isn't, then you very likely have memory pressure. In other words, cache hit ratio isn't a fair measure of good performance, but it can indicate poor performance.
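Most of the SQL Server counters above can also be spot-checked from inside the engine (a sketch, assuming SQL 2005 or later and VIEW SERVER STATE permission; % Processor Time and the disk queue lengths are OS counters, so PerfMon is still needed for those):

-- Current values of the memory-related counters.
-- Buffer cache hit ratio must be divided by its "base" counter to get a percentage.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Total Server Memory (KB)',
                       N'Target Server Memory (KB)',
                       N'Buffer cache hit ratio',
                       N'Buffer cache hit ratio base'); |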
|
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-09-01 : 16:19:18
|
By the way, what do these jobs do? If they are importing data, you should expect high I/O utilization. Also, how much RAM and how many processors does the box have (and what is their speed)? Is anything other than SQL running on the box?
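For reference, the processor and memory questions can be answered from inside SQL Server (a sketch assuming SQL 2005/2008; the physical_memory_in_bytes column was renamed in later versions):

-- CPU count and physical memory as SQL Server sees them
SELECT cpu_count,
       hyperthread_ratio,
       physical_memory_in_bytes / 1048576 AS physical_memory_mb
FROM sys.dm_os_sys_info; |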
|
|
|