 CPU Spikes


jcornell
Starting Member

11 Posts

Posted - 2010-08-26 : 05:08:50
Hi
I have a VM which runs SQL Server 2005. It has two instances (one with 54 dbs and one with 8 dbs), and the server has been allocated 2 cores and 4 GB of memory.

I have completed regular maintenance and there are still really unusual CPU spikes from sqlservr.exe all day long.

Has anyone seen this before and/or have any advice? I am concerned about the performance impact; I am sure it should not spike like this!



SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-08-26 : 05:48:24
How long is it between spikes?
Do the spikes correspond with job activity, such as reindexing?



N 56°04'39.26"
E 12°55'05.63"

jcornell
Starting Member

11 Posts

Posted - 2010-08-26 : 09:24:22
Hi
The spikes are about 15-30 seconds apart, and this is not whilst any reindexing etc. is happening. It is just throughout the working day.

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-08-26 : 09:30:12
Since you have 62 databases and only 4 GB of RAM, I think you are getting a lot of tempdb activity and/or paging.
That can cause massive CPU spikes. What does Profiler say? And Performance Monitor?
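As a quick first check on each instance, the same buffer manager counters that Performance Monitor shows are also exposed through a DMV; a persistently low Page life expectancy is a rough sign of memory pressure (only an indicator, not proof):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';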



N 56°04'39.26"
E 12°55'05.63"

jcornell
Starting Member

11 Posts

Posted - 2010-08-27 : 04:31:11
Do you have a suggested template for use with SQL Profiler?
Sorry, it has been ages since I have used the monitoring software.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-08-27 : 13:11:26
You've likely got missing indexes. Run SQL Profiler with these events to start: RPC:Completed, SQL:StmtCompleted. Look for high reads, high CPU, and high duration. Only you and your business can determine the value of "high". At my organization, anything over 300-500 milliseconds is considered slow unless it's in our reporting environment. We also look for anything over 5000 reads. Your values will differ. You must know your baseline to know what is considered inefficient.
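Once you have saved the trace to a file, a query along these lines will pull out the heavy statements (the path is just a placeholder and it assumes the Reads, CPU, Duration and StartTime columns were captured; in SQL Server 2005 trace files Duration is stored in microseconds while CPU is in milliseconds):

SELECT TextData, CPU, Reads, Duration / 1000 AS DurationMs, StartTime
FROM sys.fn_trace_gettable('C:\traces\spikes.trc', DEFAULT)
WHERE EventClass IN (10, 41)  -- RPC:Completed, SQL:StmtCompleted
  AND (CPU > 500 OR Reads > 5000 OR Duration > 500000)
ORDER BY CPU DESC;

Adjust the thresholds to your own baseline.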

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

dinakar
Master Smack Fu Yak Hacker

2507 Posts

Posted - 2010-08-27 : 14:20:46
If this is a production server, DO NOT run Profiler connected to it. Instead, run server-side traces.
Since you are running SQL 2005, why not check out some DMVs, which already have your workload metrics?

http://technet.microsoft.com/en-us/library/cc966540.aspx#EHAA
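For example, a rough starting point against the plan cache is something like this (the numbers are cumulative since each plan was cached, so treat them as relative indicators rather than absolutes):

SELECT TOP 20
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
         - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;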



Dinakar Nethi
************************
Life is short. Enjoy it.
************************
http://weblogs.sqlteam.com/dinakar/

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-08-27 : 14:26:25
Well, you can run Profiler; just make sure you save the data to a file rather than to a table: http://weblogs.sqlteam.com/tarad/archive/2008/08/06/SQL-Profiler-best-practices.aspx
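If you prefer the server-side trace dinakar mentioned, a minimal sketch looks like this (the file path is a placeholder, the .trc extension is added automatically, and the event/column IDs are the documented ones for RPC:Completed and SQL:StmtCompleted with TextData, SPID, Duration, Reads and CPU):

DECLARE @traceid INT, @maxsize BIGINT, @on BIT, @minduration BIGINT;
SET @maxsize = 100;        -- MB per rollover file
SET @on = 1;
SET @minduration = 500000; -- 500 ms; the Duration filter is in microseconds

EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\spikes', @maxsize;  -- option 2 = file rollover

-- events: 10 = RPC:Completed, 41 = SQL:StmtCompleted
-- columns: 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads, 18 = CPU
EXEC sp_trace_setevent @traceid, 10, 1,  @on;
EXEC sp_trace_setevent @traceid, 10, 12, @on;
EXEC sp_trace_setevent @traceid, 10, 13, @on;
EXEC sp_trace_setevent @traceid, 10, 16, @on;
EXEC sp_trace_setevent @traceid, 10, 18, @on;
EXEC sp_trace_setevent @traceid, 41, 1,  @on;
EXEC sp_trace_setevent @traceid, 41, 12, @on;
EXEC sp_trace_setevent @traceid, 41, 13, @on;
EXEC sp_trace_setevent @traceid, 41, 16, @on;
EXEC sp_trace_setevent @traceid, 41, 18, @on;

EXEC sp_trace_setfilter @traceid, 13, 0, 4, @minduration;  -- keep only Duration >= 500 ms

EXEC sp_trace_setstatus @traceid, 1;  -- start
SELECT @traceid AS traceid;           -- keep this id: status 0 stops the trace, 2 closes it

When you're done, stop and close it with sp_trace_setstatus and read the file back with fn_trace_gettable.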

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-08-27 : 14:32:16
I'll save it to a table on a dev server

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-08-27 : 15:33:01
russell, you should only do that after the trace has stopped. A few years back I was running a trace on a client machine for a production problem. The trace was configured to store the data to a table that was on the local machine and not on the production server. I practically brought production to its knees! Now, I NEVER store it in a table until the trace has stopped. I always use a local file on the client machine or a server-side trace.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-08-27 : 18:29:40
True. I'll use a server-side trace --> local file if I intend to let it run a while, or if it's capturing a lot of info.

But I'll admit to doing what I said above if it only runs a few minutes and is heavily filtered.

I will say two things: I've seen a co-worker bring a prod server to its knees, and I myself have gone to click the stop button on traces when the GUI wouldn't respond because it was too busy recording activity to the table.

So... your point is well taken.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-08-27 : 18:36:34
The trace that I was running that brought the server to its knees had only two events and was filtered for just the slow queries. The system was super busy, and this additional load put us over the top. Tracing to a file was not a problem, though.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

jcornell
Starting Member

11 Posts

Posted - 2010-08-27 : 19:26:39
I have found out from the Profiler logs that the problem was caused by an application called Public Access. It is used to interpret and modify planning applications before hosting them online. The transformation process is bouncing the CPUs to 100%, so I have sent an email to their technical support.
Thanks for the help, everyone!
   
