mcfly1204
Starting Member
10 Posts |
Posted - 2011-08-10 : 14:10:47
When I view the performance, specifically the IO, of the server running SQL Server 2000, the read rate on the array that holds the tempdb is exceptionally high. What are some steps I can take to see what is responsible for the high read rate? I have created a .ndf for the tempdb, but I have yet to move it to a different array. Any advice would be appreciated.
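A first step that doesn't require any extra tooling is SQL Server 2000's ::fn_virtualfilestats function, which reports cumulative I/O per database file. A sketch (database ID 2 is tempdb; run it once per file ID listed in tempdb..sysfiles):

```sql
-- Cumulative I/O for tempdb's primary data file since the last restart
-- (database ID 2 is tempdb; file ID 1 is the primary data file --
--  repeat for each file ID listed in tempdb..sysfiles)
SELECT  DbId,
        FileId,
        NumberReads,    -- cumulative read operations against this file
        BytesRead,      -- cumulative bytes read
        IoStallMS       -- total milliseconds spent waiting on I/O
FROM    ::fn_virtualfilestats(2, 1)
```

Sampling this twice a few minutes apart and diffing the counters gives a read rate you can compare against what Perfmon shows for the array.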
tkizer
Almighty SQL Goddess
38200 Posts |
mcfly1204
Starting Member
10 Posts |
Posted - 2011-08-10 : 14:53:26
This server is running with 4 CPUs. Best practices state that I should have one tempdb file per core, but I have read some mixed reviews on this. The data file, log file, and additional data file are all on the same array. Essentially, I am uncertain if I should look into the physical layout of the files, the configuration of the tempdb, or if I should look into how the tempdb is being used. |
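For what it's worth, if you do end up adding data files, the SQL Server 2000 syntax is below. The logical name, path, and sizes are placeholders, and the usual advice is to keep all tempdb data files the same size so the proportional-fill algorithm spreads allocations evenly:

```sql
-- Add a second tempdb data file (name, path, and sizes are examples only)
ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,                       -- hypothetical logical name
    FILENAME = 'T:\tempdb\tempdev2.ndf',   -- hypothetical path on another array
    SIZE = 512MB,
    FILEGROWTH = 256MB                     -- fixed increment, not a percentage
)
```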
jackv
Master Smack Fu Yak Hacker
2179 Posts |
Posted - 2011-08-13 : 06:42:49
Firstly, understand the workload requirements of tempdb and build an IO profile; you are trying to establish the IOPS requirements of your tempdb. If you make heavy use of cursors, CHECKDB, temp tables, or UDFs, then the IOPS will be fairly high. Depending on your block size, the throughput will vary. It is easier to come up with this figure if you locate tempdb on a separate disk from your user databases. Also ensure the recovery model is set to SIMPLE and that autogrow is a fixed increment rather than a percentage.

Jack Vamvas
--------------------
http://www.sqlserver-dba.com
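The last two checks can be run straight from Query Analyzer; a sketch (tempdev is tempdb's default primary data file's logical name, and the growth figure is an example):

```sql
-- Confirm tempdb's recovery model (should return SIMPLE)
SELECT DATABASEPROPERTYEX('tempdb', 'Recovery')

-- Switch autogrow to a fixed increment instead of a percentage
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILEGROWTH = 256MB)
```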
mcfly1204
Starting Member
10 Posts |
Posted - 2011-08-30 : 11:55:48
The main use of the tempdb is by temp tables. We do not use cursors, CHECKDB is only run during off hours, and UDF usage is minimal. The cluster size of the logical disk that the tempdb resides on is the default of 4 KB. After a bit of reading, some are saying that it would make sense to reformat the logical disk with a 64 KB cluster size to match SQL Server's extent size (an extent is eight 8 KB pages, or 64 KB). This makes sense to me in theory, as an entire extent could be read in one cluster-aligned operation as opposed to 4 KB chunks at a time, thus potentially eliminating round trips. Does this seem to be a logical step to take?
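Before reformatting, it's worth confirming the current cluster size. On Windows Server 2003 and later, fsutil reports it as "Bytes Per Cluster" (earlier versions need a separate tool). The drive letter here is an example, and note that reformatting with a 64 KB allocation unit erases the volume, so tempdb would have to be moved off it first:

```
rem Check "Bytes Per Cluster" (4096 = the 4 KB default)
fsutil fsinfo ntfsinfo T:

rem Reformat with a 64 KB allocation unit (destructive -- erases the volume)
format T: /FS:NTFS /A:64K /Q
```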
jen
Master Smack Fu Yak Hacker
4110 Posts |
Posted - 2011-09-09 : 14:29:11
You should download PSSDiag from CodePlex; when configured and run on the host machine for the SQL instance, it will collect static and dynamic data to assist you in troubleshooting your issue. On the same site you will find SQL Nexus, which will import the output of PSSDiag, aggregate the results, and even provide graphs and easy-to-understand output.

HTH
--------------------
keeping it simple...