MichaelP
Jedi Yak
2489 Posts
Posted - 2005-06-13 : 13:46:58

Hey all,

I'm about to receive a bunch of new fibre channel disks for one of my SANs, and I'm trying to think of the best way to split things up. Right now I've got the following, all in one tray:

(2) 73GB RAID 1 - SQL TX logs
(4) 73GB RAID 0/1 - SQL data

I'm getting another tray of disks, plus another four disks to go into the tray where these existing drives are. So I'm thinking about creating a 4-6 disk RAID 1/0 set for TempDB, a 4 disk RAID 1/0 set for TX logs, and then a 10 disk RAID 1/0 set for SQL data.

My problem here is that I don't know how to find out whether TempDB is bottlenecked. How do I determine if that database needs its own set of disks? Or should I just create a big 14 disk RAID 0/1 set for all SQL data and TempDB?

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
X002548
Not Just a Number
15586 Posts
Posted - 2005-06-13 : 13:56:48

I don't know. But doesn't it make a lot of sense to dedicate that to tempdb? I mean, if all the requests are on the same channel, wouldn't there be some kind of wait state, which you would then have eliminated with your suggested configuration?

Never did much with hardware... sorry.

Brett
8-)

Hint: Want your questions answered fast? Follow the directions in this link:
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx
MichaelP
Jedi Yak
2489 Posts
Posted - 2005-06-13 : 14:24:34

Yeah, that's what I was thinking. We do a lot of temp tables, table variables, and a lot of sorting, so I was thinking a fast TempDB should make things faster.

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2005-06-13 : 17:21:31

I have always been skeptical of the approach of separate arrays for the database data files and the tempdb data file, because it is really hard to determine how much usage tempdb is getting and how much of a bottleneck it is. Also, the workload may vary by time of day, over the life of the application, etc. You usually don't know this in advance, so you have to make your best guess about what is most likely to work well.

I prefer the approach of trying to balance the workload over many identical arrays. For example, create three RAID 0/1 arrays of 7 disks each, and put a data file from each filegroup for each database, and for tempdb, on each array. This tends to prevent any one array from becoming a hot spot, and you end up with a configuration that spreads the IO workload across the arrays without too much attention to tuning.

As always, your mileage may vary.

CODO ERGO SUM
MichaelP
Jedi Yak
2489 Posts
Posted - 2005-06-13 : 17:42:25

Michael, a few questions:

1. A 7-disk RAID 1/0 array? I'm guessing that's 6 + 1 hot spare, right?

quote: For example, create three RAID 0/1 arrays of 7 disks each, and put a data file from each filegroup for each database and tempdb on each array.

2. So you create three data files and three filegroups for your database, and put one data file per array. I think I understand that, but I'm not sure about TempDB. Do you create three data files for TempDB as well? Does SQL Server stripe its writes across the three data files?

When creating your objects, you specify which filegroup to create them on, correct? How do you determine which filegroup something should reside on?

Thanks!!

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2005-06-13 : 18:22:19

On 1: I was just giving the 7-disk RAID 1/0 array as an example. It could be 6+1. I wasn't trying to say that is an ideal configuration; I just picked it as a sample of three arrays that were close in size to the configuration you were talking about.

On 2: You might just create one filegroup, say DATA, and add one file on each array to that filegroup. You specify the filegroup a table or index will be created on, and SQL Server takes care of distributing it across the files in that filegroup on a round-robin basis. I would create data files on each array for tempdb as well. You do not have to create new filegroups; you can add additional files to the default PRIMARY filegroup. I prefer to create a new filegroup for user databases, and to just add files to the PRIMARY filegroup for tempdb. When you create a new filegroup, you may want to set it as the default filegroup, so that new tables go there automatically.

You should read the SQL Server BOL to make sure you have a complete understanding of how SQL Server uses filegroups before you implement this. You will be the one living with whatever configuration you pick.

quote: Originally posted by MichaelP ... Do you create three data files for TempDB as well? Does SQL Server stripe its writes across the three data files? ...

CODO ERGO SUM
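[Editor's sketch of the layout described above. Database name, file names, and drive letters (E:, F:, G: standing in for the three arrays) are all hypothetical; sizes are placeholders.]

```sql
-- Add a DATA filegroup to a user database and spread one file per array.
-- MyDB, the logical file names, and the paths are assumptions for illustration.
ALTER DATABASE MyDB ADD FILEGROUP DATA;

ALTER DATABASE MyDB ADD FILE
    (NAME = MyDB_Data1, FILENAME = 'E:\SQLData\MyDB_Data1.ndf', SIZE = 4GB),
    (NAME = MyDB_Data2, FILENAME = 'F:\SQLData\MyDB_Data2.ndf', SIZE = 4GB),
    (NAME = MyDB_Data3, FILENAME = 'G:\SQLData\MyDB_Data3.ndf', SIZE = 4GB)
TO FILEGROUP DATA;

-- Make the new filegroup the default so new tables go there automatically.
ALTER DATABASE MyDB MODIFY FILEGROUP DATA DEFAULT;

-- For tempdb, just add extra files to the PRIMARY filegroup, one per array.
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'F:\SQLData\tempdb2.ndf', SIZE = 1GB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'G:\SQLData\tempdb3.ndf', SIZE = 1GB);
```

SQL Server allocates extents round-robin (proportional fill) across the files in a filegroup, which is what spreads the IO without any per-table tuning.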
MichaelP
Jedi Yak
2489 Posts
Posted - 2005-06-13 : 19:34:10

OK, a question about the multiple data file thing. Currently, I've got a 15GB or so database that is in one MDF. If I do a backup of the database and then restore to a database that has two data files, will SQL Server round-robin the tables into the two different data files automatically?

I'm doing some reading in the BOL now, so hopefully that will answer some of my questions. Thanks for your replies thus far, Michael. I really appreciate them!

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2005-06-13 : 22:33:44

No, when you do a restore, you get the same file sizes you started with.

You can add a new database file to your existing filegroup, and SQL Server will allocate data into that file on a round-robin basis, based on available space in the files. If you do a weekly DBCC reindex, it will tend to balance the data between the files over time as it rebuilds the indexes.

You can also allocate new files to a filegroup and move the data out of an existing file into the new files using the DBCC SHRINKFILE command with the EMPTYFILE option. Once the old file is empty, you can remove it with the ALTER DATABASE database REMOVE FILE command. You can do this while the database is active; I have done this in order to move a database to a different disk while it was being used. It does take a while to empty the old file, and you should make sure the new files are big enough to hold all the data in the old file, with some room to spare.

As I said before, read up on all this so you understand it before you implement it. If you can, try some of these things on a test database and server before you do them on a production system. Nothing like some experience to give you a feel for what is going on and how it will work.

As always, play safe.

quote: Originally posted by MichaelP ... If I do a backup of the database, and then restore to a database that has two data files, will SQL server round-robin the tables into the two different datafiles automatically? ...

CODO ERGO SUM
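[Editor's sketch of the file-migration steps described above. MyDB and MyDB_Data1 are hypothetical names; this assumes the new files have already been added to the same filegroup with enough free space to absorb the old file's data.]

```sql
-- Migrate all data out of the old file into the other files in its filegroup.
-- This can run while the database is in use, but it can take a while.
DBCC SHRINKFILE (MyDB_Data1, EMPTYFILE);

-- Once the file reports empty, drop it from the database.
ALTER DATABASE MyDB REMOVE FILE MyDB_Data1;
```

REMOVE FILE will fail if the file still contains any data, so run the SHRINKFILE step to completion first.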
MichaelP
Jedi Yak
2489 Posts
Posted - 2005-06-13 : 22:42:30

Yeah, I'm quickly coming to the realization that I need to do a BUNCH of testing before I do this in production. Figuring out what is going to work best for me is going to be key. I have to balance giving my developers control with giving them ease of use. Something tells me I'm gonna have a long discussion with them.

I'm thinking I want to make 2-4 data files, all in the same filegroup. Then, down the road when we grow into a bigger SAN, I can just move the files to another disk array. SQL Server handles the "load balancing" and I don't have to educate my developers on the hardware.

Thanks again for all the info. It's been very educational!

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
mcrowley
Aged Yak Warrior
771 Posts
Posted - 2005-06-17 : 09:41:18

To determine whether you have a tempdb bottleneck, you may want to monitor the lastwaittype and waitresource columns in sysprocesses. If you see a lot of PAGEIOLATCH_SH waits on 2:1:* (where * is a single digit), then yes, you have a bottleneck. If I remember correctly, 2:1:2 is the global allocation page for tempdb... or was it 2:1:3? Have a look, anyway. Good luck.
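[Editor's sketch of a query along those lines, for the SQL Server 2000-era sysprocesses view named in the post. The LIKE patterns are assumptions: database id 2 is tempdb, and 2:1:1 / 2:1:2 / 2:1:3 are the PFS, GAM, and SGAM allocation pages of tempdb's first file.]

```sql
-- Look for sessions waiting on latches for pages in tempdb (database id 2).
-- Repeated hits on waitresource 2:1:1, 2:1:2, or 2:1:3 suggest tempdb
-- allocation contention rather than a general IO bottleneck.
SELECT spid, lastwaittype, waitresource
FROM master..sysprocesses
WHERE lastwaittype LIKE 'PAGE%LATCH%'
  AND waitresource LIKE '2:%';
```

Poll this during a busy period; a single hit means little, but the same allocation pages showing up repeatedly is the pattern described above.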