aiken
Aged Yak Warrior
525 Posts |
Posted - 2003-02-13 : 01:53:59
...I'm in the process of spec'ing out a new SQL Server and RAID array, and I'm looking for a quick sanity check.

Am I correct in thinking that RAID 10 will always dramatically outperform RAID 5 for writes, and be only marginally slower for reads, given the same number of spindles? That is, if I have 8 spindles to allocate, and the choice is either RAID 5 (for 87.5% space utilization) or RAID 10 (for 50% space utilization), I'm probably better off with RAID 10 for performance.

Next question: does anyone actually set up both RAID 5 and RAID 10 and use filegroups to partition tables so write-intensive tables are on the RAID 10 array while read-intensive tables are on RAID 5? Is that actually worth the effort and the increase in complexity?

Thanks
-b
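The space-utilization figures above follow from the structure of each level; here's a back-of-the-envelope sketch (the helper name is made up, and it assumes identical drives and a single parity drive's worth of overhead for RAID 5):

```python
def usable_fraction(level, n_disks):
    """Fraction of raw capacity left for data (illustrative helper)."""
    if level == "raid5":
        # One disk's worth of parity, spread across the array.
        return (n_disks - 1) / n_disks
    if level == "raid10":
        # Every block is mirrored once, whatever the spindle count.
        return 0.5
    raise ValueError(f"unknown level: {level}")

n = 8
print(f"RAID 5:  {usable_fraction('raid5', n):.1%} usable")   # 87.5%
print(f"RAID 10: {usable_fraction('raid10', n):.1%} usable")  # 50.0%
```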
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2003-02-14 : 17:29:21
From what I can remember, you are correct. But the environments that I have worked in have always been RAID 5, due to the number of hard drives required for RAID 10. We get very good performance with RAID 5, so we have not been able to justify the cost of a RAID 10 configuration.
MichaelP
Jedi Yak
2489 Posts |
Posted - 2003-02-14 : 17:37:02
I think I remember someone saying that you can get away with putting TempDB on a RAID 0 or RAID 0/1 (10). I think everything else they put on a RAID 5. Put the two arrays on different SCSI channels, and that should be a nice balance between performance and cost/space.

I think we use RAID 0/1 for everything though, but you're right, it's not cheap.

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2003-02-14 : 17:45:21
Yes, I have heard the same thing, although we have never implemented it.
aiken
Aged Yak Warrior
525 Posts |
Posted - 2003-02-15 : 00:33:00
Hmm... I'm doing more research on this, and the more I look, the more it looks like RAID 5 is a mistake for any database that does a reasonable number of writes.

From my research, RAID 5 is very fast for reads because blocks are spread among several disks, so the effective seek time is less than that for a single disk (5 disks seeking and supplying a sector of data are faster than 1 disk seeking amongst 5 sectors and supplying the data).

However, RAID 5 really bites for writing because of the parity thing. For every small write, the array has to do two *reads* and two writes -- the reads fetch the old data block and the old parity block so the new parity can be computed, and the writes are the data itself and the updated parity. (A controller that instead recomputes parity from the whole stripe reads the other data blocks: 3 reads on a 5-disk array, 8 reads on a 10-disk array.) Needless to say, other reads can't be going on concurrently on those spindles.

I'm still researching this and would very much welcome correction if I've misinterpreted something. However, the way I'm seeing it, RAID 5 is a good way to maximize capacity for mostly read-only databases, but even a few writes can kill both read and write performance.

I'm starting to think that it makes sense to use filegroups to put large tables with >95% read activity on RAID 5, and everything else on RAID 1 or RAID 10 (preferably several arrays on different channels).

Again, I'd welcome correction from more knowledgeable people if I'm missing something, but so far, this is what it's looking like to me.

Cheers
-b
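The I/O counting above can be turned into rough throughput numbers. This is a sketch only: it assumes a read-modify-write RAID 5 controller with no write cache, and the function names and 100-IOPS-per-spindle figure are made up for illustration.

```python
def backend_ios_per_small_write(level):
    """Physical I/Os one small (partial-stripe) logical write costs."""
    if level == "raid5":
        # Read-modify-write: read old data + old parity,
        # write new data + new parity -- independent of array width.
        return 2 + 2
    if level == "raid10":
        # Write the data block and its mirror copy.
        return 2
    raise ValueError(f"unknown level: {level}")

def random_write_iops(level, n_disks, iops_per_disk=100):
    # Aggregate spindle IOPS divided by the back-end cost per write.
    return n_disks * iops_per_disk / backend_ios_per_small_write(level)

for level in ("raid5", "raid10"):
    print(level, random_write_iops(level, 8))
```

On the 8-spindle example from the first post, the sketch gives RAID 10 twice the random-write throughput of RAID 5, before any controller caching is considered.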
Peter Dutch
Posting Yak Master
127 Posts |
Posted - 2003-02-17 : 03:09:52
I think it really depends on the RAID controller you use. When you have a controller with a lot of memory (and preferably battery-backed memory), the db just 'writes' to the controller's memory; it doesn't have to wait until the data is written to all the drives. The battery backup option prevents data loss in case of an unexpected shutdown.

Just my €0.02
JamesH
Posting Yak Master
149 Posts |
Posted - 2003-02-18 : 07:16:35
Much of it depends on both the controller card and the disks. I have run RAID 10 on some critical servers, mostly because of the redundancy; it was really hard to tell if there was much performance difference, as the programmers' applications were inefficient at getting data into the SQL server. IBM now offers RAID 5E on many of its controller cards, including the MX, LX, H, etc. RAID 5E does perform faster than 5, but it requires a dedicated built-in hot spare for that channel, as you can see in the, er, table below.

RAID level   Redundancy   Capacity utilization   Read perf   Write perf   Built-in spare   Min drives   Max drives
0            No           100%                   Superior    Superior     No               1            16
1            Yes          50%                    Very high   Very high    No               2            2
1E           Yes          50%                    Very high   Very high    No               3            16
5            Yes          67% to 94%             Superior    High         No               3            16
5E           Yes          50% to 88%             Superior    High         Yes              4            16
00           No           100%                   Superior    Superior     No               2            60
10           Yes          50%                    Very high   Very high    No               4            60
1E0          Yes          50%                    Very high   Very high    No               6            60
50           Yes          67% to 94%             Superior    High         No               6            60

Physical drive utilization, read performance, and write performance depend on the number of drives in the array. Generally, the more drives in the array, the better the performance.

The hardware is a good place to start, but the settings made once this hardware is installed and configured can still affect performance. I know that many places I've seen are still using 4 KB as the default allocation unit size when formatting the disks. SQL 2000 uses 8 KB pages, so you're really hurting performance when upgrading a SQL 7 box without re-formatting the drives. I've done this myself, and on the boxes that I have changed, there has been a considerable performance improvement.

Hope this helps,
JamesH.
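The capacity-utilization column in the table maps onto simple per-level formulas. A sketch (the function is hypothetical; the RAID 5E figure assumes the integrated spare consumes one extra drive's worth of space on top of parity):

```python
def capacity_utilization(level, n_disks):
    """Approximate data fraction of raw capacity for n_disks drives."""
    if level in ("0", "00"):
        return 1.0                        # striping only, no redundancy
    if level in ("1", "1E", "10", "1E0"):
        return 0.5                        # everything mirrored once
    if level in ("5", "50"):
        return (n_disks - 1) / n_disks    # one drive's worth of parity
    if level == "5E":
        return (n_disks - 2) / n_disks    # parity plus integrated spare
    raise ValueError(f"unknown level: {level}")

# The min/max drive counts from the table reproduce its ranges:
print(f"{capacity_utilization('5', 3):.0%}")    # 67%
print(f"{capacity_utilization('5', 16):.0%}")   # 94%
print(f"{capacity_utilization('5E', 4):.0%}")   # 50%
print(f"{capacity_utilization('5E', 16):.0%}")  # 88%
```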