sql_er
Constraint Violating Yak Guru
267 Posts |
Posted - 2010-08-22 : 17:12:56
|
Hello,

We are currently using MSSQL 2000 Enterprise Edition. Our current database size is ~20 GB. Our data is mostly news articles, which we keep in the database for x number of days [depending on the purge frequency per news site]. Due to purging, our data size stays pretty much static.

Recently there has been talk about scaling up - a lot - up to 100 times. This resulted in discussions of whether we should upgrade memory [currently at 7.5 GB], upgrade SQL Server [to 2008], or move to some NoSQL solution [e.g. MongoDB]. As a startup, we cannot afford 2008 licenses, and MongoDB [or any other NoSQL solution] is currently not well-known territory for us, so we decided to stick with MSSQL 2000 as long as we possibly can.

This question goes to those who deal with large [1-2 TB] databases on a daily basis. I am guessing a number of issues will crop up as the database size grows [I'm sure there are more, so please add]:

- Query performance
- Database jobs taking much, much longer [backups, optimization, index defragmentation/index rebuild]

For query performance, we use stored procedures only, and they are optimized to use indexes, so I am hoping they will not be affected too much, as our clients expect sub-second response time. As for the other database jobs, I am guessing they will take much longer as the data size grows.

Can someone dealing with large databases advise on what to look out for - what to change/prepare to be ready for a 1-2 TB database size?

Thanks a lot! |
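[For the purging mentioned above: one pattern that scales better than a single big DELETE is batching, so the purge job never holds long locks against readers. A sketch in SQL 2000 syntax, where SET ROWCOUNT caps the rows per statement - the table/column names and the 30-day value are hypothetical, not from the poster's schema:]

```sql
-- Purge old articles in small batches so the delete doesn't block readers
-- (dbo.Articles, PublishDate, and @PurgeDays are illustrative names only)
DECLARE @PurgeDays int
SET @PurgeDays = 30

SET ROWCOUNT 5000          -- SQL 2000 way to limit rows affected per DELETE
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.Articles
    WHERE PublishDate < DATEADD(day, -@PurgeDays, GETDATE())

    IF @@ROWCOUNT = 0 BREAK  -- nothing left to purge
END
SET ROWCOUNT 0             -- reset so later statements aren't capped
```

[An index on the date column keeps each batch from scanning the whole table.]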
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2010-08-22 : 20:46:11
|
SQL Server 2000 is not a good platform for a large database; 2005 at a minimum would be recommended. To handle large databases, you should have plenty of memory and properly configured disks. Add many data files - one data file for every 60 GB. We have a 1 TB database that we've split up into 21 data files that span across 7 mount points (3 data files per mount point). The last 7 data files are for growth of the system, and the first 14 are capped at 60 GB.

Yes, maintenance jobs will take longer, so you should potentially target only those things that need to be modified. For instance, on 2000 it is not easy to determine which indexes should be rebuilt, so a common practice is to rebuild most/everything daily or weekly. That will not be feasible on a large database, as it'll take hours to complete, affecting your customers.

Has your company considered Standard Edition for 2008?

Alternatively, you could take a look at MySQL. I have no experience with it, but it is becoming very common for companies to use it even for their large databases. I believe Facebook is using it.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
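[A sketch of what the two suggestions above look like in SQL 2000 syntax - the database, file, table, and index names are made up for illustration. DBCC SHOWCONTIG and DBCC INDEXDEFRAG are the 2000-era tools for targeting only the indexes that actually need work:]

```sql
-- Add a capped data file, one per ~60 GB of data (names and paths are examples)
ALTER DATABASE NewsDB
ADD FILE (NAME = NewsDB_Data02,
          FILENAME = 'E:\MSSQL\Data\NewsDB_Data02.ndf',
          SIZE = 10GB, MAXSIZE = 60GB, FILEGROWTH = 1GB)

-- Measure fragmentation instead of blindly rebuilding everything;
-- look at Logical Scan Fragmentation in the output
DBCC SHOWCONTIG ('dbo.Articles') WITH ALL_INDEXES, TABLERESULTS

-- Defragment just one index (runs online, unlike DBCC DBREINDEX)
DBCC INDEXDEFRAG (NewsDB, 'dbo.Articles', 'IX_Articles_PublishDate')
```

[A maintenance job can loop over the SHOWCONTIG results and only touch indexes above a fragmentation threshold, which keeps the nightly window short.]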
|
|
Kristen
Test
22859 Posts |
Posted - 2010-08-23 : 03:31:30
|
"Has your company considered Standard Edition for 2008?"

That was what I was thinking too. Enterprise Edition is probably not required if using SQL 2008 (but the O/S needs to be able to accommodate the necessary memory - SQL will then use all the memory that the O/S can access - so it's the O/S that needs to be "Enterprise", rather than the SQL version).

"Alternatively, you could take a look at MySql"

Not free though?? ... but I have no idea how it compares with MSSQL prices. |
|
|
sql_er
Constraint Violating Yak Guru
267 Posts |
Posted - 2010-09-06 : 16:00:25
|
Hello,

Thanks for the advice, Tara and Kristen. I have some questions based on the suggestions provided:

1. For breaking the data up into different data files, why was 60 GB chosen as the break point? Was there a specific reason for this number?

2. For SQL Server 2008 Standard Edition - could you elaborate as to why it would help? I've looked at the comparison chart between Standard and Enterprise, and it seems that Standard Edition is missing many of the features which I thought would be exactly the ones I would need when working with large databases [e.g. data/backup compression]. I guess my question is - what makes SQL Server 2008 Standard Edition better than SQL Server 2000 Enterprise Edition when working with large databases? Is it the ability to use all the available memory (as mentioned above), or some other improved features?

Thanks a lot! |
|
|
Kristen
Test
22859 Posts |
Posted - 2010-09-06 : 17:34:30
|
In SQL 2000 you had to have the Enterprise Edition to be able to use more memory, along with other aspects that were important for large databases. In SQL 2008, Standard Edition will use all the memory available to the operating system - so Standard Edition can support a large database.

Utilising lots of memory with SQL 2000 seemed to be hard work - smoke and mirrors back in those days. I would not want to go back and fiddle about in the same way, now that SQL 2008 Standard Edition can do all that for you without all the black-art fixes that used to be necessary.

Whether you need the Enterprise Edition of SQL 2008 depends on whether you must have some specific feature only available in the Enterprise Edition - but you said you could not afford Enterprise Edition, and my point was that if you used SQL 2008, rather than SQL 2000, you could probably manage with the Standard Edition of SQL 2008 - whereas you would probably require Enterprise Edition if you used SQL 2000. |
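[For context on those "black-art fixes": on 32-bit SQL 2000 Enterprise, using memory beyond the normal address space meant enabling AWE by hand. A sketch of that era's configuration - the 28672 MB cap is an example value for a 32 GB server, not a recommendation:]

```sql
-- SQL 2000 Enterprise: let the instance address memory above 4 GB via AWE
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'awe enabled', 1
RECONFIGURE

-- With AWE on, SQL 2000 doesn't manage memory dynamically, so cap it
-- explicitly (value in MB; example assumes a 32 GB box, leaving room for the O/S)
EXEC sp_configure 'max server memory', 28672
RECONFIGURE
```

[On top of that, the O/S side needed the /PAE boot switch and the "Lock pages in memory" privilege for the service account - which is part of why it felt like smoke and mirrors compared with SQL 2008 Standard just using what the O/S provides.]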
|
|