Dev Null
Starting Member
39 Posts
Posted - 2005-09-13 : 03:25:09
I'm trying to build a database that will automatically throw away old data as the database size approaches the MSDE 2GB size limit, but I have a lot of different definitions of "database size" to work with:

- Actual size of both data and log files (sp_spaceused's database_size).
- Actual size of the data file only (from sysfiles).
- Allocated space in the data file only (sp_spaceused's reserved).

Any idea which, if any, of these is limited to 2GB? Or what the actual measure is that's limited, if I've missed it out?

Any pointers appreciated...
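For reference, here's roughly how I'm pulling each of those numbers at the moment. This is just my reading of the SQL 2000/MSDE system tables (the same ones sp_spaceused reads), so treat it as a sketch:

    -- 1. data + log, as reported by sp_spaceused
    exec sp_spaceused

    -- 2. total data file size, from sysfiles (size is in 8KB pages; status & 64 marks log files)
    select sum(convert(dec(15), size)) * 8192 as data_file_bytes
    from dbo.sysfiles
    where (status & 64 = 0)

    -- 3. allocated (reserved) space, from sysindexes; same figure as sp_spaceused's "reserved"
    select sum(convert(dec(15), reserved)) * 8192 as reserved_bytes
    from dbo.sysindexes
    where indid in (0, 1, 255)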

Kristen
Test
22859 Posts
Posted - 2005-09-13 : 05:46:53
I don't know how MSDE measures the 2GB limit, but will deleting data contribute to it [short term] by increasing the size of the logs?

Kristen

robvolk
Most Valuable Yak
15732 Posts
Posted - 2005-09-13 : 07:22:45
I think it would be more practical to base your archiving/deleting criteria on something other than the size of the database, e.g. data more than 90 days old, or accounts that are closed, etc. Simply deleting some data just because the file is getting too big sounds very haphazard. Also, you can schedule the process and know exactly when it will run, and what it will remove.
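For instance, a nightly purge along these lines (just a sketch; the TrafficLog table and LoggedAt column are made-up names, and the batch size is arbitrary):

    -- delete rows older than 90 days in small batches so the log doesn't balloon
    set rowcount 5000
    while 1 = 1
    begin
        delete from TrafficLog
        where LoggedAt < dateadd(day, -90, getdate())
        if @@rowcount = 0 break
    end
    set rowcount 0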

Dev Null
Starting Member
39 Posts
Posted - 2005-09-13 : 19:20:51
I agree with you robvolk, and the normal data aging scheme will probably be based on either days or hours of data. But the data is inherently unpredictable in size that way (it's traffic-dependent), and in the worst-case scenario of rapidly approaching the limit in the middle of the day, I'd rather lose old data than stop getting new stuff.

Dev Null
Starting Member
39 Posts
Posted - 2005-09-14 : 01:33:12
I seem to have answered my own question with a bit of experimenting, so I'll chuck what I found up here in case anyone else finds it useful.

The 2GB limit seems to be on the total combined size of all data files only (log files don't count; reserved and unallocated space inside the data files does). This means that you hit the limit as soon as your data file(s) try to grow beyond 2GB, whether or not you have actually tried to put 2GB worth of data in the files yet. Data file growth strategy becomes fairly important then; using the default growth strategy of 10%, my database was bottoming out at ~1.91 GB, wasting nearly 100MB of space.

I can test whether the size is approaching the limit by checking the size of the data files in sysfiles, just like sp_spaceused does:

    declare @bytesperpage dec(15), @dbsize dec(15)

    -- bytes per page (8192), looked up the same way sp_spaceused does
    select @bytesperpage = low
    from master.dbo.spt_values
    where number = 1 and type = 'E'

    -- total data file size in bytes (status & 64 marks log files)
    select @dbsize = sum(convert(dec(15), size)) * @bytesperpage
    from dbo.sysfiles
    where (status & 64 = 0)

You can check the growth strategy in sysfiles too; the number in the growth column is an integer that represents the number of 8KB pages if the corresponding value of status & 0x100000 = 0, OR the percentage growth if status & 0x100000 = 1. So something like:

    create function max_filesize_test()
    returns bit
    as
    begin
        -- current size plus the next growth increment, in 8KB pages
        return case
            when (
                select sum(size + (case status & 0x100000 when 0 then growth else size * growth / 100 end))
                from sysfiles
                where (status & 64 = 0)
            ) > 262144      -- 262144 pages * 8KB = 2GB
            then 1
            else 0
        end
    end

...should give you a 1 if the next file growth will put you over the limit, and a 0 otherwise. (So long as you only have one data file. With more than one it will tell you whether simultaneously growing all your files will put you over the limit; distinctly less useful, but I only have one data file...)
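In case it's useful, this is roughly how I'm planning to call it from the scheduled purge job (dbo.purge_oldest_data is just a placeholder for whatever procedure deletes the old rows):

    -- if the next autogrow would push the data file past 2GB, purge the oldest data first
    if dbo.max_filesize_test() = 1
    begin
        exec dbo.purge_oldest_data
    end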

Kristen
Test
22859 Posts
Posted - 2005-09-14 : 06:44:39
"using the default growth strategy of 10%"My experience has been that this isn't very good for databases over 1GB anyway - a size needs to be chosen that doesn't cause the system to cough when it grabs the space for the expansion, and pre-initialises it. We had 20-minute dead times as the system went slow to grab another "10% extension" and the connections to the DB, from a web site, were more and more insistent as people pressed REFRESH!!Changing the DBs to "comfortable fixed size" extensions solved that problemKristen |

robvolk
Most Valuable Yak
15732 Posts
Posted - 2005-09-14 : 07:24:03
You know, another way to keep the autogrow tamed is simply to turn it off and set the file size to 2GB, both initial and max size. (Amazing how these simple solutions elude me.) Now you don't have to check file sizes, you only have to check the actual data size using sp_spaceused, or something like:

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=53843
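A rough sketch of that setup (the database and logical file names are placeholders, and the exact size value is up to you):

    -- size the data file up front and cap it there, with autogrow disabled
    alter database MyDB modify file (name = 'MyDB_Data', size = 2000MB)
    alter database MyDB modify file (name = 'MyDB_Data', maxsize = 2000MB)
    alter database MyDB modify file (name = 'MyDB_Data', filegrowth = 0)

    -- then just keep an eye on the actual space used inside the file
    exec sp_spaceused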

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2005-09-14 : 17:13:43
Seems a little crazy to throw away data in an unplanned manner because of limitations on database size. Why not use the standard edition of SQL Server, and let your database be the size it needs to be?

Is money that huge an issue for this application? If so, maybe you should look at open source.

CODO ERGO SUM