
 Estimating transaction log size


talleyrand
Starting Member

35 Posts

Posted - 2008-08-20 : 11:35:13
Hello All,

To set the story, we have SQL Server 2005 Enterprise Edition running on some beefy hardware, set to simple recovery. We have a process that identifies changes to data and flags them as such. New/updated records need to be added to a history table. We currently have 65M rows in this table. It's 102 columns wide and has 16 indexes (1 clustered, 14 nonclustered, 1 nonclustered unique). Current sizing is 75 GB of data and 47 GB of index space for just this one table. I believe we have ~150 GB of space allocated for the transaction log.

At least once a year, we will have a sizable data change. For example, we now have 27M new records going into the table. It's a straightforward TSQL script doing the insert (INSERT INTO FOO SELECT * FROM BAR) and we've called a checkpoint before running the statement. There are no other processes accessing the database during the process. The challenge we are running into is that we are still running out of log space.
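For concreteness, the whole run is roughly the sketch below; the DBCC SQLPERF(LOGSPACE) calls are just one way to watch log consumption around the statement (FOO and BAR stand in for the real tables):

```sql
-- How full is the log before we start?
DBCC SQLPERF(LOGSPACE);

-- In simple recovery, a checkpoint lets inactive log space be reused:
CHECKPOINT;

-- The insert itself; on SQL Server 2005, INSERT ... SELECT is fully
-- logged even in simple recovery (minimal logging there applies to
-- SELECT INTO, bcp, and BULK INSERT), so every row plus every index
-- entry it touches generates log records:
INSERT INTO FOO SELECT * FROM BAR;

-- How much log did the statement consume?
DBCC SQLPERF(LOGSPACE);
```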

So, the two main questions I have are
1) Is there a sizing guide to estimate how much space we need to allocate (assuming no other activity in the database)?
2) Do the indexes contribute to transaction log growth, or are they handled differently?

I'm no DBA, just trying to help coworkers get through this. Any help, pointers, suggestions, etc. are welcome.

sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2008-08-20 : 13:45:00
Here is what happens when lots of inserts/updates hit a table with many indexes. As you insert and update, pages without enough free space (assuming minimal fragmentation to start) split to make room for the new data, and those page splits cause fragmentation. For this reason you have to rebuild heavily fragmented indexes. All of that activity is written to the transaction log too, so it grows as well. Since you are in the simple recovery model, your log can be truncated (it happens automatically at checkpoints). That's why it is not recommended to put too many indexes on tables with heavy insert/update/delete activity.
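A sketch of the rebuild step described above (the index name is hypothetical; ALTER INDEX ... REBUILD is the 2005 syntax):

```sql
-- Rebuild a heavily fragmented index; the rebuild itself writes to
-- the transaction log, so it contributes to log growth too:
ALTER INDEX IX_FOO_Col1 ON FOO REBUILD;

-- Or rebuild every index on the table at once:
ALTER INDEX ALL ON FOO REBUILD;
```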

talleyrand
Starting Member

35 Posts

Posted - 2008-08-22 : 13:57:02
Just as a followup, we ended up explicitly dropping all but the clustered index on the table before we did our insert and then recreated them. Our transaction log only grew to ~38GB so yes, the indexes were definitely contributing to our logging issue.
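In case it helps anyone else, the shape of what we did looks roughly like this (the index and column names here are made up; we scripted the real definitions out first so we could recreate them):

```sql
-- Drop every nonclustered index, keeping the clustered one so the
-- table stays ordered:
DROP INDEX IX_FOO_Col1 ON FOO;
DROP INDEX IX_FOO_Col2 ON FOO;
-- ...and so on for the rest.

CHECKPOINT;
INSERT INTO FOO SELECT * FROM BAR;

-- Recreate afterwards. Each build is itself logged, but in simple
-- recovery the log space can be reused between statements:
CREATE NONCLUSTERED INDEX IX_FOO_Col1 ON FOO (Col1);
CREATE NONCLUSTERED INDEX IX_FOO_Col2 ON FOO (Col2);
```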

I'd still be interested in learning whether there are any guidelines/models for estimating transaction log sizing, as so far Google has proved fruitless.
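One hedged back-of-envelope that seems consistent with the numbers in this thread (just arithmetic from the sizes posted above, not an official formula):

```sql
-- Average bytes per row, from the sizes earlier in the thread:
--   data:    75 GB / 65M rows ~ 1.15 KB per row (clustered index)
--   indexes: 47 GB / 65M rows ~ 0.72 KB per row across the other 15
-- A fully logged insert writes at least the new row image plus an
-- entry per nonclustered index, plus per-record log overhead, so:
--   27M rows * (1.15 + 0.72) KB ~ 50 GB of new pages, and the log
--   typically needs more than that once page splits and allocation
--   records are counted. With the indexes dropped, 27M * 1.15 KB
--   ~ 31 GB of data pages lines up with the ~38 GB of log observed.

-- To measure per-row sizes directly instead of dividing totals
-- (this DMV exists on 2005; the NULLs mean "all indexes/partitions"):
SELECT i.index_id, i.name,
       ps.avg_record_size_in_bytes, ps.record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('FOO'),
                                    NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```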

sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2008-08-22 : 14:25:47
We estimate the log at 30-40% of database size for high-OLTP databases and 25-30% for low-OLTP databases. But it depends on a lot of factors:
1) What recovery model you are using
2) How high your transaction rate is
3) Whether you are using bulk inserts