
 t log file size


ferrethouse
Constraint Violating Yak Guru

352 Posts

Posted - 2012-07-01 : 20:58:08
I have a database running in the full recovery model. The database contains lots of data that I'm trying to delete (about 200 gigs worth). So every Friday I take the site offline and spend about 2 hours deleting data (truncate isn't an option). This bloats the tlog. So after I'm done I switch the recovery model to simple, do a full backup, and shrink the log file, and the file size drops from 60 gigs to almost nothing. I then switch it back to the full recovery model, and if I check the next day the log file is back up to 60 gigs. Can someone explain why that happens? There aren't 60 gigs of transactions in 24 hours. Why does it bloat back up to that size?

Thanks.
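
For context, a quick way to check how full the log actually is and what's preventing it from clearing (a diagnostic sketch; MyDb stands in for the real database name):

  -- Percentage of each database's log currently in use
  DBCC SQLPERF (LOGSPACE);

  -- What, if anything, is blocking log truncation (e.g. LOG_BACKUP, ACTIVE_TRANSACTION)
  SELECT name, recovery_model_desc, log_reuse_wait_desc
  FROM sys.databases
  WHERE name = N'MyDb';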

robvolk
Most Valuable Yak

15732 Posts

Posted - 2012-07-01 : 21:15:17
  1. If the log file keeps growing back to 60 GB, don't shrink it below that size.
  2. Don't shrink the log at all unless you absolutely don't have the disk space.
  3. If you're taking the system "offline", meaning nothing else will be using that database, and if you're deleting >30% of the data, then truncate is an option:
    1. bcp the data you want to keep out to files
    2. truncate the tables
    3. bcp or BULK INSERT the data back in from the files
The latter will generate far less log than deleting will; a rough sketch follows below. Alternatively, if you're using Enterprise Edition and the data you want to delete follows a pattern, you can probably use table partitioning to remove it more effectively.
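
A rough sketch of that keep/truncate/reload approach (database, table, filter, and file names here are all made up; adjust for your schema):

  -- 1. Export the rows you want to keep (run bcp from a command prompt):
  --    bcp "SELECT * FROM MyDb.dbo.Orders WHERE ClientID = 42" queryout D:\keep\orders.dat -n -T -S MyServer

  -- 2. Empty the table (minimally logged):
  TRUNCATE TABLE dbo.Orders;

  -- 3. Reload only the kept rows; TABLOCK allows minimal logging
  --    under the simple or bulk-logged recovery model:
  BULK INSERT dbo.Orders
  FROM 'D:\keep\orders.dat'
  WITH (DATAFILETYPE = 'native', TABLOCK);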

By the way, why are you using full recovery if you keep switching to simple?

ferrethouse
Constraint Violating Yak Guru

352 Posts

Posted - 2012-07-03 : 11:17:09
quote:
Originally posted by robvolk

  1. If the log file keeps growing back to 60 GB, don't shrink it below that size.
  2. Don't shrink the log at all unless you absolutely don't have the disk space.



The log file is bigger than the database. Disk space is a concern. We are hosted on Rackspace and they aren't cheap.

quote:
If you're taking the system "offline", meaning nothing else will be using that database, and if you're deleting >30% of the data, then truncate is an option:
  1. bcp the data you want to keep out to files
  2. truncate the tables
  3. bcp or BULK INSERT the data back in from the files
The latter will generate far less log than deleting will.


I'm removing about 90% of the data. I'll investigate this. I don't mind doing it the current way, even though it is slow. I just don't understand why the log file insists on being so large. After you do a full backup the log file shouldn't stay that large (or so I thought).

quote:
By the way, why are you using full recovery if you keep switching to simple?



I only switch to simple temporarily after the deletes so that I can then shrink the log file. I thought it had to be in simple mode to cause the checkpoint to know that the log file was no longer needed.
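
For reference, the sequence I'm describing looks roughly like this (MyDb and MyDb_log are stand-ins for the real database and logical log file names):

  USE MyDb;
  ALTER DATABASE MyDb SET RECOVERY SIMPLE;  -- checkpoint can now clear the log
  CHECKPOINT;
  DBCC SHRINKFILE (N'MyDb_log', 1024);      -- shrink the log file down to ~1 GB
  ALTER DATABASE MyDb SET RECOVERY FULL;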

robvolk
Most Valuable Yak

15732 Posts

Posted - 2012-07-03 : 11:55:43
quote:
After you do a full backup the log file shouldn't stay that large (or so I thought).
No, the log file is only cleared/emptied during a log backup, or after a checkpoint if you're using simple recovery. Even then, log files are never shrunk. The key is to manage (technically prevent) log file growth by sizing the log to match your largest transactions and backing up/checkpointing frequently enough so auto-grow doesn't kick in.
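
In practice that means scheduling regular log backups, something like this (database name and path are placeholders; run every few minutes to an hour depending on your log volume):

  BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
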
quote:
I thought it had to be in simple mode to cause the checkpoint to know that the log file was no longer needed...The log file is bigger than the database. Disk space is a concern. We are hosted on Rackspace and they aren't cheap.
The log file is always needed, but the transactions inside it can be cleared. My question was why not just leave it in simple recovery all the time? Switching from full to simple recovery breaks your log backup chains. And unless Rackspace bills you based on an average file size over a period of time, rather than the max file size, you're going to pay for a growing log file anyway.

ferrethouse
Constraint Violating Yak Guru

352 Posts

Posted - 2012-07-03 : 15:35:39
quote:
Originally posted by robvolk

quote:
After you do a full backup the log file shouldn't stay that large (or so I thought).
No, the log file is only cleared/emptied during a log backup, or after a checkpoint if you're using simple recovery. Even then, log files are never shrunk. The key is to manage (technically prevent) log file growth by sizing the log to match your largest transactions and backing up/checkpointing frequently enough so auto-grow doesn't kick in.
quote:
I thought it had to be in simple mode to cause the checkpoint to know that the log file was no longer needed...The log file is bigger than the database. Disk space is a concern. We are hosted on Rackspace and they aren't cheap.
The log file is always needed, but the transactions inside it can be cleared. My question was why not just leave it in simple recovery all the time? Switching from full to simple recovery breaks your log backup chains. And unless Rackspace bills you based on an average file size over a period of time, rather than the max file size, you're going to pay for a growing log file anyway.



Hi Rob, thanks for taking the time to explain this to me.

Normally the transactions are quite small. I'm spending two hours each Friday to get rid of the other clients from this database. I expect another two Fridays will be required and then it is clean. So the 50 gig log file reflects this cleaning that I'm doing but generally speaking the log file should only be 100 megs tops. That is why I'm confused as to why it keeps jumping back up to 50 gigs seemingly at random.

After I switch it back to FULL recovery mode I take another backup to get the chain started again. Switching it to SIMPLE and then backing up and shrinking may be an unnecessary step. It is just something I tried in order to get that log file back down to a more normal level.
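
The minimal version of that restart, as I understand it (names and paths are placeholders):

  ALTER DATABASE MyDb SET RECOVERY FULL;
  -- Log backups only work again after a full (or differential) backup:
  BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak';
  BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';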

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2012-07-03 : 15:56:54
It sounds like your deletes are too big. You can easily manage the tlog size without switching recovery models or shrinking the file by doing smaller delete transactions and more frequent tlog backups.

And if your file size is 60GB the next day, yes, there are 60GB of transactions. Back up your tlog more frequently if you want a smaller size. The smallest size it can get down to, though, is the size of your largest transaction. Do you have index rebuilds occurring? A sketch of the batched approach is below.
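
A sketch of batched deletes (table name and filter are made up), with a log backup between batches so the log space can be reused instead of growing:

  DECLARE @rows int = 1;
  WHILE @rows > 0
  BEGIN
      DELETE TOP (10000) FROM dbo.Orders
      WHERE ClientID <> 42;  -- placeholder filter for the rows to remove
      SET @rows = @@ROWCOUNT;

      -- Keeps the log from growing; in practice you'd schedule this
      -- or run it every N batches with timestamped file names:
      BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
  END;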

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

ferrethouse
Constraint Violating Yak Guru

352 Posts

Posted - 2012-07-04 : 16:43:46
It would appear that the "initial file size" for the tlog gets set to the largest transaction that has occurred. That is why it was getting reset up to 60 gigs on me. Since I know that 60 gigs is not normal, I just need to reduce the initial size down to 1 gig after I'm done doing these deletes, and problem solved.
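
For anyone repeating this, a sketch of that final resize (the logical log file name is a placeholder; check sys.database_files for the real one):

  USE MyDb;
  SELECT name, size / 128 AS size_mb
  FROM sys.database_files;                -- confirm current file sizes
  DBCC SHRINKFILE (N'MyDb_log', 1024);    -- target size in MB (~1 GB)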
   
