Author |
Topic |
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-10 : 12:26:39
|
I will begin by apologizing for asking a question that has been covered several times. I have read numerous posts on this site and others regarding this topic, but I still seem to be missing something.

We have a SQL Server 2005 database that is 140+ GB with a log file that is 213+ GB. The recovery model is bulk-logged. The initial sizes for the mdf and ldf files are 142 GB and 213 GB, respectively. This is not what they were originally; we had to restore from a backup earlier this year, and that seems to be what set the initial sizes. I want to prune some data and reduce the file size of the ldf at least, but ideally both files. Backing up has become a challenge because of disk space.

I have read that using SHRINKDATABASE is not advisable because it will lead to fragmentation and ruin the indexes, and rebuilding the indexes only adds size back to the file. I thought I also read that doing a full backup would reduce the size of the ldf file. Last night I did a full backup to an external drive (the BAK file is 291 GB), but the ldf file is still the same size. I ran a chunk of SQL to give file information before and after the backup; below is what it returned. You can see that unused space barely changed.

Before full backup:

File Size    Used Space   Unused Space  Growth %  Growth MB  File
142616.81    121614.06    21002.75      NULL      1          Mirth
213953.81    162132.06    51821.75      10        NULL       Mirth_log

After full backup:

File Size    Used Space   Unused Space  Growth %  Growth MB  File
142616.81    121654.81    20962.00      NULL      1          Mirth
213953.81    162760.78    51193.03      10        NULL       Mirth_log

Why didn't the unused space change? Also, if it had changed, would using SHRINKFILE have any adverse effects on the ldf file? Will I ever be able to get these files smaller than the stated initial size? Is creating a new database and moving all objects and data into it a realistic option?

Greg |
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2013-09-10 : 12:36:14
|
The only thing that will change the size of the ldf file is to shrink it. Run DBCC SHRINKFILE to shrink it down. This is a one-time thing. It will probably need to autogrow if you shrink it too far, as it needs to be the size of the largest transaction, plus taking into consideration how often tlog backups run. Here are some of the things that affect the size: large DML transactions, rebuilding indexes, mirroring, replication, etc.

A full backup has no impact on the size of the ldf file. Even a transaction log backup has no impact on its size, but a tlog backup does impact what's INSIDE the file and how much free space you have INSIDE it.

Run a one-time shrink and then watch the system for a few days, especially after any index maintenance. How often do you run tlog backups?

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
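For reference, a one-time log shrink might look like the sketch below. The database name (Mirth) and logical log file name (Mirth_log) are taken from the file listing earlier in the thread; the 20 GB target is an assumption you would adjust to your own workload.

```sql
-- Sketch: one-time shrink of the log file, assuming the logical
-- file name from sysfiles is Mirth_log and a 20 GB target is reasonable.
USE Mirth;
GO
DBCC SHRINKFILE (Mirth_log, 20480);  -- target size is specified in MB
GO
```

The logical file name (not the physical .ldf path) is what DBCC SHRINKFILE expects; you can confirm it with `select name from sysfiles`.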
|
|
James K
Master Smack Fu Yak Hacker
3873 Posts |
Posted - 2013-09-10 : 12:38:38
|
The ONLY way a log file will release space in the Full or Bulk-logged recovery model is if you do a LOG backup. A full backup or differential backup won't do it. So what you need to do is take a log backup. After you do that, you should see that most of the space in the log file is unused. Then you can shrink your log file to some reasonable size. It is hard to say what a reasonable size is; if most of the space is free, shrink it to 20 GB, for example. Then take a look at how much free space remains. Keep watching it and you will see that the space gets used up over a period of time. Take a log backup again, and that should again free up most of the space in the log file. Doing this a couple of times should give you an idea of the log file usage. Schedule a regular log backup based on this information.

Don't shrink your DATA file. That can cause all kinds of performance problems. |
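The log-backup-then-shrink sequence described above can be sketched as follows; the backup path is a placeholder you would replace with a real location on a drive with enough space.

```sql
-- Sketch, assuming D:\Backups is a valid backup location (adjust as needed).
-- Step 1: a log backup marks the inactive portion of the log as reusable.
BACKUP LOG Mirth TO DISK = N'D:\Backups\Mirth_log.trn';
GO
-- Step 2: with most of the log now free internally, shrink the file.
USE Mirth;
DBCC SHRINKFILE (Mirth_log, 20480);  -- e.g. 20 GB, per the suggestion above
GO
```

After running this, checking the file sizes again should show most of the log space released back to the OS.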
|
|
James K
Master Smack Fu Yak Hacker
3873 Posts |
Posted - 2013-09-10 : 12:40:42
|
Even though the information Tara posted seems, on the face of it, to conflict with what I posted, it really does not :) I was posting under the assumption that you are not doing log backups. |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-10 : 13:47:15
|
Thanks for the input. We do (I think) tlog backups twice a day. There is a maintenance plan called TLogsbackup, and I can see the *.TRN files that it creates. I will run the shrink file on the ldf file and keep an eye on it. Is it OK to run SHRINKFILE on the ldf while the db is online, in use, and during business hours, or is this best done in off-peak hours? And why is it a one-time-only option?

As for the possible reasons for growth: we don't do large DML transactions, index rebuilds, or mirroring. The database is the subscriber to one view/three tables for replication.

Greg |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2013-09-10 : 14:07:28
|
Twice-a-day tlog backups are typically not enough. We run tlog backups EVERY 15 minutes.

Shrinking can be done online.

The shrink is a one-time-only operation because autogrowth is a performance problem. There should be no reason to keep shrinking it if you have REGULAR tlog backups going. Here are the steps:

1. Increase the tlog backups to AT LEAST every hour. I advise every 15 minutes.
2. Wait for 1-2 tlog backups to run.
3. Shrink the LDF file down with DBCC SHRINKFILE.
4. Monitor the file size over a few days.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
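A frequent log-backup job step of the kind recommended here might be sketched as below. The path and file-name pattern are assumptions; a SQL Agent job scheduled every 15 minutes would run this statement each time.

```sql
-- Sketch of a recurring log-backup step with a timestamped file name,
-- assuming D:\Backups exists (adjust the path for your environment).
DECLARE @file nvarchar(260);
SET @file = N'D:\Backups\Mirth_log_'
          + REPLACE(REPLACE(REPLACE(
                CONVERT(nvarchar(19), GETDATE(), 120),  -- yyyy-mm-dd hh:mi:ss
                '-', ''), ':', ''), ' ', '_')
          + N'.trn';
BACKUP LOG Mirth TO DISK = @file;
```

Each run produces a separate .trn file, which keeps the log-backup chain intact for point-in-time restores.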
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-10 : 14:29:25
|
Great! Thank you very much.

Greg |
|
|
jackv
Master Smack Fu Yak Hacker
2179 Posts |
Posted - 2013-09-11 : 01:38:43
|
The first step is some capacity planning. Any transaction log decision depends on the characteristics of the environment, so monitoring the nature of transactions and growth rates is essential to deciding on the size of the transaction log.

Jack Vamvas
--------------------
http://www.sqlserver-dba.com |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-18 : 19:37:20
|
We changed to a 15-minute tlog backup last week. Today I was going to use SHRINKFILE on the log, but before I did I ran the script again to see what the unused space was. We are now down to 44393.53 MB of unused space. On 9/10, when I started this thread, it was at 51193.03 MB. That means the used space is growing by roughly 850 MB a day! Increasing the tlog backups had no effect on the growth of used space; I assumed it would. I could do the SHRINKFILE now, but at this rate the file will soon be back over 200 GB and growing, and then SHRINKFILE won't do anything. Is this normal?

Greg |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2013-09-18 : 23:20:36
|
What does the log_reuse_wait_desc say for the database in sys.databases?

Also, can you post the script you are using to view the unused-space info? I want to make sure it's correct.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/ |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-19 : 11:24:14
|
With select log_reuse_wait_desc from sys.databases I get:

NOTHING
NOTHING
LOG_BACKUP
NOTHING
NOTHING
NOTHING
LOG_BACKUP
NOTHING
NOTHING
NOTHING
REPLICATION

The script I use to get the file information is:

Use Mirth
-- Show Size, Space Used, Unused Space, Growth Percent or GrowthMB, and Name of all database files
select [FileSizeMB]    = convert(numeric(10,2), round(a.size/128., 2)),
       [UsedSpaceMB]   = convert(numeric(10,2), round(fileproperty(a.name, 'SpaceUsed')/128., 2)),
       [UnusedSpaceMB] = convert(numeric(10,2), round((a.size - fileproperty(a.name, 'SpaceUsed'))/128., 2)),
       GrowthPct = case when a.status & 0x00100000 = 1048576 then a.growth else null end,
       GrowthMB  = convert(int, round(case when a.status & 0x00100000 = 1048576 then null else a.growth/128.000 end, 0)),
       [DBFileName] = a.name
from sysfiles a

Greg |
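As a side note, sysfiles is a deprecated compatibility view on SQL Server 2005 and later. A sketch of the same information using the per-database catalog view sys.database_files (column names here match the original script; the status bit test is replaced by the is_percent_growth flag):

```sql
-- Sketch: equivalent file-space report via sys.database_files,
-- which exposes growth mode directly instead of via status bits.
USE Mirth;
SELECT [DBFileName]    = name,
       [FileSizeMB]    = CONVERT(numeric(10,2), ROUND(size/128., 2)),
       [UsedSpaceMB]   = CONVERT(numeric(10,2), ROUND(FILEPROPERTY(name, 'SpaceUsed')/128., 2)),
       [UnusedSpaceMB] = CONVERT(numeric(10,2), ROUND((size - FILEPROPERTY(name, 'SpaceUsed'))/128., 2)),
       GrowthPct = CASE WHEN is_percent_growth = 1 THEN growth END,
       GrowthMB  = CASE WHEN is_percent_growth = 0 THEN growth/128 END
FROM sys.database_files;
```

Sizes are stored in 8 KB pages, hence the division by 128 to get MB.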
|
|
James K
Master Smack Fu Yak Hacker
3873 Posts |
Posted - 2013-09-19 : 11:59:04
|
Add the name column also, so you can see which database has which status. I wouldn't worry about the system databases, which are probably the first four.

select name, log_reuse_wait_desc from sys.databases

Of the remaining, I see one is held up because of replication. Is your replication working correctly? If it is broken, you should fix that. If it is the database with the LOG_BACKUP status, and assuming you are taking log backups, you wouldn't happen to be using the copy_only option, would you? |
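To illustrate the copy_only question: a COPY_ONLY log backup (available from SQL Server 2005 on) deliberately does not truncate the log, so a database backed up only that way would stay stuck at LOG_BACKUP. The path below is a placeholder.

```sql
-- Sketch, assuming a hypothetical backup path. A COPY_ONLY log backup
-- preserves the existing log chain but does NOT free space inside the
-- log file, so it cannot substitute for regular log backups.
BACKUP LOG Mirth
    TO DISK = N'D:\Backups\Mirth_copyonly.trn'
    WITH COPY_ONLY;
```

If the maintenance plan's backups were copy-only, switching to ordinary BACKUP LOG statements would let the log start reusing space.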
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2013-09-19 : 12:23:36
|
To add to what James said: if the database in question is the one that says REPLICATION, check Replication Monitor and let us know its status. It's likely in a failed state. Both replication and mirroring hold up log truncation, as those technologies get the data to be replicated/mirrored from the tlog.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/ |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-19 : 12:56:44
|
The database in question is the last one in the list, the one with REPLICATION. It is a subscriber of 3 tables replicated from a database on the same SQL Server instance. As far as I can tell it seems to be fine. When I choose 'View Synchronization Status' it says 'Started Successfully' and 'Synchronization in Progress'. However, when I choose Properties, the Status box at the bottom is empty. There is an icon next to it, but I can't make out what it is.

Greg |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2013-09-19 : 13:22:58
|
The tlog of the subscriber database should not be impacted. Expand the Replication folder in SSMS and then Local Publications. Do you see one that starts with the problem database's name?

To check the health of replication, right-click on the Replication folder in SSMS and select Launch Replication Monitor. Then expand My Publishers and then your SQL instance. Do you see a red X? Click on the publication and then on the subscriptions. Check out the last tab. Is there a backlog?

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/ |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-19 : 14:12:59
|
Status is OK. Performance is Excellent. There are no warnings or errors.

However, I just found out from our net admin that the tlog backups have been failing (his words). We had changed them to every fifteen minutes, but he thinks the log is too big. He changed it to every two hours.

Greg |
|
|
James K
Master Smack Fu Yak Hacker
3873 Posts |
Posted - 2013-09-19 : 14:40:45
|
How is the net admin running the transaction log backups? If he is running them via SQL Agent, look in the agent job history to see whether and why they are failing. If he is using an external scheduler, that also should give an indication of why they are failing.

By the way, and forgive me for asking this, but how is he doing the transaction log backup? I am asking only because a network administrator (rather than a database administrator) seems to be involved. I hope he understands that when we talk about a transaction log backup, we are not talking about taking a backup copy of the physical log file in the Windows OS. |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2013-09-19 : 15:00:07
|
Changing it to every 2 hours makes no sense if he is concerned about the size. The more frequently it runs, the SMALLER each backup will be.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/ |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-09-19 : 17:27:54
|
It was running only once a day (maybe twice). He speculated that the backups were taking too long, and that is why the every-fifteen-minutes schedule was failing. I will work with him and keep you posted. I really appreciate your help.

Greg |
|
|
GregDDDD
Posting Yak Master
120 Posts |
Posted - 2013-10-25 : 12:11:22
|
Finally got this resolved. The theory is that we had an uncommitted transaction in the distant past, and the log kept growing and growing to accommodate it. We did a full backup first, then detached the database, renamed the log file, and reattached the database. We essentially abandoned the ldf file, which had continued to grow at a rate of about 1 GB a day. We now do hourly tlog backups and the ldf file stays under 4 GB.

Greg |
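The detach/rename/reattach approach described here can be sketched as below. The physical path is an assumption, and this should only ever be attempted after a verified full backup, since abandoning the log file discards anything in it.

```sql
-- Sketch of detaching and reattaching with a rebuilt log (SQL 2005+).
-- WARNING: take and verify a full backup first; this discards the old log.
EXEC sp_detach_db @dbname = N'Mirth';
-- ...rename or move the old .ldf file at the OS level, then:
CREATE DATABASE Mirth
    ON (FILENAME = N'D:\Data\Mirth.mdf')  -- assumed physical path
    FOR ATTACH_REBUILD_LOG;               -- creates a fresh, small log file
```

ATTACH_REBUILD_LOG only succeeds if the database was cleanly shut down; otherwise SQL Server refuses to attach without the original log.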
|
|
James K
Master Smack Fu Yak Hacker
3873 Posts |
Posted - 2013-10-25 : 12:25:28
|
Thanks for posting back. Since you have resolved the issue, this is probably only of academic interest, but you can run DBCC OPENTRAN to see open transactions. The output will show the time when the transaction was opened.

I would have thought that killing the spid or a restart of the server would have fixed the issue, if it indeed was an open transaction causing the problem. |
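For anyone hitting this thread later, checking for the long-open transaction mentioned above might look like this (the SPID shown is hypothetical):

```sql
-- Report the oldest active transaction in the named database:
DBCC OPENTRAN ('Mirth');
-- If it shows a long-open SPID, inspect what it was running before acting:
-- DBCC INPUTBUFFER (53);   -- 53 is a hypothetical SPID from the output
-- KILL 53;                 -- last resort; the transaction will roll back
```

A subsequent log backup should then be able to truncate the portion of the log the transaction had been pinning.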
|
|