
 Strange Error at End of Log Restore


BobRoberts
Posting Yak Master

109 Posts

Posted - 2012-01-18 : 17:42:45
I have been trying to shrink a log file that has grown to 27 GB. The disk it sits on has only 117 MB free, and that's only because of a recent effort to clear up disk space. The database itself is 491 MB. I tried this process on a smaller database and it worked (I had problems and resolved them through a prior post to this forum). Now I'm trying to do the same thing: back up the database twice, then back up the log with the "truncate the transaction log" option. If it works like the previous effort, the process should then change the log_reuse_wait column in sys.databases from 2 to 0, and the log_reuse_wait_desc column from LOG_BACKUP to NOTHING. The plan was then to run DBCC SHRINKFILE on it.
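
For reference, the check I'm using looks something like this (with 'exampledatabase' standing in for the real database name):

-- Why can't the log space be reused yet?  2 / LOG_BACKUP = waiting on a log backup
SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'exampledatabase';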

The problem is where to back up the log to. I ended up mapping drive Z: (via net use) to another server that has room, so that SQL 2008 could back up to it. Backing up the actual database goes fine, but when I back up the log, it runs for about 28 minutes, reaches 100%, sits there for several minutes, and then shows an error box which says:

"Backup failed for Server 'SERVERNAME'. (Microsoft.SqlServer.SmoExtended)

System.Data.SqlClient.SqlError: The operating system returned the error '64(The specified network name is no longer available.)' while attempting 'FlushFileBuffers' on 'Z:\exampledatabase backup log.bak'. (Microsoft.SqlServer.Smo)"
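
The statement behind this, reconstructed from the file name in the error text, is essentially:

-- Back up the transaction log to the mapped network drive
BACKUP LOG exampledatabase
TO DISK = 'Z:\exampledatabase backup log.bak';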

This is the second time I have tried; the first time, the message was presented a little differently:

"Processed 3367677 pages for database 'exampledatabase', file 'baseline_log' on file 1.
Msg 3634, Level 16, State 2, Line 1
The operating system returned the error '64(The specified network name is no longer available.)' while attempting 'FlushFileBuffers' on 'Z:\exampledatabase_log.bak'.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally"

The physical log file does seem to get backed up to drive Z:. However, the log_reuse_wait columns in sys.databases are not changed, so I assume that running DBCC SHRINKFILE would have no effect. I'm inclined to think it's some kind of network timeout related to the size of the log: keep in mind I did the very same thing to the Z: drive with a similar but smaller database, and it was able to clear the sys.databases log_reuse_wait columns. That database's log was "only" 395 MB.

I wonder if mapping a large USB drive and trying to back up to that would make a difference.

BobRoberts
Posting Yak Master

109 Posts

Posted - 2012-01-18 : 17:58:57
I wish to add that one solution, a radical one, may be to change the recovery model from FULL to SIMPLE. If that is done, how does one shrink the 27 GB log file then? Can you delete the physical log file, or just run the standard DBCC SHRINKFILE on it?

Also, would the following route I found recommended elsewhere be less radical but effective in reducing the size of the log file?

USE DatabaseName
GO
-- Initial shrink attempt (target size is in MB)
DBCC SHRINKFILE(<TransactionLogName>, 1)
-- Discard log records without backing them up
-- (note: WITH TRUNCATE_ONLY was removed in SQL Server 2008)
BACKUP LOG <DatabaseName> WITH TRUNCATE_ONLY
-- Shrink again now that the log has been truncated
DBCC SHRINKFILE(<TransactionLogName>, 1)
GO

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2012-01-18 : 18:19:15
If you back up to a network location, use a UNC path instead of a mapped drive letter.
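
For example (the server and share names here are placeholders; the SQL Server service account needs write access to the share):

-- Back up the log straight to a UNC path
BACKUP LOG exampledatabase
TO DISK = '\\BackupServer\SqlBackups\exampledatabase_log.bak';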

If you don’t care about the contents of the log file, do this:
1. Set the db to simple recovery and then set it back to full recovery to clear the transactions (see the sketch below).
2. Shrink the log file to a reasonable size, like 1 GB.
3. Set up a maintenance plan job to do a scheduled full database backup every day, and delete the full backups after 3 days. Kick the full backup job off manually as soon as it is set up.
4. Set up a maintenance plan job to do transaction log backups every 15 minutes, 24x7, to keep the size of the transaction log file under control. Set it to delete the transaction log backups after 3 days.
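
A rough T-SQL sketch of the recovery-model and shrink steps (baseline_log is the logical log file name from the error message earlier in the thread; the backup path is a placeholder):

USE exampledatabase;
GO
-- 1. Break the log chain; the log records waiting on a backup become reclaimable
ALTER DATABASE exampledatabase SET RECOVERY SIMPLE;
-- 2. Shrink the log file to roughly 1 GB (target size is in MB)
DBCC SHRINKFILE (baseline_log, 1000);
-- 3. Return to full recovery; a full backup then restarts the log backup chain
ALTER DATABASE exampledatabase SET RECOVERY FULL;
BACKUP DATABASE exampledatabase
TO DISK = '\\BackupServer\SqlBackups\exampledatabase_full.bak';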

And read the article Gail Shaw posted a link to on your other thread. It has a very good explanation of Managing Transaction Logs.
http://www.sqlservercentral.com/articles/64582/



CODO ERGO SUM

Kristen
Test

22859 Posts

Posted - 2012-01-19 : 04:21:14
Shrinking it is one thing, maintaining it "smaller" is another.

Your log file is the size it is because it needed to be: it grew to that size at some point.

Possibly it grew because of a one-off huge transaction - a massive delete, for example. However, it may well be because of a recurring task, and that task will grow the log back again the next time it runs.

My recommendation in these situations is to check the size of your LOG backup files. Hopefully you run them frequently - every 15 minutes is good, every hour is less good, and once a day is bad. Check the sizes of the log backups, and the times-of-day / days-of-week when large ones are generated. Most commonly it is the index rebuild task that causes the largest log backups.
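
A quick way to check is to query the backup history in msdb; something like this lists the largest log backups from the last week:

-- Log backup sizes (MB) for the past 7 days, largest first
SELECT database_name, backup_start_date,
       backup_size / 1048576 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'L'   -- 'L' = transaction log backup
  AND backup_start_date > DATEADD(day, -7, GETDATE())
ORDER BY backup_size DESC;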

If your largest log files are much smaller than your LDF file then shrinking may help. If not then shrinking will only be temporary until the next large transaction.

We have a separate backup task which backs up the LOG file of our active databases every 2 minutes whilst the index rebuild task runs. We also rewrote a housekeeping purge routine, which deletes old, stale data, to work in small batches so that the normal log backups can clear the log file as it goes, whereas before it was a single monolithic transaction.
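
As a sketch of the batching idea (the table and retention period here are made up for illustration):

-- Purge stale rows in small batches so each batch commits separately,
-- letting the regular log backups free the log space in between
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.AuditHistory
    WHERE CreatedDate < DATEADD(month, -6, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to purge
END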

Note that repeatedly shrinking and growing your database will cause both fragmentation of the physical files and loss of the "tidy" alignment of data within them - e.g. an index rebuild will make a neat & tidy, contiguous arrangement of the index; shrinking the database will force that index to be split across whatever pages are available earlier in the file.

Shrinking a file because of a one-off large transaction is fine, but do not shrink it smaller than necessary. Monitor the size for a week or two after shrinking; if it grows, then that is the size it needs to be, so next time don't shrink it any smaller than that. If the size it grows to is unacceptable, look at more frequent log backups and at changing any processes that generate massive transactions (which you can detect from abnormally large files among your log backups, or by putting an alert on the growth of the log file).

BobRoberts
Posting Yak Master

109 Posts

Posted - 2012-01-19 : 09:48:25
I used a mapped drive because the backup dialog would not accept a UNC path - I think because it demanded a password.

I found out that BACKUP LOG WITH TRUNCATE_ONLY won't work in SQL 2008, and that changing the recovery model to SIMPLE is the SQL 2008 equivalent. I did that, ran DBCC SHRINKFILE on the log, and it's cut the size of the file on disk from 27 GB to 10 MB (I probably should have made it 1 GB, however). I'll keep a watch on it and explore the additional advice above.

Thanks everyone for your help.

Kristen
Test

22859 Posts

Posted - 2012-01-19 : 12:18:43
" it's cut the size of the file on disk from 27 GIG to 10 MB "

Record each day what size it has grown back to; that will show whether there are any sudden jumps in size (obviously on day 1 it's going to have grown a bit!).
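
Something like this, run in the database each day, will give you the number to record (sys.database_files reports sizes in 8 KB pages):

-- Current log file size in MB
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';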

If it is going to be several GB, you ought to pre-create it in a special way to reduce the number of VLFs it uses, which will help performance. We'll worry about "how" once you see what size it grows back to.
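
In the meantime, you can see how many VLFs the log currently has with DBCC LOGINFO, which returns one row per VLF - a few dozen is fine, thousands hurt:

DBCC LOGINFO ('exampledatabase');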