Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
steelkilt
Constraint Violating Yak Guru
255 Posts
Posted - 2002-11-05 : 11:14:11
Using SHRINKFILE combined with "truncate log on checkpoint", I have run CHECKPOINT, and my log file will not shrink from the 256 MB its original creators for some reason reserved for it down to the 15 MB I've set as its "shrunk" size (the log currently occupies 7 MB of space).

I understand that the shrinking of log files does not happen immediately, but I'm eager to shrink this baby because we're doing a server reconfig, and when I detach the DB I want it to be as small as possible for fast copying over the network.

BOL suggests: "If the active portion of the transaction log is at the end of the transaction log file, the file cannot be shrunk. However, as soon as the active portion of the transaction log is moved to the beginning of the file, the transaction log file can be shrunk."

So, how do I move the active portion of my log to the beginning of the file?

thx
robvolk
Most Valuable Yak
15732 Posts
Posted - 2002-11-05 : 11:16:57
BACKUP LOG myDatabase WITH NO_LOG

Also search SQL Team for "truncate log"; there are a couple of articles that describe a thorough process for shrinking the log files.
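To show where robvolk's command fits, here is a hedged sketch of the usual two-step sequence (the database name myDatabase and logical log file name myDatabase_Log are illustrative, not from the thread; WITH NO_LOG is SQL Server 7.0/2000-era syntax):

```sql
-- Step 1: truncate the inactive portion of the log without backing it up.
BACKUP LOG myDatabase WITH NO_LOG

-- Step 2: physically shrink the log file to a target size in MB.
-- The logical file name is an assumption; you can find yours with:
--   select name from myDatabase..sysfiles where status & 0x40 = 0x40
USE myDatabase
DBCC SHRINKFILE ( myDatabase_Log, 15 )
```

If the active portion of the log still sits at the end of the file, step 2 may not reach the target; that is the situation the procedure below is designed to work around.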
franco
Constraint Violating Yak Guru
255 Posts
Posted - 2002-11-05 : 11:22:09
You can use a wonderful stored procedure by A. Zanevsky to shrink your t-log, or you can detach your db, delete the *.ldf file, and re-attach the db through sp_attach_single_file_db. This will create a new, small t-log for you.

Here is the procedure:

use master
go
if object_id( 'sp_force_shrink_log' ) is not null
    drop proc sp_force_shrink_log
go
create proc sp_force_shrink_log
--------------------------------------------------------------------------------
-- Purpose: Shrink transaction file log of the current database in Microsoft
--          SQL Server 7.0 or 2000. The same code works on both platforms.
--
-- Author:  Andrew Zanevsky, AZ Databases, Inc., zanevsky@azdatabases.com
--
-- Version: Original: 12/25/1999. Current: v8 - 12/17/2001 (2000-compatible)
--          v8 fixes a bug in handling databases with multiple log files
--
-- License: Freeware. No license fees for use or re-distribution.
--          The code can only be re-distributed in its entirety, unaltered,
--          including all comments. Any alterations may constitute a violation
--          of author's copyrights.
--
--          The author and his company make no guarantees and take no
--          responsibility for any changes to your system caused by this
--          software. Use at your own risk.
--
-- From the author:
--          I have applied my best knowledge and professional judgement while
--          creating this code. I have considered comments from multiple users
--          of earlier version of the procedure and made several modifications
--          as a result. The procedure has been very popular. Feel free to email
--          your comments to zanevsky@azdatabases.com. I don't promise to reply
--          to everyone, but I will make a reasonable effort to do so,
--          especially if you encounter a problem with my procedure.
--          I have seen a post by Kalen Delaney that SQL Server 2000 handles
--          log truncation much better than 7.0 and it is no longer necessary to
--          move the active portion of the log in order to shrink it. But I have
--          received a report from one user who was unable to shrink the log in
--          SQL Server 2000 using BACKUP LOG followed by DBCC SHRINKFILE, so I
--          made my procedure compatible with SQL Server 2000. Enjoy!
--------------------------------------------------------------------------------
    @target_percent tinyint        = 0,
    @target_size_MB decimal(15,2)  = 10,
    @max_iterations int            = 1000,
    @backup_log_opt nvarchar(1000) = 'with truncate_only'
as

set nocount on
set rowcount 0

declare @ver smallint
set @ver = case when @@version like 'Microsoft SQL Server 7.00%' then 7 else 8 end

declare @db         sysname,
        @last_row   int,
        @log_size   decimal(15,2),
        @unused1    decimal(15,2),
        @unused     decimal(15,2),
        @shrinkable decimal(15,2),
        @iteration  int,
        @file_max   int,
        @file       int,
        @fileid     varchar(5)

select @db = db_name(), @iteration = 0

create table #loginfo7 (
    id          int identity,
    FileId      int,
    FileSize    numeric(22,0),
    StartOffset numeric(22,0),
    FSeqNo      int,
    Status      int,
    Parity      smallint,
    CreateTime  datetime )

create table #loginfo8 (
    id          int identity,
    FileId      int,
    FileSize    numeric(22,0),
    StartOffset numeric(22,0),
    FSeqNo      int,
    Status      int,
    Parity      smallint,
    CreateLSN   varbinary(48) )

create unique clustered index loginfo_FSeqNo on #loginfo7 ( FSeqNo, StartOffset )
create unique clustered index loginfo_FSeqNo on #loginfo8 ( FSeqNo, StartOffset )

create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
exec( 'insert #logfiles ( fileid )
       select convert( varchar, fileid ) from [' + @db + ']..sysfiles
       where status & 0x40 = 0x40' )
select @file_max = count(*) from #logfiles

if object_id( 'table_to_force_shrink_log' ) is null
    exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

if @ver = 7 begin
    insert #loginfo7 ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime )
        exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount
    select @log_size   = sum( FileSize ) / 1048576.00,
           @unused     = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
           @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from   #loginfo7
end
else begin
    insert #loginfo8 ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN )
        exec ( 'dbcc loginfo' )
    select @last_row = @@rowcount
    select @log_size   = sum( FileSize ) / 1048576.00,
           @unused     = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
           @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
    from   #loginfo8
end

select @unused1 = @unused -- save for later

select 'iteration'          = @iteration,
       'log size, MB'       = @log_size,
       'unused log, MB'     = @unused,
       'shrinkable log, MB' = @shrinkable,
       'shrinkable %'       = convert( decimal(6,2), @shrinkable * 100 / @log_size )

while @shrinkable * 100 / @log_size > @target_percent
  and @shrinkable > @target_size_MB
  and @iteration < @max_iterations begin

    select @iteration = @iteration + 1 -- this is just a precaution

    exec( 'insert table_to_force_shrink_log select name from sysobjects
           delete table_to_force_shrink_log' )

    select @file = 0
    while @file < @file_max begin
        select @file = @file + 1
        select @fileid = fileid from #logfiles where id = @file
        exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
    end

    exec( 'backup log [' + @db + '] ' + @backup_log_opt )

    if @ver = 7 begin
        truncate table #loginfo7
        insert #loginfo7 ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime )
            exec ( 'dbcc loginfo' )
        select @last_row = @@rowcount
        select @log_size   = sum( FileSize ) / 1048576.00,
               @unused     = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
               @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
        from   #loginfo7
    end
    else begin
        truncate table #loginfo8
        insert #loginfo8 ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN )
            exec ( 'dbcc loginfo' )
        select @last_row = @@rowcount
        select @log_size   = sum( FileSize ) / 1048576.00,
               @unused     = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
               @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
        from   #loginfo8
    end

    select 'iteration'          = @iteration,
           'log size, MB'       = @log_size,
           'unused log, MB'     = @unused,
           'shrinkable log, MB' = @shrinkable,
           'shrinkable %'       = convert( decimal(6,2), @shrinkable * 100 / @log_size )
end

if @unused1 < @unused
    select 'After ' + convert( varchar, @iteration ) +
           ' iterations the unused portion of the log has grown from ' +
           convert( varchar, @unused1 ) + ' MB to ' + convert( varchar, @unused ) + ' MB.'
    union all
    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
    union all
    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
    union all
    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10
else
    select 'It took ' + convert( varchar, @iteration ) +
           ' iterations to shrink the unused portion of the log from ' +
           convert( varchar, @unused1 ) + ' MB to ' + convert( varchar, @unused ) + ' MB'

exec( 'drop table table_to_force_shrink_log' )
go

HTH
Franco
steelkilt
Constraint Violating Yak Guru
255 Posts
Posted - 2002-11-05 : 16:43:34
Franco,

Thanks for the sproc. Any tips on how to feed it with parameters, e.g. which log to shrink, the size of the shrunken log, etc.?

thx
franco
Constraint Violating Yak Guru
255 Posts
Posted - 2002-11-06 : 02:16:07
Run the script from QA; you will end up with one more stored procedure in your master DB. When you need it, select the database you want in QA and execute the procedure. That's all.

Regards,
Franco
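To make the parameters concrete, here is a hedged example invocation. The parameter names and defaults come from the procedure's definition above; the database name and the specific values are illustrative only:

```sql
-- The procedure always works on the current database, so switch first.
USE myDatabase   -- hypothetical database name

-- All parameters are optional; the defaults are:
--   @target_percent = 0, @target_size_MB = 10,
--   @max_iterations = 1000, @backup_log_opt = 'with truncate_only'
EXEC sp_force_shrink_log
     @target_size_MB = 15,   -- stop once shrinkable log space is at or below 15 MB
     @max_iterations = 200   -- give up after 200 shrink/backup passes
```

Each iteration prints the current log size, unused space, and shrinkable space, so you can watch the progress and judge whether more iterations would help.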
frdba
Starting Member
1 Post
Posted - 2005-06-14 : 12:50:02
I've created the sp using the script below, and now when I try to execute it I get the following error:

Server: Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type datetime.

We are trying to run this on a SQL 2000 database. Any ideas?

Thanks
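One hedged guess at a diagnostic (speculation, not a confirmed fix): the datetime in the error message suggests the procedure took its SQL Server 7.0 branch, whose #loginfo7 table expects a CreateTime datetime column from DBCC LOGINFO, on a server that actually returns the 2000-style CreateLSN column. The procedure chooses the branch from @@version, so you can check what it detects:

```sql
-- The procedure branches on this exact test; on SQL Server 2000 it
-- should evaluate to 8 and use the CreateLSN (varbinary) layout.
select @@version,
       case when @@version like 'Microsoft SQL Server 7.00%' then 7 else 8 end as detected_ver

-- Compare against the columns DBCC LOGINFO actually returns here:
dbcc loginfo
```

If detected_ver and the DBCC LOGINFO column layout disagree, that mismatch would explain the conversion error.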
KLang23
Posting Yak Master
115 Posts
Posted - 2005-06-14 : 14:06:27
Hi,

Just FYI: because the SQL log is essentially "circular" (it will wrap around to the beginning if there are no active or waiting-to-be-backed-up transactions), and because it can extend (if there are active or waiting-to-be-backed-up transactions at the front), the internal design allocates "virtual" chunks. A "shrink" of the log generally does not violate the virtual chunk boundary, which can be large depending on how the log has been extended or "grown". That's why DBCC SHRINKFILE sometimes does not shrink an apparently "empty" log down to 2 MB.

The procedures that have been written to help shrink down the log (there are a few floating around) essentially create a dummy table in the database with an 8K row size and keep inserting a measured number of rows, trying to force the log to wrap around to the beginning, truncate the log, and shrink it. Once forced around to the beginning, the virtual chunk size can be reduced, and, if executed over and over again, the log can eventually be reduced to 1-2 MB.

In a high-activity system, you probably want the log to be of a sufficient size with sufficiently sized virtual chunks. It's nice to have a seemingly neat-and-tidy house (log), but don't overdo it at the expense of having to enlarge it again under load.

Cheers.
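The virtual chunks KLang23 describes (virtual log files) can be inspected directly; a minimal sketch, run in the database whose log you are shrinking:

```sql
-- One row per virtual log file (VLF) in the current database's log.
-- Status = 0 means the VLF is reusable; a nonzero Status (2) means it
-- is active. The file can only be shrunk back to the last active VLF,
-- which is why the wrap-around trick in this thread matters.
dbcc loginfo
```

Watching this output before and after each BACKUP LOG / DBCC SHRINKFILE pass shows whether the active VLFs have moved toward the front of the file.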