
 huge Log files


khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-02 : 03:53:36


hi

I have SQL Server 2000.

I am working on a telecom project where I load huge volumes of call data and then do the rating. The problem is that my transaction log file grows to 14 GB and my server slows down. Currently I stop my process now and then and try to truncate the log file and shrink the database, but I don't see any change in size.

Is there any way to stop this log file from being created, or to keep it small?

Does the log file size affect performance?

Thanks in advance



======================================
Ask to your self before u ask someone

Nazim
A custom title

1408 Posts

Posted - 2002-04-02 : 04:00:55
You can enable the "Truncate log on checkpoint" option through EM, or look up sp_dboption in BOL to play with the database options.
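In T-SQL the same option can be set with sp_dboption; a sketch, where the database name MyDB is just a placeholder:

```sql
-- Equivalent of checking "Truncate log on checkpoint" in EM.
-- With this option ON the inactive log is discarded at each
-- checkpoint, but you can no longer take log backups.
EXEC sp_dboption 'MyDB', 'trunc. log on chkpt.', 'true'
```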




--------------------------------------------------------------

anees6_mind
Starting Member

3 Posts

Posted - 2002-04-02 : 04:27:45
The best thing you can do, on a regular basis at night when there are no transactions, is take the databases with huge log files offline:
1. In Enterprise Manager, right-click the database name and detach the database.
2. Delete the log file.
3. In Enterprise Manager, right-click the Databases node, choose Attach Database, and attach the .MDF file. When SQL Server cannot find the log file, it will ask whether it can create a new one; click Yes and your work is done.
This is one of the fastest ways of clearing huge log files.
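The same detach/delete/reattach steps can also be scripted; a sketch, where the database name and file path are placeholders you would change:

```sql
-- Run from master, with no one else using the database.
USE master
GO
-- Step 1: detach (same as the EM right-click > Detach).
EXEC sp_detach_db 'MyDB'
GO
-- Step 2: delete MyDB_Log.ldf at the operating-system level.
-- Step 3: reattach using only the data file; SQL Server
-- builds a fresh, minimum-size log for you.
EXEC sp_attach_single_file_db 'MyDB', 'C:\MSSQL\Data\MyDB.mdf'
GO
```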



khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-02 : 05:00:45


Thanks a lot.

My system is only for billing, so once a month I load the data from files and do the billing; after that it is retrieval only. My file-loading job runs for 5-8 hours, around 720 files (10-15 MB each). So there is no point in truncating the log at night, and if I stop my process after a few files and truncate the log through EM, it has no effect. I tried it at least 10 times, but no effect.

On some other database, as a trial, I have seen that only repeated truncate-log operations reduce the size. Is that so? Correct me, and suggest a better solution.

Thanks

======================================
Ask to your self before u ask someone

Nazim
A custom title

1408 Posts

Posted - 2002-04-02 : 07:46:19
khalik, you can follow anees's post to get a new log at minimum size, then set the "Truncate log on checkpoint" option. That should take care of your log for future transactions.

I would suggest you read BOL for further information on logs.

HTH

--------------------------------------------------------------


Edited by - Nazim on 04/02/2002 07:49:06

khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-03 : 05:55:54


Thanks Nazim and anees.
I am not a DBA, so I may ask silly questions; please bear with me.
What both of you suggested is fine when the log file grows, but in my case I have to run my job for 5-8 hours, so I cannot break or stop the process partway.
My database properties:
"Truncate log on checkpoint" is checked
"Auto shrink" is checked
but it does not help me.

And one more issue: I don't have a backup and I lost some data. How can I make use of the transaction log to roll back or restore the data?



======================================
Ask to your self before u ask someone

izaltsman
A custom title

1139 Posts

Posted - 2002-04-03 : 08:57:34
quote:

"Truncate log on checkpoint" is checked
"Auto shrink" is checked
but it does not help me.



Are you doing everything inside one huge transaction? If so, I would strongly suggest rewriting your code to batch things up and commit each batch separately. SQL Server cannot truncate portions of the log that contain uncommitted transactions.
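A sketch of that batching pattern, with table and column names invented for illustration: instead of one statement touching millions of rows, walk the table in slices that each commit on their own.

```sql
-- Rate calls in slices of 10,000 rows by key, so each slice is a
-- separate committed transaction and its log space can be reused.
DECLARE @min INT, @max INT, @last INT
SELECT @min = MIN(CallID), @last = MAX(CallID) FROM Calls
WHILE @min <= @last
BEGIN
    SET @max = @min + 9999
    BEGIN TRANSACTION
    UPDATE Calls
    SET Charge = Duration * Rate
    WHERE CallID BETWEEN @min AND @max
    COMMIT TRANSACTION     -- committed work frees log space for reuse
    SET @min = @max + 1
END
```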

quote:

I don't have a backup and I lost some data. How can I make use of the transaction log to roll back or restore the data?



If you had your log set to truncate on checkpoint, then you are screwed! Be sure to perform backups regularly from now on.
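A minimal backup routine from here on might look like this sketch; the database name and paths are placeholders, and note that log backups require the "trunc. log on chkpt." option to be OFF:

```sql
-- Periodic full backup (e.g. after each monthly load).
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_Full.bak'

-- Frequent log backups in between; each one also truncates the
-- inactive portion of the log so the file stops growing.
BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_Log.bak'
```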

---------------
Strong SQL Developer wanted in the Boston area. Please e-mail if interested.

Edited by - izaltsman on 04/03/2002 08:58:24

khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-04 : 01:19:58


Thanks izaltsman.

I have a huge database with several inserts and updates, and no explicit transactions (no BEGIN and COMMIT TRANSACTION).

The table has 2 million rows, and updating one field generates 120 MB of log; I have several such updates. This batch runs for 5-8 hours and I cannot stop it.

The other thing I asked was how I can use the transaction log. I know that if I don't have a complete backup and my log is truncated, I am screwed!

What I want to know is: how can I make use of the log file? How is the log file useful?

======================================
Ask to your self before u ask someone

khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-04 : 02:57:17


Can anyone please help me out... waiting, waiting.

The developers' zone is good, we get fast replies;
the administrators are slow...


======================================
Ask to your self before u ask someone

Nazim
A custom title

1408 Posts

Posted - 2002-04-04 : 03:15:48
The transaction log helps keep the data consistent by guaranteeing atomicity. If the system crashes, or you are stopped in the middle of one or more transactions, then when the SQL Server service restarts it performs automatic recovery: it scans each database's transaction log and determines which transactions to complete, or "roll forward," and which to cancel, or "roll back."






--------------------------------------------------------------

khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-04 : 04:08:56


Sorry about being impatient...

So all of that is done by SQL Server; can we control it? And to be clear: when I run a batch and I am using transactions, is this what happens? Correct me if I am wrong.

And the biggest problem is the huge log file. How the hell can I stop it growing? It is growing by gigabytes.
I read somewhere that we need to truncate and shrink the log several times before SQL Server actually does it. Correct me if that's wrong, and if it's right, why is it so?

Thanks

======================================
Ask to your self before u ask someone

Nazim
A custom title

1408 Posts

Posted - 2002-04-04 : 07:57:27
As izaltsman suggested, you have to work on the overall design of your system. Most importantly, you should divide your work into batches. You can use SET ROWCOUNT to split your transactions into reasonably sized batches; as each batch commits, the log space it used can be truncated.
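A sketch of the SET ROWCOUNT approach, with the table and column names invented for illustration:

```sql
-- Update in batches of 10,000 rows; loop until nothing is left.
SET ROWCOUNT 10000
DECLARE @done INT
SET @done = 0
WHILE @done = 0
BEGIN
    UPDATE Calls
    SET Rated = 1
    WHERE Rated = 0
    IF @@ROWCOUNT = 0       -- no rows touched: all batches finished
        SET @done = 1
END
SET ROWCOUNT 0              -- reset so later statements are unaffected
```

Each UPDATE here runs as its own autocommit transaction, so once a batch commits, that portion of the log becomes reusable.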




--------------------------------------------------------------

robvolk
Most Valuable Yak

15732 Posts

Posted - 2002-04-04 : 08:50:27
quote:
Can anyone please help me out... waiting, waiting.

The developers' zone is good, we get fast replies;
the administrators are slow...

Ask to your self before u ask someone

Then you should spend your time in the developer's zone Khalik. Take a look at this thread and you'll see that WE ARE HELPING YOU, as best we can. Ask "to your self" if you've read up on transaction logs in Books Online, because it will answer your question very well and very quickly. Notice how Nazim recommended you check Books Online, in the very first reply. I don't know how you consider that "slow".

And if developer's zone is giving you a response, why are you posting here anyway? Unless their advice doesn't help you...

For future reference, do not put anything in your posts about "waiting for a reply". None of us are paid to help you. We do this out of the kindness of our hearts and our willingness to help and educate others. We are TAKING TIME AWAY FROM OTHER THINGS to help you out. We're not obligated to you in any way. And we can only help those who help themselves. I recommend you take your signature line to heart before you post here on SQL Team.

All of the advice that's in this thread is sound and will alleviate your problem if you follow it. You can also search SQL Team for "transaction log" and you'll get even more advice. Shit, I can even help you with that:

http://www.sqlteam.com/SearchResults.asp?SearchTerms=transaction+log
http://www.sqlteam.com/FilterTopics.asp?TopicID=116


khalik
Constraint Violating Yak Guru

443 Posts

Posted - 2002-04-04 : 23:54:57


hi

Sorry robvolk, no hard feelings. In fact I had gone through BOL and could not make it out, so I posted; when I say something I mean it, and I know it very well. By the way, you guys are doing great; I love SQLTeam, you have helped me a lot, and interactive help is always best, so people prefer forums to BOL.

OK, coming to the problem: one of my friends gave me a script which, when run several times, reduces the log file size. Why is that? Here is the script:

SET NOCOUNT ON
DECLARE @LogicalFileName sysname,
        @MaxMinutes INT,
        @NewSize INT

-- *** MAKE SURE TO CHANGE THE NEXT 4 LINES WITH YOUR CRITERIA. ***
USE talkgroup_0302                        -- the database whose log will be shrunk
SELECT @LogicalFileName = 'Talkgroup_0302_Log',  -- use sp_helpfile to identify
                                          -- the logical file name to shrink
       @MaxMinutes = 10,                  -- limit on time allowed to wrap the log
       @NewSize = 10                      -- target size in MB

-- Set up / initialize.
DECLARE @OriginalSize int
SELECT @OriginalSize = size               -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName
SELECT 'Original Size of ' + db_name() + ' LOG is ' +
       CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages or ' +
       CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName

CREATE TABLE DummyTrans
    (DummyColumn char(8000) NOT NULL)

-- Wrap the log and truncate it.
DECLARE @Counter INT,
        @StartTime DATETIME,
        @TruncLog VARCHAR(255)
SELECT @StartTime = GETDATE(),
       @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY'

-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)

EXEC (@TruncLog)

-- Wrap the log if necessary.
WHILE @MaxMinutes > DATEDIFF(mi, @StartTime, GETDATE())  -- time has not expired
  AND @OriginalSize = (SELECT size FROM sysfiles
                       WHERE name = @LogicalFileName)    -- the log has not shrunk
  AND (@OriginalSize * 8 / 1024) > @NewSize              -- target is smaller than current size
BEGIN  -- outer loop
    SELECT @Counter = 0
    WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
    BEGIN  -- generate log activity to push the active portion forward
        INSERT DummyTrans VALUES ('Fill Log')  -- a char(8000) column inserts 8000 bytes
        DELETE DummyTrans
        SELECT @Counter = @Counter + 1
    END
    EXEC (@TruncLog)  -- see if a truncation of the log shrinks it now
END  -- outer loop
SELECT 'Final Size of ' + db_name() + ' LOG is ' +
       CONVERT(VARCHAR(30), size) + ' 8K pages or ' +
       CONVERT(VARCHAR(30), (size * 8 / 1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName
DROP TABLE DummyTrans
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF




======================================
Ask to your self before u ask someone

chadmat
The Chadinator

1974 Posts

Posted - 2002-04-05 : 04:03:08
This is unnecessary in SQL 2000. It was necessary in SQL 7.0, but 2000 does it in the background for you. Truncating the log and running DBCC SHRINKFILE is sufficient.
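In SQL 2000 that boils down to two statements; the database and logical log file names are placeholders (use sp_helpfile to find yours):

```sql
-- Mark the inactive part of the log reusable without backing it up.
BACKUP LOG MyDB WITH TRUNCATE_ONLY

-- Then shrink the physical log file down to roughly 10 MB.
DBCC SHRINKFILE ('MyDB_Log', 10)
```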

-Chad
