Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 Deleting large amounts of Data Efficiently


AskSQLTeam
Ask SQLTeam Question

0 Posts

Posted - 2004-06-01 : 07:33:09
Pete writes "I'm not sure if there is any way to speed up this process. What I have is a large database table, 6 GB in size, 32 million records.

I am trying to clean it up to improve overall performance; the backup job takes 6-12 hours. Anyway, the delete is predicated on a datetime field, and the current SQL statement is

Delete from calldetain where calltime < '20030810'

I have about 100K to 150K records per day. This still takes 10-12 minutes to run and of course causes hate and discontent among the users, as the locks cause some inserts to the table to time out. Other than running it a day at a time or a month at a time late at night, is there any way to reduce the lock contention or speed up the process?

I have an index on the column calltime and rebuild it nightly. Would writing a T-SQL block that deleted and committed in smaller batches via a cursor improve the operation? I believe it is the log file that is slowing me down.

The delete process seems to average 225 records per second on the daily delete, taking 10-15 minutes to complete. The machine is a dual 2.4 GHz Xeon running RAID 1.

Any help is appreciated. If I am overlooking the obvious, it is because I am growing dumber with every passing day; just ask ex-wife #2 or my boss."

derrickleggett
Pointy Haired Yak DBA

4184 Posts

Posted - 2004-06-01 : 08:31:27
via a cursor

No. Cursors are inefficient, slow, and processor intensive. If you are going to do it in batches, find a set-based way to limit how many records you delete at a time.

The slowness is probably caused by how many indexes you have on the table. If you can run this after hours, you will probably find it faster to drop all the indexes except the one on calltime, run the delete, then add the indexes back.
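A rough sketch of that approach in SQL Server 2000 syntax. The index names and extra columns here are made up for illustration; substitute the actual nonclustered indexes on your table:

```sql
-- Drop the nonclustered indexes other than the one on calltime
-- (hypothetical index names -- use sp_helpindex calldetain to list yours)
DROP INDEX calldetain.IX_calldetain_callerid
DROP INDEX calldetain.IX_calldetain_duration

-- Run the delete while only the calltime index remains,
-- so each deleted row touches far fewer index pages
DELETE FROM calldetain WHERE calltime < '20030810'

-- Recreate the dropped indexes afterwards
CREATE INDEX IX_calldetain_callerid ON calldetain (callerid)
CREATE INDEX IX_calldetain_duration ON calldetain (duration)
```

Rebuilding the indexes afterward takes time too, so this only pays off when the delete removes a large fraction of the table or when the index count is high.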


MeanOldDBA
derrickleggett@hotmail.com

When life gives you a lemon, fire the DBA.

mohdowais
Sheikh of Yak Knowledge

1456 Posts

Posted - 2004-06-01 : 09:09:36
You can do it in batches using SET ROWCOUNT, which controls how many rows are actually affected by each delete statement. Also, if you use a sufficiently small number, SQL Server should not escalate the locks beyond a page or two. Example:


SET ROWCOUNT 1000                -- cap each DELETE at 1000 rows
DECLARE @RowsAffected INT
SET @RowsAffected = 1

WHILE (@RowsAffected > 0)
BEGIN
    DELETE FROM MyTable WHERE CallTime < '20040101'
    SET @RowsAffected = @@ROWCOUNT  -- 0 once nothing is left to delete
END

SET ROWCOUNT 0                   -- restore normal row-limit behavior

Derrick makes a good suggestion about dropping the other indexes; that should substantially speed up the deletes as well.
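If inserts are still timing out during the purge, a variant of the same loop can pause between batches so waiting sessions get a chance at the locks. This is a sketch against the original poster's table; the batch size and delay are just starting points to tune:

```sql
SET ROWCOUNT 500                 -- smaller batches hold locks more briefly

DECLARE @RowsAffected INT
SET @RowsAffected = 1

WHILE (@RowsAffected > 0)
BEGIN
    DELETE FROM calldetain WHERE calltime < '20030810'
    SET @RowsAffected = @@ROWCOUNT

    -- give blocked inserts a window between batches
    WAITFOR DELAY '00:00:02'
END

SET ROWCOUNT 0
```

Total runtime goes up, but each individual lock is held for a much shorter time, which is usually the better trade on a live table.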

OS