Ali.M.Habib
Yak Posting Veteran
54 Posts
Posted - 2009-02-17 : 08:36:37
I want to delete 60,126,875 rows from a huge table. I used this query:

delete from tablename where dt='20080501'

but it always gives an error that the log file has become too large, and it rolls back the delete. Any advice?
visakh16
Very Important crosS Applying yaK Herder
52326 Posts
Posted - 2009-02-17 : 08:45:12
Change your recovery model to simple, or use truncate table if those rows are all your table contains.
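A minimal sketch of the two suggestions above; the database name mydb is an assumption, since the thread never names it:

```sql
-- Sketch only: 'mydb' and 'tablename' stand in for the real names.
-- SIMPLE recovery lets log space be reused at each checkpoint, but it
-- breaks the log-backup chain, so take a full backup if you later
-- switch back to FULL.
ALTER DATABASE mydb SET RECOVERY SIMPLE;

-- TRUNCATE TABLE is minimally logged, but it removes EVERY row, so it
-- only applies if the whole table should go:
-- TRUNCATE TABLE tablename;
```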
Ali.M.Habib
Yak Posting Veteran
54 Posts
Posted - 2009-02-17 : 08:47:33
quote: Originally posted by visakh16
Change your recovery model to simple, or use truncate table if those rows are all your table contains.

No, it contains more than what I want to delete.
sakets_2000
Master Smack Fu Yak Hacker
1472 Posts
Posted - 2009-02-17 : 09:04:57
You might also want to partition this table, considering its huge size.
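If the table were partitioned on dt, the doomed rows could be switched out almost instantly. A hypothetical sketch (pf_dt and stage_tablename are made-up names; the staging table must match the source table's structure and live on the same filegroup):

```sql
-- Hypothetical: assumes tablename is partitioned on dt by a partition
-- function pf_dt, and stage_tablename is an empty table with identical
-- structure on the same filegroup. SWITCH is a metadata-only operation,
-- so it is near-instant and barely touches the log.
ALTER TABLE tablename
    SWITCH PARTITION $PARTITION.pf_dt('20080501') TO stage_tablename;

-- The unwanted rows now live in stage_tablename and can go cheaply:
TRUNCATE TABLE stage_tablename;
```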
mfemenel
Professor Frink
1421 Posts
Posted - 2009-02-17 : 09:06:22
I don't suppose this table is partitioned by date; if it were, you could simply switch the rows out to another table. If you have to do it with a delete, make sure you don't have any triggers that will fire on delete, as they will really slow things down. Doing this off hours would of course be a good idea. If not, consider doing it in chunks, maybe 1 million rows at a time, which will keep your log file small and won't impact users. You may have to experiment some to find the right number of rows per chunk; start small and work up.

How many rows of data are you keeping? If that's a smaller number than what you're deleting, you might consider moving your keepers to another table, truncating the junk, and moving the keepers back in.

Mike
"oh, that monkey is going to pay"
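The "move the keepers" idea above might look like this; keep_table is a hypothetical staging name, and this assumes no foreign keys, identity columns, or concurrent writers complicate the swap:

```sql
-- Sketch: copy the rows to keep, truncate, copy them back.
SELECT *
INTO keep_table                 -- hypothetical staging table
FROM tablename
WHERE dt <> '20080501';

TRUNCATE TABLE tablename;       -- minimally logged, near-instant

INSERT INTO tablename
SELECT * FROM keep_table;

DROP TABLE keep_table;
```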
sodeep
Master Smack Fu Yak Hacker
7174 Posts
Posted - 2009-02-17 : 09:17:22
Even in the Simple recovery model that transaction will fill up the log file, because you are doing it in one transaction; an automatic checkpoint doesn't occur while a transaction is running or uncommitted. As mentioned in the other posts, you should do it in batches.
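A sketch of the batched delete the posts above describe, using the table and column names from the question; the 100,000-row batch size is just a starting point to tune:

```sql
-- Each DELETE commits as its own transaction, so the log only has to
-- hold one batch at a time. DELETE TOP (n) needs SQL Server 2005+.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (100000) FROM tablename
    WHERE dt = '20080501';

    SET @rows = @@ROWCOUNT;

    CHECKPOINT;  -- in SIMPLE recovery, lets the log space be reused
END
```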