naveenjv
Starting Member
9 Posts

Posted - 2015-04-14 : 02:19:20
Hi Experts,

I have a database with more than 600 tables. Some of the tables are very big, ranging from 50 to 60 GB in production. The tables are related to each other by foreign keys (no ON DELETE CASCADE). I have been asked to remove the data where PCode = 10 from all of these tables. Most of the tables have multiple indexes on them. I know that DELETE operations with inner joins are going to be extremely resource intensive, especially on our bigger tables.

Please let me know the best practical approach (as per industry standards) to remove this data from all of the tables in the least possible time.

Thanks in advance.

Regards,
Naveen
Kristen
Test
22859 Posts

Posted - 2015-04-14 : 04:28:19
We have huge deletes that run overnight. We do these in batches of a reasonable size (in a loop), with a WAITFOR of a second or two between batches so that other processes can run. We also increase the TLog backup frequency (to every minute) so as not to stress/extend the log file during this housekeeping task.

If PCode = 10 is MOST of the data in a table, you COULD instead:

1. Lock the table/system
2. INSERT all rows where PCode <> 10 INTO a TemporaryTable
3. DROP the FKeys
4. DROP the original table
5. RENAME TemporaryTable to the original table name
6. Re-CREATE the PKey, FKeys, indexes, triggers, constraints, etc.
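The batched-delete loop described above might be sketched roughly like this. This is only an illustration, not the poster's actual script: the table name dbo.BigTable, the batch size, and the delay are all assumptions you would tune for your own system.

```sql
-- Minimal sketch of a batched delete with a pause between batches.
-- dbo.BigTable, the batch size, and the delay are placeholder assumptions.
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE PCode = 10;

    IF @@ROWCOUNT < @BatchSize
        BREAK;  -- last batch was partial, so nothing is left to delete

    WAITFOR DELAY '00:00:02';  -- give other processes a chance to run
END
```

Note that because there is no ON DELETE CASCADE, a loop like this would have to be run against the child (referencing) tables before the parent tables, otherwise the foreign keys will block the deletes.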