Analyzer
Posting Yak Master
115 Posts

Posted - 2007-10-01 : 09:03:10
I have 1TB of heap tables in use 24/7, performance is degrading over time, and the vendor's dynamic SQL won't use clustered indexes (and they won't let me add any).

I can reorganise a heap with ALTER TABLE: add a column, create a clustered index, drop the index, drop the column, etc. However, this is intrusive; doing it piecemeal would take an entire day, and a blanket script across 2000+ tables would kill performance altogether.

Would a backup, then dropping the 1TB database, followed by a restore, place the data back onto disk unfragmented? (With log shipping this would only require a 3-hour downtime window.) In theory it should work?
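A minimal sketch of the create-then-drop clustered index trick described above, using hypothetical table and column names (the real vendor tables would each need their own key choice). Creating the clustered index rewrites the heap's pages in key order; dropping it afterwards leaves the table as a heap again, but now compacted:

```sql
-- Hypothetical names: dbo.SomeVendorTable / SomeColumn / IX_Rebuild_Temp.
-- Building the clustered index rebuilds the table's pages in key order.
CREATE CLUSTERED INDEX IX_Rebuild_Temp
    ON dbo.SomeVendorTable (SomeColumn);

-- Dropping it converts the table back to a heap, keeping the freshly
-- rebuilt, contiguous pages (required here because the vendor forbids
-- a permanent clustered index).
DROP INDEX IX_Rebuild_Temp
    ON dbo.SomeVendorTable;
```

Both steps take intrusive locks and generate substantial log activity, which is why running this across 2000+ tables in one pass is impractical on a 24/7 system.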
Kristen
Test
22859 Posts

Posted - 2007-10-01 : 11:21:01
No. A restore will create a database with an identical internal data structure.

However, it might fix any physical fragmentation of the file. You could also use CONTIG.EXE from Sysinternals to do that.

You could export the data using BCP (use native format), truncate the table, and re-import. I expect it will take a while though!

You need a clustered index for housekeeping to be easy and doable frequently.

Kristen
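The BCP round-trip described above might look like the following sketch. Server, database, table, and file names are all hypothetical; `-n` requests native format and `-T` a trusted connection:

```
REM Export the heap in native format (hypothetical names throughout)
bcp MyDb.dbo.SomeVendorTable out D:\Export\SomeVendorTable.dat -n -S MyServer -T

REM Empty the table, then re-import; pages are reallocated contiguously
sqlcmd -S MyServer -E -Q "TRUNCATE TABLE MyDb.dbo.SomeVendorTable"

bcp MyDb.dbo.SomeVendorTable in D:\Export\SomeVendorTable.dat -n -S MyServer -T
```

Note that TRUNCATE TABLE fails if the table is referenced by foreign keys, and the re-import of 1TB of data is likely to dominate the downtime window.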
Analyzer
Posting Yak Master
115 Posts

Posted - 2007-10-02 : 05:53:34
Thanks Kristen - looks like traditional methods to unfragment the heaps, or CONTIG.EXE, for me then; I'm looking into the latter now. It was worth a try to find out whether the 'restore' approach could have been a quick win, bearing in mind the time constraints I'm working with.
Kristen
Test
22859 Posts

Posted - 2007-10-02 : 07:54:00
Just so nothing got lost in translation, it's: traditional methods *and* CONTIG.EXE, I'm afraid ...

Kristen
Analyzer
Posting Yak Master
115 Posts

Posted - 2007-10-02 : 10:06:28
Understood