rubs_65
Posting Yak Master
144 Posts

Posted - 2005-02-02 : 19:01:30
Hi,

In our application we bulk insert 20,000 rows into a staging database (bulk-logged recovery model) and then insert into the production database from this staging database. SQL Server takes about 20 seconds to do the insert into the production database, but if we delete the records from the production table and run the insert again, it takes 600 ms. CPU time is approximately 250 ms in both cases; writes are about 3500 in the first case and 250 in the second, and reads are negligible.

What can be the possible reason for this behavior, where an insert into a table as SELECT * from a second table always runs slowly the first time, but runs fine if we run the insert again? (We ran DBCC DROPCLEANBUFFERS and CHECKPOINT to make sure memory is clean for the second run.)

Thanks
--Harvinder
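The scenario above can be sketched roughly as follows (database, schema, and table names are hypothetical; the timings and write counts are those reported by the poster):

```sql
-- Hypothetical repro of the scenario above; all object names are illustrative.
-- 20,000 rows have already been bulk loaded into the staging table.

-- First run: reportedly ~20 sec, ~3500 writes.
INSERT INTO Production.dbo.OrdersTable
SELECT * FROM Staging.dbo.OrdersTable

-- Clear the table and flush memory before the second attempt,
-- as the poster describes:
DELETE FROM Production.dbo.OrdersTable
CHECKPOINT
DBCC DROPCLEANBUFFERS

-- Second run: reportedly ~600 ms, ~250 writes.
INSERT INTO Production.dbo.OrdersTable
SELECT * FROM Staging.dbo.OrdersTable
```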
nr
SQLTeam MVY
12543 Posts

Posted - 2005-02-02 : 19:23:26
Sounds like it might be page splitting. Do you have a clustered index? You might try removing it in favour of a non-clustered index, or rebuilding it with free space.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
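On SQL Server of this era, rebuilding a clustered index with free space might look like the sketch below. The table and index names are hypothetical, and the fill factor of 80 is just an example value:

```sql
-- Rebuild the clustered index leaving 20% free space on each page,
-- so that subsequent inserts are less likely to cause page splits.
-- Object names and the fill factor are illustrative, not from the thread.
DBCC DBREINDEX ('dbo.OrdersTable', 'PK_OrdersTable', 80)
```

The alternative the poster suggests is dropping the clustered index entirely and recreating it as nonclustered, so inserts go to a heap instead of an ordered structure.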
rubs_65
Posting Yak Master
144 Posts

Posted - 2005-02-02 : 19:31:11
We removed the clustered index and still have the same issue with the heap as well. Also, the following is true:

Auto update stats is off (we do it manually every night)
No triggers
No foreign keys to/from this table
No blocking at the time of insertion

It is strange that the same rows insert slowly the first time, and then if we delete them from the table and insert again it runs very fast...
MichaelP
Jedi Yak
2489 Posts

Posted - 2005-02-02 : 20:00:37
Maybe SQL Server is growing / shrinking your MDF or LDF? Do you have auto shrink / auto grow on?

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>
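One way to check these settings on SQL Server 2000 is sketched below; the database name is hypothetical:

```sql
-- Check whether auto shrink is enabled (SQL Server 2000 syntax;
-- the database name is illustrative):
EXEC sp_dboption 'Production', 'autoshrink'

-- Inspect file sizes and growth settings for the current database.
-- A growth value of 0 means autogrow is off for that file.
SELECT name, size, growth, status
FROM sysfiles
```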
rubs_65
Posting Yak Master
144 Posts

Posted - 2005-02-03 : 09:42:28
The autogrow option is off for the data and log files, and they are properly sized for the new data, but it is still the same issue.
nr
SQLTeam MVY
12543 Posts

Posted - 2005-02-03 : 10:05:27
Try writing a script:

insert from staging
delete production
insert from staging

Run it with Show Execution Plan on. The first insert should take 20 secs and the second 600 ms. See if there is any difference between the first and the second (are you just doing an insert?).

Given that the first does 3500 writes and the second 250, there must be some difference. Are these physical writes? It could just be due to the data already being in memory for the second attempt - try flushing memory before the second attempt to check:

dbcc dropcleanbuffers, dbcc freeproccache

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
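A minimal version of the suggested diagnostic script, with I/O and timing statistics enabled so the two inserts can be compared directly (object names are hypothetical):

```sql
-- Hypothetical diagnostic harness following the suggestion above.
-- Report logical/physical I/O and CPU time per statement:
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- First attempt (reportedly slow):
INSERT INTO Production.dbo.OrdersTable
SELECT * FROM Staging.dbo.OrdersTable

DELETE FROM Production.dbo.OrdersTable

-- Flush memory so the second attempt is not served from cache:
CHECKPOINT
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

-- Second attempt (reportedly fast):
INSERT INTO Production.dbo.OrdersTable
SELECT * FROM Staging.dbo.OrdersTable
```

Running this with Show Execution Plan on lets you compare both the plans and the per-statement I/O counts between the two attempts.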
rubs_65
Posting Yak Master
144 Posts

Posted - 2005-02-03 : 10:33:58
We tried all these steps. There is no difference in the plans, and we always clear the cache before running the second time.
nr
SQLTeam MVY
12543 Posts

Posted - 2005-02-03 : 11:31:52
Oops - forgot you said that in the original question.

I guess we are looking for something that can cause 3500 writes instead of 250 but leave the rowcounts the same:

All data is purged from memory before the exec.
The database and log can't grow.
No triggers, FKs, or clustered index.

The only thing I can think of is page splitting, but that would be odd given the indexes.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.