Author | Topic

AskSQLTeam
Ask SQLTeam Question
0 Posts

Posted - 2002-11-07 : 08:05:30
Gabriel writes: "Here's my situation (SQL 2000). We have a testing database we're using to convert large amounts of data from one system to another. We might process 5-6 million records, but we don't care about point-in-time recovery.

I set the recovery model to simple and do a full backup every night, yet I keep getting large transaction logs. I manually run Shrink Database when I notice the logs are big.

What can I do to prevent the logs from getting big in the first place? Can I prevent logging from happening?

I keep reading various books and BOL, but I guess I don't quite "get it" yet. Any plain-spoken, detailed suggestions would be very appreciated. Thanks in advance."
Page47
Master Smack Fu Yak Hacker
2878 Posts

Posted - 2002-11-07 : 09:07:03
SQL Server is an RDBMS and as such preserves transactional integrity: if your batch fails in the middle, everything is rolled back. SQL Server uses a transaction log to accomplish this, regardless of the recovery model you have selected.

Now, there are some special operations that can be minimally logged, depending on database settings (specifically, the recovery model must be simple or bulk-logged, the target table must not be replicated, the target table must not have any triggers, etc.). These operations are used to 'bulk copy' data from one place to another. Take a look at the bcp utility and BULK INSERT.

If all you are doing is moving data from one server to another, you may be able to take advantage of this. However, any DML you execute will be logged; you can't turn that off.

Jay White
{0}
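To illustrate the above, here is a minimal sketch of a bulk load that can qualify for minimal logging. The database, table, and file names are hypothetical; the conditions in the comments are the ones described in the post.

```sql
-- Sketch only: TestConvert, dbo.TargetTable, and the file path are
-- placeholder names. Minimal logging requires the simple or
-- bulk-logged recovery model, a target table that is not replicated
-- and has no triggers, and (for an existing table) a TABLOCK hint.
ALTER DATABASE TestConvert SET RECOVERY BULK_LOGGED

BULK INSERT dbo.TargetTable
FROM 'C:\data\export.dat'
WITH (TABLOCK, BATCHSIZE = 50000)
```

BATCHSIZE commits the load in chunks, so a failure only rolls back the current batch rather than the whole file.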
nr
SQLTeam MVY
12543 Posts

Posted - 2002-11-07 : 09:27:41
The simple recovery model truncates the log on checkpoint, so that's probably already keeping the transaction log as small as possible.

It will only truncate up to the earliest open transaction, though, so if you have a long-running transaction, all updates will keep adding to the log until it is closed.

A long-running transaction may be because you are updating a lot of records in one go (a bad thing for database stability), or just because you have manually opened a transaction and left it open.

The second is easy: don't do it.

The first: split the updates into batches which can be committed.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
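Splitting a big update into committed batches might look like the following sketch. The table and column names are hypothetical; the pattern is what matters: each loop iteration is its own transaction, so the log can truncate at checkpoints between batches.

```sql
-- Sketch only: dbo.Staging and the converted column are placeholders.
-- SET ROWCOUNT is the SQL 2000 idiom for limiting rows touched by an
-- UPDATE; on SQL 2005 and later, UPDATE TOP (n) is preferred.
SET ROWCOUNT 10000

WHILE 1 = 1
BEGIN
    UPDATE dbo.Staging
    SET    converted = 1
    WHERE  converted = 0

    -- Each UPDATE auto-commits; stop when nothing is left to do.
    IF @@ROWCOUNT = 0 BREAK
END

SET ROWCOUNT 0
```

Because no batch holds the log hostage, simple recovery can reclaim log space as the loop runs.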