Author |
Topic |
poser
Posting Yak Master
124 Posts |
Posted - 2011-11-08 : 09:30:09
|
I have a huge amount of data that gets inserted nightly, and I am having problems with my transaction logs. I had the recovery model set to full, but the logs grew too big too fast (disk space). I put the database in simple mode and turned off autogrow, and I got an error that the log was full. I then left it in simple mode with autogrow on, and the log is growing huge again. How can I manage this more efficiently?

Thanks for all help, r/p |
|
vikki.seth
Yak Posting Veteran
66 Posts |
Posted - 2011-11-08 : 09:35:18
|
Check for the possibility of using bulk insert. Put your database in the bulk-logged recovery model before the insert process and change it back to full after the import completes. If bulk insert/bcp is not possible, change the insert procedure to insert records in batches rather than all records in one transaction. A sketch of the recovery-model switch follows below. |
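[Editor's note: a minimal sketch of the recovery-model switch described above. The database name, file path, and table name are hypothetical, and it assumes a full backup already exists so the database is not in pseudo-simple mode (see later in this thread):]

-- Assumes database NightlyLoad is normally in FULL recovery
-- and a recent full backup exists.
ALTER DATABASE NightlyLoad SET RECOVERY BULK_LOGGED;
GO

-- BULK INSERT is a candidate for minimal logging when the target
-- is a heap (or empty table) and TABLOCK is specified.
BULK INSERT dbo.StagingTable
FROM 'D:\Loads\nightly_extract.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);
GO

-- Switch back and take a log backup immediately; a log backup
-- containing a bulk operation cannot be used for point-in-time
-- restore within that interval.
ALTER DATABASE NightlyLoad SET RECOVERY FULL;
BACKUP LOG NightlyLoad TO DISK = 'D:\Backups\NightlyLoad.trn';
GO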
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2011-11-08 : 12:59:37
|
Switching recovery models only determines what happens after the transaction commits or rolls back. It does not affect actual tlog usage. Have you tried batching the inserts? How long does the process take? And how often are you backing up the transaction log? I never recommend the bulk-logged recovery model, as you have limited point-in-time recovery options during the bulk process. It doesn't make sense for critical data that requires all points of recovery.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
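[Editor's note: a minimal sketch of the batching approach Tara asks about, with frequent log backups so each committed batch's log space can be reused. Table, column, database, and path names are hypothetical:]

-- Insert in batches of 50,000 rows instead of one huge transaction.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (c1, c2)
    SELECT TOP (50000) s.c1, s.c2
    FROM dbo.SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.c1 = s.c1);

    SET @rows = @@ROWCOUNT;

    -- Frequent log backups during the load let the committed
    -- batches' log space be reused, keeping the log from growing.
    BACKUP LOG NightlyLoad TO DISK = 'D:\Backups\NightlyLoad.trn';
END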
|
|
poser
Posting Yak Master
124 Posts |
Posted - 2011-11-08 : 14:47:14
|
This job takes 3 hours to run. The programmers are not interested in rewriting code :( I'm just trying to find a way to manage this so I don't have to do it all manually. Of course the programmers are okay with that LOL. No matter how many times I tell them, truncating/shrinking the log file is not a daily option. I have to work with what I've got...

Thanks, r/p |
|
|
Kristen
Test
22859 Posts |
Posted - 2011-11-08 : 15:18:15
|
quote: Originally posted by poser Programmers are not interested in rewriting code :(
They are here; in fact, it's a way of life. When we see ways to refactor with the benefit of improved knowledge, we do. There needs to be a perceived benefit - e.g. better performance or reliability - but a key part is that the act of "improving" the code means that future code written by the team will benefit from the knowledge gained in the earlier refactoring. When reused, that knowledge results in improved code on the next job, less downtime, more performance, and thus less time spent on debugging and tuning. Slightly less tangible is the benefit of reduced customer dissatisfaction, defection, and negative PR. |
|
|
vikki.seth
Yak Posting Veteran
66 Posts |
Posted - 2011-11-09 : 03:40:49
|
quote: Originally posted by tkizer Switching recovery models only determines what happens after the transaction commits or rolls back. It does not affect the tlog actual usage.
I believe actual tlog usage does vary between the full and bulk-logged recovery models. If we are running a minimally logged operation such as bcp, tlog usage will be minimal (all extents changed by the minimally logged operation are marked in the BCM pages). Tlog backups will definitely include all extents changed by the bulk operation, so the tlog backup size will be similar. Please correct me if I am wrong. |
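[Editor's note: a minimal sketch of one way to verify the backup-size claim, by reading the log backup sizes recorded in msdb. It assumes log backups were taken right after the same bulk load under each recovery model, using the BulkLogged/FullLogged database names from the test scripts later in this thread:]

-- Compare sizes of log backups taken after the same bulk load
-- under FULL vs BULK_LOGGED recovery.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'                      -- log backups only
  AND bs.database_name IN ('BulkLogged', 'FullLogged')
ORDER BY bs.backup_start_date;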
|
|
Sachin.Nand
2937 Posts |
Posted - 2011-11-09 : 12:00:24
|
Can anyone prove that with an example? I tried both INSERT INTO and BULK INSERT under both recovery models, but the log file size is the same for both of them.

PBUH |
|
|
Sachin.Nand
2937 Posts |
Posted - 2011-11-09 : 12:57:00
|
So that's what I am trying to understand here. If an operation is minimally logged, that means the log size should not increase. In my test I kept the log file size at 512 KB with 10% growth. So if an operation is minimally logged, the number of VLFs created should be less than under full logging (full recovery), and consequently the log should not grow to the same size it reaches when the same actions are performed under the full recovery model.

PBUH |
|
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2011-11-09 : 13:17:21
|
VLFs are created only when the log is created or grown, not when operations are logged. Minimally logged doesn't mean the log won't grow; it means that less is logged than in full recovery. There's a whole bunch of requirements for an operation to be minimally logged.

--Gail Shaw
SQL Server MVP |
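[Editor's note: a minimal sketch of two of the better-known requirements Gail alludes to - the target being a heap (or a new table) and a table lock being taken. The table names are hypothetical, and minimal logging also requires the bulk-logged or simple recovery model:]

-- INSERT ... SELECT can be minimally logged (SQL Server 2008+)
-- when the target is a heap and TABLOCK is specified.
INSERT INTO dbo.TargetHeap WITH (TABLOCK) (c1, c2)
SELECT c1, c2
FROM dbo.SourceTable;

-- SELECT ... INTO always creates a new table, so it is also
-- a candidate for minimal logging under those recovery models.
SELECT c1, c2
INTO dbo.NewTable
FROM dbo.SourceTable;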
|
|
Sachin.Nand
2937 Posts |
Posted - 2011-11-09 : 13:51:53
|
So what exactly is meant by "minimally logged", and what do you mean by "less logged"? If the log size is not going to be less and the VLFs are going to be the same, then why should one change to bulk-logged for certain operations, with the disadvantage of not having point-in-time recovery?

PBUH |
|
|
Sachin.Nand
2937 Posts |
Posted - 2011-11-09 : 14:03:28
|
So what's the need for the bulk-logged recovery model in the first place, if it should not be used for the very operations where, by definition, it is supposed to offer much greater advantages? Also, can anyone please explain what "less logging" is? In my test I really could not find a way to see that bulk-logged indeed did "less logging".

PBUH |
|
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2011-11-09 : 15:01:00
|
quote: Originally posted by Sachin.Nand So what exactly is meant by "minimally logged" and what do you mean by "less logged" ?
There's a very good article in Books Online....

quote: If the log size is not going to be less and the VLFs are going to be the same, then why should one change to Bulk_Logged for certain operations with the disadvantage of not having point in time recovery.

The log size and the VLFs are only affected by CREATE DATABASE, ALTER DATABASE, autogrow, and shrink. An operation writing to the log won't generate VLFs unless it forces an autogrow. A minimally logged operation will log less than a fully logged operation - less meaning exactly what the word usually means: a smaller amount.

--Gail Shaw
SQL Server MVP |
|
|
Sachin.Nand
2937 Posts |
Posted - 2011-11-09 : 15:03:00
|
Here is the code. Try changing the recovery model between full and bulk_logged; you will see the log file size is the same for both models. Not sure what I am missing here...

USE master
IF DB_ID('BulkLogged') IS NOT NULL
	DROP DATABASE BulkLogged
GO
CREATE DATABASE BulkLogged
ON PRIMARY (NAME = N'BulkLogged', FILENAME = N'C:\BulkLogged.mdf', SIZE = 5MB, FILEGROWTH = 10%)
LOG ON (NAME = N'BulkLogged_log', FILENAME = N'C:\BulkLogged_log.ldf', SIZE = 512KB, FILEGROWTH = 10%)
GO
ALTER DATABASE BulkLogged SET RECOVERY BULK_LOGGED

USE BulkLogged
GO
CREATE TABLE T1 (c1 int identity primary key, c2 char(3))

INSERT INTO T1
SELECT TOP 100000 'aaa'
FROM syscolumns a CROSS JOIN syscolumns b

DBCC SQLPERF(LOGSPACE)
DBCC LOGINFO

SELECT * INTO T2 FROM T1

DBCC LOGINFO
DBCC SQLPERF(LOGSPACE)

PBUH |
|
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2011-11-09 : 15:06:52
|
Without taking a full backup, both are still in the pseudo-simple recovery model.

--Gail Shaw
SQL Server MVP |
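[Editor's note: a minimal sketch of how one could check for this state. A NULL last_log_backup_lsn means the database has no log backup chain and is effectively in auto-truncate (pseudo-simple) mode, whatever its nominal recovery model; the database name matches the test script above:]

-- A database in FULL or BULK_LOGGED recovery behaves like SIMPLE
-- (auto-truncate) until a full backup starts the log backup chain.
SELECT d.name,
       d.recovery_model_desc,
       drs.last_log_backup_lsn   -- NULL = pseudo-simple
FROM sys.databases AS d
JOIN sys.database_recovery_status AS drs
    ON d.database_id = drs.database_id
WHERE d.name = 'BulkLogged';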
|
|
Sachin.Nand
2937 Posts |
Posted - 2011-11-09 : 15:12:59
|
quote: Originally posted by GilaMonster Without taking a backup both are still in pseudo-simple recovery model. --Gail Shaw, SQL Server MVP
OK, now that's interesting... So when should I take the backup, before the bulk operation or after, and what kind?

PBUH |
|
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2011-11-09 : 15:14:50
|
I made the table a lot wider (it was so small there would be little visible effect) and added full and log backups.

USE master
IF DB_ID('BulkLogged') IS NOT NULL
	DROP DATABASE BulkLogged
GO
CREATE DATABASE BulkLogged
GO
ALTER DATABASE BulkLogged SET RECOVERY BULK_LOGGED
BACKUP DATABASE BulkLogged TO DISK = 'D:\Develop\Databases\Backups\BL.bak'

USE BulkLogged
GO
CREATE TABLE T1 (c1 int identity primary key, c2 char(3), filler CHAR(200))

INSERT INTO T1
SELECT TOP 100000 'aaa', ''
FROM syscolumns a CROSS JOIN syscolumns b

BACKUP LOG BulkLogged TO DISK = 'D:\Develop\Databases\Backups\BL.trn'

SELECT * INTO T2 FROM T1

DBCC LOGINFO
DBCC SQLPERF(LOGSPACE)
GO

USE master
IF DB_ID('FullLogged') IS NOT NULL
	DROP DATABASE FullLogged
GO
CREATE DATABASE FullLogged
GO
ALTER DATABASE FullLogged SET RECOVERY FULL
BACKUP DATABASE FullLogged TO DISK = 'D:\Develop\Databases\Backups\FL.bak'

USE FullLogged
GO
CREATE TABLE T1 (c1 int identity primary key, c2 char(3), filler CHAR(200))

INSERT INTO T1
SELECT TOP 100000 'aaa', ''
FROM syscolumns a CROSS JOIN syscolumns b

BACKUP LOG FullLogged TO DISK = 'D:\Develop\Databases\Backups\FL.trn'

SELECT * INTO T2 FROM T1

DBCC LOGINFO
DBCC SQLPERF(LOGSPACE)

-- BulkLogged   51.99219   15.26014   0
-- FullLogged   51.99219   54.63467   0

Log space used in full recovery: 54% of 52MB, so 28MB.
Log space used in bulk-logged: 15% of 52MB, so 7.8MB.

--Gail Shaw
SQL Server MVP |
|
|