
 Backup Solution for critical data on Standard Ed.


tribune
Posting Yak Master

105 Posts

Posted - 2004-03-02 : 19:40:25
For whatever reason, the funding isn't available for Enterprise edition (which supports clustering/failover/log shipping/etc) so I'm forced to do the best I can with Standard edition.

Basically, we're willing to lose up to 1-2 minutes of transactions upon a failure...which will have to be recreated by hand by the billing department.

Right now I'm considering having a SQL Agent job back up the transaction log every minute, plus a custom application that scans for new log backups every 60 seconds and ships them off-site. The app would then compress daily snapshots of the logs and generate a restore script. So the total possible lag between these two processes could be up to 120 seconds, not including the time required for the custom application to perform the off-site shipping (though this shouldn't take more than a few seconds because the log files would be pretty small).
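A sketch of what the per-minute log backup step might look like, assuming a SQL Agent job scheduled every minute (the database name, path, and file-naming scheme here are hypothetical, not from the thread):

```sql
-- Hypothetical per-minute log backup step for a SQL Agent job.
-- Builds a timestamped file name so the shipping app can pick up new files.
DECLARE @file varchar(255)
SET @file = 'D:\LogBackups\Billing_log_' +
    REPLACE(REPLACE(CONVERT(varchar(19), GETDATE(), 120), ':', ''), ' ', '_') + '.trn'

BACKUP LOG BillingDB TO DISK = @file WITH INIT
```

The timestamp in the file name also gives the restore-script generator an unambiguous ordering for applying the logs.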

Will SQL Server experience performance issues with such a frequent transaction log backup schedule?

Can anyone suggest alternative solutions?


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-02 : 19:50:27
1-2 minutes does not give you a lot of leeway. Running backups that often could cause performance problems. I wouldn't assume that a transaction log could be restored on the secondary server in just 2 minutes. What happens when DBCC DBREINDEX runs and causes the transaction log backup file to be large? How big of a database are we talking about? We've got one that is around 5GB. When DBCC DBREINDEX runs on one of the tables, the subsequent transaction log backup is just under 5GB. It takes a bit of time to get this file applied to the secondary server at our disaster recovery site, especially since the file copy is slow. Will the secondary server exist at a disaster recovery site?

Tara

tribune
Posting Yak Master

105 Posts

Posted - 2004-03-02 : 20:08:06
tduggan,

The database is fairly complex and the solution would need to support up to 3-4GB in size.

If you've got a 5GB database, why would DBCC DBREINDEX create a 5GB log file?

Is DBCC DBREINDEX run automatically by SQL Server? I didn't find much documentation in BOL about it.

For right now, "off-site" will be a different physical server than the database server, but inside the local area network.


robvolk
Most Valuable Yak

15732 Posts

Posted - 2004-03-02 : 23:04:41
SQL Server logs all the modifications made to indexes during DBREINDEX, and for clustered indexes this also means the data itself. It's no great trick to create a log file that exceeds the size of the database: just run UPDATE statements all day long and never back up the log.
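One way to see this effect for yourself on SQL Server 2000 is to watch log usage while a reindex runs; the command below is standard, though the schedule and database are whatever you have in front of you:

```sql
-- Reports log file size and percent used for every database on the server.
-- Run it before, during, and after a DBCC DBREINDEX to watch the log grow.
DBCC SQLPERF(LOGSPACE)
```

In FULL recovery mode that logged space is only freed for reuse when the log is backed up, which is why the log backup taken right after a reindex can approach the size of the data itself.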

I agree with Tara: backing up the log every minute or two will hurt performance and not give you any added benefit. 5 minutes is probably still too often, and frankly anyone who feels they can't lose 10 minutes of work is hopelessly paranoid or self-indulgent, or both. Even then, if an actual disaster did occur, you're not gonna be able to switch over a non-clustered environment in 10 minutes anyway. Let people take an extra coffee break; it beats slowing them down constantly throughout the day with excessive backups and network traffic.

tribune
Posting Yak Master

105 Posts

Posted - 2004-03-02 : 23:13:45
LOL rob, the reason the data is critical is that it houses financial transactions for a website. I know when you think "financial" you're going to say that it's ridiculous to run such a mission-critical database on anything but a cluster, but I've got to work with what I've got to work with :)

I'm implementing a transaction safeguard on the webserver so we can work backwards from the disaster, but if we lose 10 minutes of data versus 1 minute of data, our workload will increase tremendously.

Any ideas would be very much appreciated.

MuadDBA

628 Posts

Posted - 2004-03-03 : 08:51:28
quote:
Originally posted by robvolk

SQL Server logs all the modifications made to indexes during DBREINDEX, and for clustered indexes this also means the data itself. It's no great trick to create a log file that exceeds the size of the database: just run UPDATE statements all day long and never back up the log.

I agree with Tara: backing up the log every minute or two will hurt performance and not give you any added benefit. 5 minutes is probably still too often, and frankly anyone who feels they can't lose 10 minutes of work is hopelessly paranoid or self-indulgent, or both. Even then, if an actual disaster did occur, you're not gonna be able to switch over a non-clustered environment in 10 minutes anyway. Let people take an extra coffee break; it beats slowing them down constantly throughout the day with excessive backups and network traffic.



While I can't agree with the 5-minute loss limit, it should be used as an argument for purchasing a more robust system. Explain the limitations of Standard Edition and how Enterprise Edition has much more functionality in terms of redundancy. Then ask them how much losing a day's worth of work would cost. That should be justification for purchasing the correct hardware right there.

Beyond that, log shipping can be implemented through SQL Server even without Enterprise Edition, so you don't have to go through all the hassle of writing your own application. The scripts to set it up should be online somewhere; just do some googling and you'll find them in no time.
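The restore side of a home-grown log shipping setup boils down to a single statement per shipped file. A sketch, assuming hypothetical database, share, and file names (in reality the file name would come from whatever loop or job picks up newly copied .trn files):

```sql
-- Hypothetical restore step on the standby server.
-- STANDBY leaves the database readable between restores;
-- use WITH NORECOVERY instead if you never need to query the standby.
RESTORE LOG BillingDB
    FROM DISK = 'D:\ShippedLogs\Billing_log_20040305_1710.trn'
    WITH STANDBY = 'D:\MSSQL\Undo_BillingDB.dat'
```

Each shipped log must be applied in sequence; a gap (a missed or skipped .trn file) breaks the chain and forces you back to a full backup.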

I currently DO have a situation where the business required a "warm" standby server to be within 5 minutes of the prod server, so I have a log shipping plan set up on that server and its standby. The log shipping is disabled during reindexing and database shrinking, and resumes once those operations are done.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-03 : 16:22:02
quote:
Originally posted by tribune

tduggan,

The database is fairly complex and the solution would need to support up to 3-4GB in size.

If you've got a 5GB database, why would DBCC DBREINDEX create a 5GB log file?

Is DBCC DBREINDEX run automatically by sql? I didnt find much documentation in BOL about it.

For right now, "off-site" will be a different physical server than the database server, but inside the local area network.





Rob explained why DBCC DBREINDEX will cause a large transaction log backup file.

Please tell me that you have DBCC DBREINDEX scheduled for your production databases. If not, you'd better start. It is very important for performance. Without it, your database is probably heavily fragmented.

We run DBCC DBREINDEX every night on each of the databases for all of the indexes except for the larger ones. For the larger ones, we reindex the small ones each night and then the large ones get reindexed a couple of times per week. For instance:

Day 1:

Index1, Index2, Index3, plus all small ones get reindexed

Day 2:

Index4, Index5, plus all small ones get reindexed

Day 3:

Index1, Index2, Index3, plus all small ones get reindexed

...
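A rotation like the one above can be sketched as a pair of scheduled jobs; the table and index names below are hypothetical placeholders, and 90 is just an example fill factor:

```sql
-- Day 1 / Day 3 job: the first group of large indexes, plus small tables.
DBCC DBREINDEX ('dbo.BigTable1', 'Index1', 90)
DBCC DBREINDEX ('dbo.BigTable1', 'Index2', 90)
DBCC DBREINDEX ('dbo.BigTable2', 'Index3', 90)

-- Day 2 job: the second group of large indexes, plus small tables.
DBCC DBREINDEX ('dbo.BigTable3', 'Index4', 90)
DBCC DBREINDEX ('dbo.BigTable3', 'Index5', 90)

-- Passing an empty string for the index name rebuilds all indexes
-- on the table, which is convenient for the small tables:
DBCC DBREINDEX ('dbo.SmallTable', '', 90)
```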

Tara

AjarnMark
SQL Slashing Gunting Master

3246 Posts

Posted - 2004-03-04 : 15:07:53
Wait a minute! Or maybe 2. I think you've all overlooked something. (or I need another latte to wake up my brain). When you're doing transaction log backups, you can restore to a point-in-time. You don't have to do your backups every 2 minutes in order to only lose 2 minutes worth of data unless you have such a horrendous destruction that you cannot get at the "current" transaction log.

When a crash occurs, the first thing you need to do is back up the current transaction log, to capture everything between the previous t-log backup (say, 15 minutes ago) and the point of failure. Then you begin your restore process, including that final backup you took before you started the restore.
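The tail-log capture and point-in-time restore described above might look like this, with hypothetical file names and a made-up STOPAT time:

```sql
-- 1. Capture the tail of the log. NO_TRUNCATE lets this work even when
--    the data files are damaged, as long as the log file is accessible.
BACKUP LOG BillingDB TO DISK = 'D:\LogBackups\Billing_tail.trn'
    WITH NO_TRUNCATE

-- 2. Restore the last full backup, leaving the database non-recovered.
RESTORE DATABASE BillingDB FROM DISK = 'D:\Backups\Billing_full.bak'
    WITH NORECOVERY

-- 3. Apply each intermediate .trn file WITH NORECOVERY, then finish with
--    the tail backup, stopping just before the failure.
RESTORE LOG BillingDB FROM DISK = 'D:\LogBackups\Billing_tail.trn'
    WITH RECOVERY, STOPAT = '2004-03-04 15:05:00'
```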

So, am I crazy here? Maybe I'll go get a latte anyway...

--------------------------------------------------------------
Find more words of wisdom at [url]http://weblogs.sqlteam.com/markc[/url]

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-04 : 15:12:01
Well, we've assumed worst case scenario here: that the current transaction log is not available. When doing this kind of planning, you have to assume worst case scenario.

But then again, will your backup server be located at a disaster recovery site?

Tara

AjarnMark
SQL Slashing Gunting Master

3246 Posts

Posted - 2004-03-04 : 16:00:09
Tara, I agree you have to plan for the worst-case scenario. I guess the point we've all been making is that if that worst-case scenario happens, they're going to wish they had spent the money on a more robust failover system. But maybe tribune needs to approach management and explain the risks, such as: a) backups every 2 minutes will kill performance and might not even work due to necessary maintenance, b) backups every [larger increment] will allow recovery to a point in time if the most recent t-log is accessible, c) if a big enough crash occurs and the t-log is not accessible, management needs to decide (now, not when it happens) between loosening the purse strings and loosening the 2-minute requirement.

In short, I don't think anyone here believes that backups every 2 minutes are feasible.

--------------------------------------------------------------
Find more words of wisdom at [url]http://weblogs.sqlteam.com/markc[/url]

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-04 : 16:09:16
Fully agree.

Tara

MuadDBA

628 Posts

Posted - 2004-03-05 : 15:37:16
2 minutes is a little crazy, but as I said, I DO use 5-minute backups, and the impact to the system (with between 200 and 600 active users at any one time) is not such that it renders the system unusable. The backups themselves take seconds (usually 1 or 2) every 5 minutes.

tribune
Posting Yak Master

105 Posts

Posted - 2004-03-05 : 17:59:25
Joe, out of curiosity, how large are your TRN files? Did you implement your own solution for log backups (making sure to avoid log backups during long-running maintenance operations)?

I am going to implement the 5-10 minute transaction log backup route. I guess the next step will be clustering, since there's really no alternative for high availability in the 1-2 minute range. I've got a lot to learn on this.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-05 : 18:03:56
But don't forget how big the TRN file will be when you run DBCC DBREINDEX. So yes, your TRN files will normally be small and quick to apply to the secondary server, but when DBCC DBREINDEX runs, it could take a bit to get copied over. It's just something to be aware of. But don't skip DBCC DBREINDEX just to avoid this. You need to run it.

Tara

AjarnMark
SQL Slashing Gunting Master

3246 Posts

Posted - 2004-03-05 : 18:25:47
If you schedule jobs properly, you can leave a window where you're going to do the DBREINDEX and not do t-log backups every 5 minutes. Perhaps add a t-log backup as a step in the job after the DBREINDEX. (I'm assuming, Tara, that you have your DBREINDEXes scheduled and you're not saying they need to be manually triggered/supervised.)
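Folding the log backup into the reindex job itself might look like the two-step sketch below (names are hypothetical; in SQL Agent, step 2 would be configured to run only on success of step 1):

```sql
-- Job step 1: the reindex that generates the large log.
DBCC DBREINDEX ('dbo.BigTable1', '', 90)

-- Job step 2: immediately capture the post-reindex log so the big
-- backup happens inside the maintenance window, not on the 5-minute cycle.
BACKUP LOG BillingDB TO DISK = 'D:\LogBackups\Billing_postreindex.trn'
    WITH INIT
```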

--------------------------------------------------------------
Find more words of wisdom at [url]http://weblogs.sqlteam.com/markc[/url]

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-05 : 18:27:44
Yep, scheduled. But if they are not scheduled, at least do them manually here and there. It really is important for performance. So schedule it (even if it is with a maintenance plan)!

Tara

derrickleggett
Pointy Haired Yak DBA

4184 Posts

Posted - 2004-03-05 : 18:41:08
Also, implementing clustering will not help you, tribune. Clustering covers loss of a server, not loss of disks. Clustering will allow you to immediately fail over to another server if your production server fails.

It will not prevent the loss of data if you lose your production arrays. The only thing that will help you then is log shipping, replication, or disk replication tools that people like EMC (with SnapView and ERM) offer you.

You probably want to implement it if you have a second backup server just sitting there anyway. Just be aware of what you are really getting/not getting out of the deal.



MeanOldDBA
derrickleggett@hotmail.com

When life gives you a lemon, fire the DBA.

AjarnMark
SQL Slashing Gunting Master

3246 Posts

Posted - 2004-03-05 : 19:15:57
quote:
Originally posted by tduggan

So schedule it (even if it is with a maintenance plan)!

!!!
Tara, you must feel more strongly about DBREINDEX than about ANYTHING else. I never thought I'd see the day you would advocate using a maintenance plan.

--------------------------------------------------------------
Find more words of wisdom at [url]http://weblogs.sqlteam.com/markc[/url]

tribune
Posting Yak Master

105 Posts

Posted - 2004-03-05 : 19:36:55
This is probably stupid, but is there a way to store the database files on a separate array, such as one located on a network attached storage device? It's cost-prohibitive right now to purchase a $50,000 Compaq EVA SAN, but locating the database files on a separate physical computer might prevent me from losing access to the data if a hardware component inside the database server dies. But then again, if a component on the NAS device dies, I'll have the same dilemma. Do all enterprises use only SAN solutions?

tribune
Posting Yak Master

105 Posts

Posted - 2004-03-05 : 19:39:13
Obviously my main fear here is losing transactions. It would be much worse to have the shared disk array fail and not be able to recover the transactions than it would for a motherboard to die on a computer that could be replaced.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-03-06 : 23:17:15
No, you cannot store your database files on another computer. Even if it were allowed, this would not be a recommended solution.

Tara