admin001
Posting Yak Master
166 Posts
Posted - 2004-06-23 : 23:13:06
Hi, we have a large data warehouse database, approx. 200 GB, which is backed up to a remote SAN daily. The backup performed is a full database backup; there is no incremental backup configured. We are attaching a separate SAN locally on the server, which will be dedicated to this database so it can be backed up daily locally rather than remotely.

The problem is that there are five filegroups for this database, all on a single drive along with the transaction log file. We now want to split the database and log files across different drives on the local SAN.

We also saw that the transaction log is growing and want to work out a strategy to reduce it, or keep it at a minimum, through scheduled scripts. I came across two options in BOL:

1. truncate log with checkpoint
2. truncate log with nolog

Since I am unsure of the correct usage of these two options, I would like your views on which option is suitable to maintain my tlog and keep it from filling up the disks. Will I be able to recover my latest transactions using these two options in case of server problems?

Any advice/help will be highly appreciated. Thanks once again.
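For reference, in SQL Server 2000 those two BOL options correspond roughly to the following T-SQL (a sketch only; the database name DW is hypothetical). Be aware that both discard log records without backing them up, so you would not be able to recover your latest transactions from the log afterwards:

```sql
-- "truncate log on checkpoint": a database option; the log is
-- truncated automatically at every checkpoint (no log recovery possible)
EXEC sp_dboption 'DW', 'trunc. log on chkpt.', 'true'

-- "truncate log with no_log": one-off manual truncation without a backup
BACKUP LOG DW WITH NO_LOG      -- WITH TRUNCATE_ONLY is equivalent

-- Either way, take a full backup immediately afterwards, since the
-- broken log chain means the log can no longer be restored.
```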
ValterBorges
Master Smack Fu Yak Hacker
1429 Posts
Posted - 2004-06-23 : 23:36:31
You should consider a product called SQL Lite; it will compress your backup by 75%. I've used it successfully in a 2TB environment.

A few things you could do:

1. Set up filegroups for the larger fact tables and back those up using SQL Lite locally, then push the backups out to the remote SAN using a guaranteed delivery mechanism.
2. Set up changed data capture (this could easily be done using Attunity CDC products). Take a full SQL Lite backup weekly, back up the CDC capture tables on their own filegroup using SQL Lite daily, and in the event of a failure restore the full backup and run the CDC records through the data warehouse ETL process.
3. Similar to the above: if you're currently loading your warehouse from source system files, just back those up for a week and make sure your ETL process is tuned and automated enough that it can easily recover from a restore of a full SQL Lite backup plus a rerun of 7 days of history files.
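As a sketch of the filegroup-level backup in option 1 (the filegroup and path names here are hypothetical):

```sql
-- Back up only the filegroup holding the large fact tables
BACKUP DATABASE DW
    FILEGROUP = 'FG_Facts'
    TO DISK = 'E:\Backups\DW_FG_Facts.bak'
```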
 |
derrickleggett
Pointy Haired Yak DBA
4184 Posts
Posted - 2004-06-23 : 23:37:58
The only way you're really going to be able to utilize the transaction logs for recovery is to do log backups throughout the day. If you are not doing this, you need to set your recovery mode to simple, which makes your question a moot point. If you do transaction log backups, the log files will shrink after you do this, which also makes your question a moot point. Are you planning on doing log backups? If not, how are you covering yourself throughout the day? Are you going to use some kind of SnapView or disk replication tool for this? If you are not doing any of this, then you are exposing yourself to several hours of data loss in the case of a failure.

Dividing the files among separate drives (or LUNs, in the appropriate SAN-speak) requires nothing more than setting up more LUNs on the SAN, registering them as drives on the hosts, and moving the files. To move database files, look at the following article:

http://support.microsoft.com/default.aspx?scid=kb;en-us;224071

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA.
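A minimal sketch of both pieces of this advice, assuming a database named DW and local paths (both hypothetical); the detach/attach approach for moving files is the one described in KB 224071:

```sql
-- Log backup: schedule this via a SQL Agent job, e.g. every 15-30 minutes
BACKUP LOG DW TO DISK = 'E:\Backups\DW_log.trn'

-- Moving files to new drives/LUNs: detach, copy at the OS level, re-attach
EXEC sp_detach_db 'DW'
-- ...copy the .mdf/.ndf and .ldf files to the new drives...
EXEC sp_attach_db 'DW',
    'F:\Data\DW_data.mdf',
    'G:\Log\DW_log.ldf'
```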
 |
admin001
Posting Yak Master
166 Posts
Posted - 2004-06-23 : 23:57:25
Thanks a lot, Valter and Derrick. Actually, the new SAN which will be installed has a mirroring utility. There will always be two master drives, say A and B, and two slave drives, C and D. Drive A will be mapped to C, while B is mapped to D. The slave drives will always be in sync with the master drives, and a snapshot of the database will be taken once the backup is complete. In case of any problems on the master drives, the slave drives will become master automatically, with the drive letters still A and B.

In that case, if I start taking backups of the transaction logs daily, will it help keep the logs from growing? I believe a tlog backup will truncate the inactive entries from the log. The current full backup strategy does not truncate the transaction log, I think. There is no backup configured for the transaction logs, and we want to implement this solution ASAP. Thanks once again.
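Right: a log backup truncates the inactive portion of the log, but it does not shrink the physical file. If the file has already ballooned, a one-off shrink after the first log backup might look like this (a sketch; the database name, log file name, and target size are all hypothetical):

```sql
BACKUP LOG DW TO DISK = 'E:\Backups\DW_log.trn'   -- truncates inactive entries
DBCC SHRINKFILE ('DW_log', 1024)                  -- shrink the log file to ~1 GB
```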
 |
nr
SQLTeam MVY
12543 Posts
Posted - 2004-06-24 : 00:13:25
You will probably want to back up the transaction log more often than daily.

I'm a bit surprised that with a system like this you don't have anyone who knows SQL Server. You might want to get someone to at least review your system and explain a few basic principles before you go too much further.

==========================================
Cursors are useful if you don't know SQL.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
 |
derrickleggett
Pointy Haired Yak DBA
4184 Posts
Posted - 2004-06-24 : 00:22:47
What Nigel is saying is very true. You can have all this fancy hardware and software, then when you go down find out you are really screwed. What type of SAN is this? Does the mirroring software have a hook into SQL Server to ensure read/write consistency? What type of recovery are you looking at? Does it provide the ability to roll back production as well as fail over to the mirror? All of these things are available and should be considered in a SAN environment.

On your transaction log backups, you either need to do incrementals throughout the day or set the recovery mode to simple. There really aren't too many other viable solutions for you on that.

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA.
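For the second option, the recovery mode can be checked and switched like this in SQL Server 2000 (the database name DW is hypothetical); note that once a database is in SIMPLE, log backups are no longer possible:

```sql
SELECT DATABASEPROPERTYEX('DW', 'Recovery')   -- returns FULL, BULK_LOGGED, or SIMPLE
ALTER DATABASE DW SET RECOVERY SIMPLE
```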
 |
admin001
Posting Yak Master
166 Posts
Posted - 2004-06-24 : 00:44:53
Yes, true; I agree with you all about this fancy hardware and the tools around it. We have planned a test/staging environment to be built and will take all situations and risks into consideration while testing this new SAN. As of now, it ensures data read/write consistency as well as failover to the mirror, but that will be validated only after it is tested and proven. Thanks to you all for your valuable comments and suggestions.
 |
nr
SQLTeam MVY
12543 Posts
Posted - 2004-06-24 : 00:58:17
My concern is that you make statements like that but don't know the basics of the technology you are working with. Trial and error isn't a good way of designing a system.

==========================================
Cursors are useful if you don't know SQL.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
 |
derrickleggett
Pointy Haired Yak DBA
4184 Posts
Posted - 2004-06-24 : 00:59:56
And does it have a rollback capability? Many SANs have software that allows you to take snapshots of the data every 15 minutes or so. These can be used to roll back the production instance if a failure or corruption occurs. This, combined with the mirroring, provides extremely high availability. If you are going to use this, there's a good possibility you will just leave your recovery mode on simple. It depends on how your vendor handles it. What kind of a SAN is it?

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA.
 |
admin001
Posting Yak Master
166 Posts
Posted - 2004-06-24 : 01:12:42
This is a Hitachi SAN.
 |
derrickleggett
Pointy Haired Yak DBA
4184 Posts
Posted - 2004-06-24 : 01:21:25
OK, I've never used Hitachi, only EMC, HP, and IBM. You might want to ask them what their equivalent to ERM and SnapView is, though.

MeanOldDBA
derrickleggett@hotmail.com
When life gives you a lemon, fire the DBA.
 |
admin001
Posting Yak Master
166 Posts
Posted - 2004-06-24 : 01:27:05
Sure. Thanks.
 |