joshpaddy
Starting Member
2 Posts |
Posted - 2011-09-17 : 01:47:11
|
Dear All,

This is regarding the size of my DB, which affects the time taken for a normal backup from the server to my desktop.

DB Size
========
mdf - 5.2 GB
ldf - 611 MB

Things I do not want to do:
* I do not want to shrink the DB, as that causes indexing/fragmentation issues.
* I do not want to truncate data, as older data is still in use at the user end.

Please suggest:
1. How do I manage the ldf (transaction log) size? Please explain in detail as I am a beginner :)
2. Would it be OK to create an .ndf file and host it on a separate physical drive? Please elaborate.
3. How do I view/change the recovery model (Simple, Bulk-Logged or Full)? Please advise in detail.

Apart from all this, I would be very thankful if you could provide your best optimum solution for the above case.

Cheers,
JP |
|
RickD
Slow But Sure Yak Herding Master
3608 Posts |
Posted - 2011-09-19 : 04:49:10
|
What exactly is the problem? This is a tiny database.

Why are you taking a backup to your desktop? Backups should be made to a local server drive and then compressed and written to tape or an offsite server.

If you are looking to optimise your strategy, could I suggest paying a consultant? |
|
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2011-09-19 : 05:08:10
|
Please read through this: http://www.sqlservercentral.com/articles/64582/

Rick's right, that database is absolutely tiny. Far too small to need multiple data files or anything like that.

--
Gail Shaw
SQL Server MVP |
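For the recovery-model part of the question (how to view or change it), a minimal T-SQL sketch; `MyDatabase` is a placeholder name, not from this thread:

```sql
-- View the current recovery model of every database (SQL Server 2005+)
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database to the Full recovery model; take a full backup
-- afterwards so the log backup chain can start
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Or Simple / Bulk-Logged:
-- ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
-- ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
```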
|
|
NeilG
Aged Yak Warrior
530 Posts |
Posted - 2011-09-19 : 05:46:48
|
Not really sure what you're wanting people to suggest, but personally I'd also not back up the database to your desktop, but to a shared location on the network, preferably not on the actual server itself. |
|
|
Kristen
Test
22859 Posts |
Posted - 2011-09-19 : 05:56:06
|
We never back up to a network location, only ever to the server's local disk, and then, from there, copy to tape and/or a network location.

This is to avoid the risk that the network location is unreachable and backups fail. We prefer to reduce that risk by having the backups local.

When we copy to a network location we use very low-stress copying, as otherwise copying across the network can cause timeouts on the database because of contention for network bandwidth.

Depends on a number of things of course! We have huge backups, and huge numbers of users trying to get to the database ... the design is intended to give the users priority, without jeopardising backup safety. |
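The local-disk-first strategy described above can be sketched in T-SQL; the database name and path are placeholders, not from this thread:

```sql
-- Full backup to the server's local disk (SQL Server 2005 syntax);
-- a separate, throttled job then copies the file to tape or network
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_full.bak'
WITH INIT,      -- overwrite any previous backup set in this file
     CHECKSUM;  -- verify page checksums while reading the pages
```

The copy step itself would be an OS-level job (e.g. robocopy with an inter-packet gap) so it does not saturate the network and starve the users.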
|
|
Kristen
Test
22859 Posts |
Posted - 2011-09-19 : 05:58:41
|
"How do I manage the ldf (TranLog) size?"

Assuming you are using the Full Recovery Model, then increasing the frequency of TLog backups will reduce the risk that the LDF file is extended.

Check your TLog backups and see which ones are largest. If that happens at the same time of day (or same day of week) then it's probably some housekeeping task that is causing the most transaction "stress". We increase our TLog backups to every 2 minutes during housekeeping (index rebuilds etc.) to reduce the chance of the LDF file being extended. |
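The frequent-TLog-backup approach above, as a sketch; database name and path are placeholders:

```sql
-- See how full the transaction log of each database currently is
DBCC SQLPERF(LOGSPACE);

-- Log backup; scheduling this frequently (e.g. every 15 minutes,
-- or every 2 minutes during index maintenance) keeps the LDF from
-- growing. Requires the Full or Bulk-Logged recovery model.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.bak';
```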
|
|
joshpaddy
Starting Member
2 Posts |
Posted - 2011-09-19 : 09:35:35
|
@All - Thank you so much for your swift response.

Let me clarify in detail:

* As you all correctly mentioned, I am taking a backup on the local server disk, and then, instead of transferring it to tape or an external device, I am storing it on my desktop through RDC (which consumes a lot of time!).

From your responses I am clear that my DB size is very small, so can you just tell me:

* What is the max size of the DB up to which I can be relaxed?
* How do I automate backups in SQL Server 2005? Please throw a lot of light on this topic.

Cheers,
JP |
|
|