
 Shrink All Logs


MediaPirate
Yak Posting Veteran

52 Posts

Posted - 2004-01-21 : 18:04:11
Does anyone have a query that will shrink the logs for all the databases on the server as SA? I'm using LightSpeed and we don't back up our transaction logs, so if someone's log is full, the backup fails on that database and leaves the rest without being backed up. I've moved all the databases to simple logging, and I'm familiar with DBCC SHRINKFILE, but is there a way to run it against all databases?

Thanks, Jim

MediaPirate
Yak Posting Veteran

52 Posts

Posted - 2004-01-21 : 18:05:24
Sorry, I left out the specs.

SQL Server 2000 with SP3a

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-01-21 : 18:29:44
Shrinking of files or databases should not be done automatically in production.

LightSpeed allows for transaction log backups, so why aren't you doing them? What is the recovery model set to for these databases? If FULL, you have to back up the transaction log regardless of what product you use. If SIMPLE, then you don't need to.

No, there isn't a built-in way to select all databases. Well, there is a way, but it would involve a fair amount of code.
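For illustration, here's a cursor-based sketch of what that code might look like. The 10 MB target size is just an example, and it relies on the SQL Server 2000 system tables (sysdatabases, sysfiles) and on the status bit 0x40 marking log files:

```sql
-- Sketch: iterate user databases and shrink each one's log file(s).
-- The 10 MB target is illustrative; pick a size appropriate for the database.
DECLARE @db sysname, @sql nvarchar(4000)

DECLARE db_cur CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE dbid > 6  -- skip the system and sample databases

OPEN db_cur
FETCH NEXT FROM db_cur INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE [' + @db + N']
        DECLARE @logfile sysname
        DECLARE f_cur CURSOR FOR
            SELECT name FROM sysfiles
            WHERE status & 0x40 <> 0  -- log files only
        OPEN f_cur
        FETCH NEXT FROM f_cur INTO @logfile
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC SHRINKFILE (@logfile, 10)  -- target size in MB
            FETCH NEXT FROM f_cur INTO @logfile
        END
        CLOSE f_cur
        DEALLOCATE f_cur'
    EXEC (@sql)
    FETCH NEXT FROM db_cur INTO @db
END
CLOSE db_cur
DEALLOCATE db_cur
```

But again, I wouldn't schedule something like this in production.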

Tara

MediaPirate
Yak Posting Veteran

52 Posts

Posted - 2004-01-21 : 18:41:23
We use the SIMPLE recovery model for all of our databases. We run a nightly full backup, and in 7 years we've never needed logs as part of our backup scheme. The servers have 200+ databases on them, and I'd hate to create a job for each database.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-01-21 : 18:50:56
The point is that you shouldn't create a job for this. A database shouldn't be shrunk just because the log file has grown, even if it's considered large. You should only shrink if you know the file won't reach that size again. Performance suffers when you shrink a file and the database then has to grow it again.

Even if you are going to run DBCC SHRINKFILE for these 200+ databases, they can be put into one job. A job doesn't have to consist of a single item of work: you can have multiple job steps, and an individual job step can do multiple things. If the problem is how to write out the commands for each of the databases, then take a look at this:

SELECT 'DBCC DBREINDEX([' + u.name + '.' + o.name + '], '''', 80)'
FROM sysobjects o
INNER JOIN sysusers u ON o.uid = u.uid
WHERE o.name <> 'dtproperties' AND o.type = 'U'  -- user tables only
ORDER BY o.name

The above query writes out the DBCC DBREINDEX command for each table. It can easily be modified for DBCC SHRINKFILE (hint: use sysdatabases to get the list of database names, with WHERE dbid > 6 to skip the system databases).
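Following that hint, a sketch of the modified generator. It assumes the log is file id 2 (the default for a database with a single log file), and the 10 MB target size is illustrative:

```sql
-- Sketch: write out a DBCC SHRINKFILE command for each user database.
-- Assumes the log is file id 2; adjust the target size (MB) as needed.
SELECT 'USE [' + name + '] DBCC SHRINKFILE (2, 10)'
FROM master.dbo.sysdatabases
WHERE dbid > 6
ORDER BY name
```

Run the query, then paste its output into a query window (or a job step) to execute it.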

Also, whether you need log backups has to do with the amount of data the company is willing to lose in the case of a failure, not with how long the system has been available. Not taking log backups means the company can afford to lose 24 hours' worth of data (given that full backups run once a night). My company cannot take that risk on most of our systems, especially the mission-critical ones. The ones where we can take the risk are the ones where we can get the data from an alternate source.

Tara

MediaPirate
Yak Posting Veteran

52 Posts

Posted - 2004-01-21 : 20:08:48
OK,

Here's another idea: I used to have a script which would change all the databases' recovery models to FULL, SIMPLE, or BULK_LOGGED. Does anyone have it, by chance?

Thanks for your help Tara!

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-01-22 : 12:21:47
Why would you want to change it, though? Changing the recovery model isn't going to solve your problem. The problem you're having is that you don't have enough free disk space to support transaction log growth across the 200+ databases. The transaction log grows when things like DBCC DBREINDEX run, or when a large transaction runs, such as a huge DELETE. These are, or can be, normal activities for a database.

The script would be the same idea as the one I mentioned, except it would write out ALTER DATABASE commands and use sysdatabases.
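In other words, something along these lines (a sketch; the dbid > 6 filter skips the system and sample databases, as before):

```sql
-- Sketch: write out an ALTER DATABASE command for each user database,
-- setting the recovery model to SIMPLE (SQL Server 2000 syntax).
SELECT 'ALTER DATABASE [' + name + '] SET RECOVERY SIMPLE'
FROM master.dbo.sysdatabases
WHERE dbid > 6
ORDER BY name
```

Swap in FULL or BULK_LOGGED for SIMPLE depending on what you want to set.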

Tara

MediaPirate
Yak Posting Veteran

52 Posts

Posted - 2004-01-22 : 16:51:54
Tara

We have the disk space to back up the transaction logs, but we don't want them. It's really not my call, but they've never backed up transaction logs in all the time they've supported SQL. The problem is our customers don't purge their logs, and we aren't going to create individual jobs to purge over 2000 databases for them. When we create the databases for the customers we set the recovery model to SIMPLE, but since we also give them dbo access, they change it to FULL. We have never, in all the years we've been in operation, needed to restore from a log file. I can run a script monthly to purge the logs that are getting full, but I would have to do this less frequently if they were set to SIMPLE. I'm not familiar with SQL scripting, and I'm not a SQL admin; my responsibility is to ensure our LightSpeed backups complete every night.

Thank you, Jim

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-01-22 : 17:02:02
What do you mean that the customers don't purge their logs? Logs aren't "purged". With the SIMPLE recovery model, there is nothing to do. For the ones that get set to FULL by the customer, run this:

BACKUP LOG DBName WITH NO_LOG

This tells SQL Server to truncate the transaction log and NOT save the contents to a file. Note that truncation alone doesn't shrink the physical file; to reclaim the disk space, you would still run DBCC SHRINKFILE afterward.
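To run that across every database in one shot, one option is sp_MSforeachdb, an undocumented (but present in SQL Server 2000) procedure that substitutes ? with each database name. A sketch, using the same dbid > 6 filter as the earlier queries to skip system and sample databases:

```sql
-- Sketch: truncate the log of every user database via the undocumented
-- sp_MSforeachdb procedure. The ? placeholder is replaced per database.
EXEC sp_MSforeachdb '
    IF DB_ID(''?'') > 6
        BACKUP LOG [?] WITH NO_LOG'
```

Since it's undocumented, test it somewhere safe before pointing it at 2000 databases.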

Tara
   
