Sarat
Constraint Violating Yak Guru
265 Posts
Posted - 2003-04-10 : 13:34:58
Hello,

I would like to get an opinion on this document, a draft I created for my company explaining the estimated database growth. It is about four pages, so I know many may not care due to time, but if you are interested at all, please let me know your thoughts/comments/suggestions! I wanted to attach a Word doc but I don't know how to do that!

Thanks, Sarat.

******************************

Discount Tire Co.
SQL Server 2000 Capacity Planning for PeopleSoft

Objective: The objective of this document is to provide an estimate of the space required by the PeopleSoft database, which will help plan our storage layout and determine hardware requirements.

Introduction: This document begins by stating the need for PeopleSoft database capacity planning, the personnel responsible for allocating, managing and monitoring storage requirements, the estimation methods, the results, and some recommendations for managing database size.

Need for Capacity Planning:
When we create a database structure, several physical files (data files, log files) and objects (tables, indexes, triggers, views, keys, constraints, stored procedures, etc.) are created that occupy disk space. These objects and files grow in size as we create tables or new objects and manipulate data. With growing functionality in PeopleSoft and the addition of new modules, it is essential to estimate the growth of the database to ensure there are enough resources to support the PeopleSoft system.

Personnel Responsible:
System or database administrators are responsible for allocating, managing and monitoring storage requirements for SQL Server.

Estimation Methods:
In order to estimate the space that our database will occupy, the following factors were considered:
1. Size of system tables and projected growth.
2. Amount of data in tables, including projected growth.
3. Clustered and non-clustered index size.
4. Transaction log file size.

There are at least two methods to estimate database size:

1. A standard formula, which is tedious but may be more accurate:
   a. Calculate the number of bytes in a row.
   b. Determine the number of rows in a data page.
   c. Divide the number of rows in the table by the number of rows per data page.
   d. Multiply the resulting number of pages by 8 KB.
   Note: Please ignore the example; it is a hyperlink for my team.
   Ex: View the calculation for the PS_JOB table below:
   \\SQL Server\DBAdmin\DiskSpaceTracking\CalculatingSzOfJob.doc

2. The stored procedure 'sp_spaceused', which is fairly simple but may not be accurate if the usage statistics (updateusage) are not updated on the tables.
   Ex: Exec sp_spaceused gives the space used by the entire database.

Calculation Steps to Derive Projected Growth for PeopleSoft in the Next 5 Years:
1. Calculate the estimated growth of the production database over the next 5 years.
2. Calculate the estimated growth of the system databases over the next 5 years.
3. Include any miscellaneous growth (bulk data loads) over the next 5 years.
4. Consider the space occupied by at least two backup files on the drive.
5. Consider the space occupied by at least 12 hours of transaction log backup files.
6. Add the figures derived from steps 1 through 5 to get the final estimate.

Details of Calculation:

Assumptions:
1. We have PeopleSoft version 8.
2. No new modules (new tables, objects, etc.) are added.
3. The number of users accessing the database is 50.
4. Weekly indexing, statistics updates and defragmentation occur.
5. Regular backups are taken.
6. There will be at most two backup files (midday and nightly) stored on the drive, plus 12 hours of transaction log backups.

These figures determine the size of the P:\ drive only, which contains the data files (MDF) for the PeopleSoft production and system databases and the system log files (LDF). The sp_spaceused procedure was used.

Step 1:
1. Weekly increase in the data file for HR8PRD = 60 MB
2. Monthly increase in the data file for HR8PRD = 4 * 60 = 240 MB
3. Yearly increase in the data file for HR8PRD = 12 * 240 = 2880 MB

Step 2:
1. Yearly increase in the system databases (model, master, msdb) = 25 MB

Step 3:
1. Miscellaneous (to accommodate any unwanted increase due to maintenance procedure or backup failures, or errors in calculation) = 175 MB

Step 4:
Each backup file is at least the data + index file size of the database.
1. Yearly increase in backup file size (based on Step 1) = 2880 MB

Step 5:
Transaction log file size varies depending upon user interaction with the database. For 12 hours, the space required to store logs on disk is 4.3 GB for the current database.
1. Yearly increase (based on Step 1) for logs = 2880 MB

Step 6 - Final Estimate:
Yearly growth of databases on the P:\ drive = Step 1 + Step 2 + Step 3 + Step 4 + Step 5 = 2880 + 25 + 175 + 2880 + 2880 = 8840 MB = 8.8 GB, rounded up to 9 GB.
=> Growth in one year = 9 GB
=> Growth in FIVE years = 45 GB

Question: What will the projected growth be if we add new PeopleSoft modules or if we increase the number of users accessing the database?
Answer: Ideally we should get an estimate from PeopleSoft of how many database objects are added and how much space they require for every module (ex: Time & Attendance) that is implemented.
Ex: To get an estimate of how much space the Administer Training module requires, I added the data and index pages (by guessing the amount of data and transaction processing) used by the various training tables. This module occupies approximately 20 MB of space. Therefore, it is probably a good idea to set aside 5 GB of space for upcoming growth and additional users.

The L:\ drive stores the PeopleSoft database log file. For an OLTP (write-intensive) environment, it is recommended to allocate 25% of the data file (MDF) size. We have 16.9 GB, of which 12.9 GB is being utilized.
Hence, if the database grows by 3 GB in one year, the log file will be allocated 25% of 3 GB, which is 0.75 GB. In five years, the log file will grow by about 3.75 GB.

Summary:
The PeopleSoft database requires 50 GB of disk space on the P:\ drive, in addition to the already existing 110 GB, for the next five years.
The PeopleSoft log file requires 5 GB of disk space on the L:\ drive, in addition to the already existing 16.9 GB, for the next five years.
The T:\ drive, which stores TempDB, seems to have enough space (33 GB) as of now. If we have tremendous read activity in the future, we may need to allocate additional space to TempDB to improve query performance.

Recommendations:
It is essential to create maintenance plans, which will make efficient use of disk space.
1. Customize the standard indexing plan to run every week, especially for those tables which are write intensive.
2. Run the defragmentation job once a week. If required, customize the job to run more frequently for heavily written tables than for the rest of the group.
3. Specify fill factors for those tables whose clustered indexes are heavily modified.
4. Change the autogrow increment property, if required, after analyzing whether the database is write intensive or read intensive.
5. If performance is poor, if restores take too long, or to prevent unexpected table growth, consider creating one file for each physical disk or changing the default filegroup.
6. Use the PerfMon tool and Profiler to analyze which tables are the most used.

*****End
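For anyone who wants to sanity-check the numbers, the standard-formula method and the Step 6 roll-up can be sketched in a few lines of Python. The 300-byte row size and 1,000,000-row count are made-up placeholders (not real PS_JOB figures), and the rows-per-page calculation is simplified: it ignores per-row overhead and the slot array, so it is a rough lower bound on table size.

```python
# Rough sketch of the two calculations in the draft above. The 300-byte
# row size and 1,000,000-row count are hypothetical placeholders, NOT
# figures from the real PS_JOB table; the roll-up figures are from Step 6.

PAGE_SIZE_BYTES = 8192      # SQL Server data page size
PAGE_HEADER_BYTES = 96      # per-page header overhead

def estimate_table_mb(bytes_per_row: int, row_count: int) -> float:
    """Standard-formula method: rows per page -> pages -> size in MB."""
    rows_per_page = (PAGE_SIZE_BYTES - PAGE_HEADER_BYTES) // bytes_per_row
    pages = -(-row_count // rows_per_page)          # ceiling division
    return pages * PAGE_SIZE_BYTES / (1024 * 1024)

# Step 6 roll-up (all figures in MB, taken from the document):
yearly_mb = 2880 + 25 + 175 + 2880 + 2880   # data + system + misc + backup + log
yearly_gb = round(yearly_mb / 1000)         # 8840 MB, rounded up to 9 GB

print(f"hypothetical table: {estimate_table_mb(300, 1_000_000):.1f} MB")
print(f"yearly growth: {yearly_mb} MB (~{yearly_gb} GB), "
      f"five-year growth: {yearly_gb * 5} GB")
```

With these placeholder inputs the table comes out at roughly 300 MB, and the roll-up reproduces the document's 9 GB per year and 45 GB over five years.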
AndrewMurphy
Master Smack Fu Yak Hacker
2916 Posts
Posted - 2003-04-11 : 08:54:35
On a quick scan... I don't see why you infer that the log file needs to grow at the same rate as the database. Log files exist to allow recovery from database failures (they can be used in conjunction with backups) and shouldn't grow that much over time.

Look at the following scenario:
Monday, Tuesday, Wednesday, Thursday - no backup... loads of inserts/updates/deletes.
Friday - full database backup, then purge the log back to empty.
Repeat on a weekly basis. (It's not typical or wise to operate this way... it's just an example.)

The log will start off small/empty and grow during the week, tracking all database changes... until the next full backup. Then, after the purge, it can be reset back to a small size and left to grow until the next full backup.

So while a database may have 50 records inserted a day (and thus over a year expand by 365*50 records), the log only needs to cope with 5*50 records.

With more activity between backups (if extra modules/users come on board), the log will grow... but it doesn't make sense to allow the log to contain every change since the year dot... it only needs to track changes since the last backup.
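The weekly-backup scenario above is easy to model numerically. Here is a toy sketch in Python, assuming for simplicity that inserts happen only on the 260 weekdays of a year and the log is purged after each Friday backup:

```python
# Toy model of the scenario above: 50 inserts per weekday, full backup
# (followed by a log purge) every Friday. We track total rows in the
# data file vs. the high-water mark of rows the log must hold.

INSERTS_PER_DAY = 50

total_rows = 0        # data file: grows forever
log_rows = 0          # log: grows only between backups
log_high_water = 0

for day in range(260):                 # ~52 working weeks of 5 weekdays
    total_rows += INSERTS_PER_DAY
    log_rows += INSERTS_PER_DAY
    log_high_water = max(log_high_water, log_rows)
    if day % 5 == 4:                   # every "Friday": backup, purge log
        log_rows = 0

print(total_rows)       # 13000: the data file keeps every row
print(log_high_water)   # 250: the log never holds more than 5*50 rows
```

The data file accumulates every row ever inserted, while the log's peak size stays pinned at one backup interval's worth of changes, which is the point being made.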
Sarat
Constraint Violating Yak Guru
265 Posts
Posted - 2003-05-08 : 13:36:33
I am a bit confused, but I think I understood your point. Are you saying that since we do full backups, I should only be concerned about the size the log grows between the full backups? But if I ignore the rule that a log file should be 25% of the data file (because I have an OLTP-intensive database), could a small log file affect performance in the future?

Also, regarding what you stated below, wouldn't changing the log file size affect performance? I do 2 full backups every day. What do you mean by purging?

"The log will start off small/empty and grow during the week tracking all database changes....until the next full backup. Then after the purge, it can be reset back to a small size and left to grow until the next full backup."

Anyway, the reason I said the log file will increase over 5 years is purely based on the fact that a log file should be at least 20-25% of the data file. So if the data file increases, won't 25% of the data file be more than what it is now?

I agree with what you said, but I think I won't say 'no' to additional space unless they question this issue.

Thanks,
Sarat.
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2003-05-08 : 13:39:53
quote: Anyway, the reason I said the log file will increase over 5 years is purely based on the fact that a log file should be at least 20-25% of the data file. So if the data file increases, won't 25% of the data file be more than what it is now?
20-25% is just an average. It may not represent your environment at all.

If your transaction log gets too big, then instead of doing another full backup, just back up the log more often. We back up ours every 15 minutes and only do full backups once per day.

Tara
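The point above can be put in numbers: the log's peak size is roughly the write rate multiplied by the time between log backups, regardless of how big the data file gets. A toy sketch, using a made-up write rate of 360 MB per hour:

```python
# Illustration of the point above: peak transaction log size is driven by
# how often the log is backed up, not by the size of the data file.
# The 360 MB/hour write rate is a hypothetical figure for illustration.

WRITE_RATE_MB_PER_HOUR = 360.0   # assumed log generation rate

def log_high_water_mb(backup_interval_hours: float) -> float:
    """Approximate peak log size: writes accumulated between log backups."""
    return WRITE_RATE_MB_PER_HOUR * backup_interval_hours

print(log_high_water_mb(24))     # daily full backup only: 8640.0 MB
print(log_high_water_mb(0.25))   # log backup every 15 minutes: 90.0 MB
```

Going from one backup a day to a log backup every 15 minutes cuts the required log space by a factor of 96 at the same write rate.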