Question: Capacity planning
What factors should be considered, and how are they related, when doing capacity planning for a new setup (creating a new database)? Specifically: 1) Space requirements (sizing): how much space should be allocated, what should the datafile sizes be, and why? 2) SGA sizing: how much should be allocated initially, and why?
Answer: Capacity planning
Basic capacity planning should take into account the following:
1. Table structures and the maximum length of each row. You can use the average row length instead, if you have sufficient data populated.
2. The maximum number of rows per table (i.e. table growth) over a fixed period of time (e.g. 1 day, 1 month, 1 year). If you have indexes, there is an overhead of about 5-15% on top of this (more on that later).
3. How many tablespaces you will have, and the approximate size and contents of each.
4. How much data you wish to keep over a fixed period (if you have a process to purge data, how much will be removed from the database).
5. How much system resource (RAM, disk, SGA, etc.) you are allocating to the database.
6. Oracle overhead. Oracle typically incurs a small overhead; depending on complexity (lots of PL/SQL, triggers, BLOBs, etc.) it can vary between 5% and 15%, conservatively.
7. Growth factor. The completed sizing (steps 1-6) covers your data for a fixed period. Assume that total represents a percentage of your final database size, e.g. 70%; you then need to ensure 30% is left for growth (the 70-30 rule). For a large data warehouse, you may opt for a 20-80 rule (20% used, 80% reserved for growth). A worked sizing sketch follows this list.
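As a rough illustration of steps 1, 2, 6 and 7, here is a sketch, assuming you have a populated development copy of the schema and have gathered optimizer statistics with DBMS_STATS so that num_rows and avg_row_len are filled in. The 15% index overhead, 15% Oracle overhead and 70-30 rule are just the figures from the list above, not fixed values:

-- Estimate per-table footprint from optimizer statistics, then apply
-- the index (15%), Oracle (15%) and growth (70-30 rule) overheads.
SELECT table_name,
       num_rows,
       avg_row_len,
       ROUND(num_rows * avg_row_len / 1024 / 1024, 1) AS raw_mb,
       ROUND(num_rows * avg_row_len * 1.15 * 1.15 / 0.70 / 1024 / 1024, 1) AS planned_mb
  FROM user_tables
 WHERE num_rows IS NOT NULL
 ORDER BY planned_mb DESC;

Summing planned_mb across the tables gives a first-cut database size to feed into the datafile layout below.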
Once you know the size of the database, plan out the datafile layout. There is a file size limitation on 32-bit systems: you are limited to 2GB per datafile. On a 64-bit system there is no such file size limit, but there is plenty to worry about: the bigger the file, the longer it takes to do I/O against it, to back it up, and to restore it, and you cannot use parallelism against a single file. So you have to find the right size based on the resources available on the server. Unless your database is huge (>250GB), I would stick with 2GB datafiles.
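To make that layout concrete, here is a hypothetical tablespace built from fixed-size 2GB files (the tablespace name and paths are examples only). Spreading the files across separate mount points lets backup, restore and scans run in parallel across them:

-- A 6GB tablespace built from three fixed-size 2GB datafiles.
-- AUTOEXTEND OFF keeps each file at the planned size.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 2G AUTOEXTEND OFF,
           '/u02/oradata/ORCL/app_data02.dbf' SIZE 2G AUTOEXTEND OFF,
           '/u03/oradata/ORCL/app_data03.dbf' SIZE 2G AUTOEXTEND OFF
  EXTENT MANAGEMENT LOCAL;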
SGA sizing depends on your application requirements. If it is OLTP, configure a large shared pool; if it is DSS, you need a large buffer cache. 10g makes much of this planning obsolete through self-tuning, if you let it: all you need to do is give it a chunk of shared memory, and it adjusts the allocation among the various SGA components. The two initialization parameters are:
SGA_TARGET and PGA_AGGREGATE_TARGET.
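If you want to see how the self-tuning is actually carving up that chunk at runtime, one way (on 10g and later) is to query the dynamic SGA components view:

-- Show how SGA_TARGET is currently distributed among SGA components.
SELECT component,
       current_size / 1024 / 1024 AS current_mb
  FROM v$sga_dynamic_components
 WHERE current_size > 0;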
On a 64-bit system, Oracle by default goes for about 40% of system memory when allocating the SGA and PGA. If your system is a shared server, that percentage may not be viable, so depending on the free RAM, choose an appropriate SGA_TARGET and PGA_AGGREGATE_TARGET for the size of the database. You can always tune them later with the OEM tools. I would go with these numbers:
For a DB < 250GB: SGA_TARGET = 1GB, PGA_AGGREGATE_TARGET = 400MB.
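As a sketch, with an spfile in place those starting values could be set as follows (note that SGA_TARGET cannot be raised above SGA_MAX_SIZE while the instance is up):

-- Starting values for a small database; tune later with OEM.
ALTER SYSTEM SET sga_target = 1G SCOPE=BOTH;
ALTER SYSTEM SET pga_aggregate_target = 400M SCOPE=BOTH;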