
Thursday, 5 April 2012

11g DATA GUARD ADMINISTRATION AND SETUP TIPS


Assume we have a production database, "PC01PRMY", that must stay up and running in a 24x7 environment, and the senior DBAs have decided to set up a physical standby, "PC01SBY1", at a remote location as a disaster recovery solution and to keep production available during routine maintenance operations.



I have already posted on our blog how to set up a physical standby database for production, but this time I would like to share a few tips for setting up and administering the DR site.



Tip 1> Prefer active database duplication over duplication from a backup copy when creating the standby. With active duplication the standby is created from an online copy of the production data files, so no staging area is required and no archived redo logs need to be applied manually. Before you start the duplication, perform a few manual log switches on the primary; if no log switch has happened yet on the primary, the duplication can fail and you will have to clean everything up and restart it.
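A minimal sketch of that flow (the TNS aliases pc01prmy and pc01sby1 and the DB_UNIQUE_NAME are assumptions for illustration, not taken from the original setup):

SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

$ rman TARGET sys@pc01prmy AUXILIARY sys@pc01sby1

RMAN> DUPLICATE TARGET DATABASE
        FOR STANDBY
        FROM ACTIVE DATABASE
        DORECOVER
        SPFILE
          SET DB_UNIQUE_NAME='PC01SBY1'
        NOFILENAMECHECK;

The auxiliary instance must already be started NOMOUNT with a minimal parameter file before the duplication is run.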



Tip 2> Configure Oracle Net properly between the primary and the standby. In my experience students often make mistakes here, and later we face a lot of difficulty resolving them, especially when the database is managed by Oracle Restart, where there are separate Grid and database homes and the default listener runs from the Grid home.



Oracle Net configuration includes three main steps (a minimal sketch follows the list)-



Configure Listener (Keep in mind Listener home and port number)

Register database service with Listener (Use correct service name and Oracle database home)

Configure TNS alias for database service (Use correct hostname, Listener port number and database service name)
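A minimal sketch, assuming hostname node1, port 1521 and the database home shown below; all names and paths are placeholders, adjust them to your own environment:

# listener.ora (in the home that actually runs the listener, e.g. the Grid home)
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PC01SBY1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_home)
      (SID_NAME = PC01SBY1)))

# tnsnames.ora (on both sites)
PC01SBY1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PC01SBY1)))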



Tip 3> Although the default value of the LOG_ARCHIVE_DEST_STATE_n parameter is ENABLE, I still recommend setting this parameter explicitly.
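For example (destination number 2 is just an assumption for the remote destination):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;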



Tip 4> It is better to create the standby redo log groups in advance on the primary side; they are carried over to the standby during duplication, and they are already in place if the primary ever has to run in the standby role.
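A minimal sketch, assuming three online redo log groups of 50 MB each (so one extra standby group, following the "online groups + 1" rule) and placeholder file names:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/app/oracle/oradata/PC01PRMY/srl04.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/app/oracle/oradata/PC01PRMY/srl05.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/u01/app/oracle/oradata/PC01PRMY/srl06.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/u01/app/oracle/oradata/PC01PRMY/srl07.log') SIZE 50M;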



Tip 5> It is better to configure the LOCAL_LISTENER parameter explicitly and set it to the address of the listener you are using for Oracle Net connectivity, so that PMON automatically registers a service named "db_unique_name_DGB.db_domain" at run time; the broker uses this service internally for inter-site communication.
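For example (host and port are placeholders):

SQL> ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521))' SCOPE=BOTH;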



Tip 6> If you are managing Data Guard with the broker, please consider the following points-



=> When you remove a database or the entire broker configuration with REMOVE DATABASE or REMOVE CONFIGURATION, it is better to use the PRESERVE DESTINATIONS option; otherwise, when the database is removed from the configuration, the broker resets the corresponding parameters such as LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_CONFIG.
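For example, in DGMGRL (the database name is the one assumed in this post):

DGMGRL> REMOVE DATABASE 'PC01SBY1' PRESERVE DESTINATIONS;
DGMGRL> REMOVE CONFIGURATION PRESERVE DESTINATIONS;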



=> Use an SPFILE if you are managing Data Guard with the broker, and modify archive destination attributes only through the broker; otherwise the broker configuration and the SPFILE settings become inconsistent when you modify them with ALTER SYSTEM commands.



=> Identify inconsistent attribute settings between the broker and the SPFILE using SHOW DATABASE <DB_NAME> InconsistentProperties. If you want to resolve the inconsistency in favour of the broker values, simply disable and re-enable the database through the broker; if you want to resolve it in favour of the SPFILE values, update the values with the EDIT DATABASE broker command.
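For example (ArchiveLagTarget is only an illustrative property here):

DGMGRL> SHOW DATABASE 'PC01SBY1' 'InconsistentProperties';

DGMGRL> DISABLE DATABASE 'PC01SBY1';
DGMGRL> ENABLE DATABASE 'PC01SBY1';

DGMGRL> EDIT DATABASE 'PC01SBY1' SET PROPERTY 'ArchiveLagTarget'=1200;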



=> Sometimes we need to recreate the broker configuration, but even after recreating it the broker keeps reading the previous configuration from memory. To get around this, change the location of the broker configuration files so that the broker is forced to read the new configuration settings.
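A minimal sketch of relocating the configuration files (the file paths are placeholders):

SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='/u01/app/oracle/dr1PC01PRMY_new.dat' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='/u01/app/oracle/dr2PC01PRMY_new.dat' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;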

--------------------------------------------------------------------------------------------------------

Related links:


HOME


11G DATA GUARD NEW FEATURES
DATA GUARD SETUP

RECOVERY USING STANDBY DATABASE
http://koenigocm.blogspot.in/search/label/RMAN

Friday, 23 March 2012

11G DATA GUARD NEW FEATURES


Real-Time Query. In Oracle Database 8i it was possible to bring a standby database into READ ONLY mode so that it could be used for reporting purposes, but it was necessary to switch it back to standby mode for reapplication of pending change vectors from the archived redo logs transported from the primary database. Oracle Database 11g now lets me run queries in real time against any physical standby database without any disturbance to receipt and application of redo.
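A minimal sketch of enabling real-time query on a physical standby (assuming managed recovery was already running):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;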



Snapshot Standby Databases. Oracle Database 11g offers another intriguing prospect: the ability to open a physical standby database for testing or QA purposes while simultaneously collecting production changes for immediate reapplication in case disaster recovery is required. This snapshot standby database still accepts redo information from its primary, but unlike the first two standby types, it does not apply the redo to the database immediately; instead, the redo is only applied when the snapshot standby database is reconverted back into a physical standby. This offers significant leverage because in theory, a QA environment that requires specifically dedicated, identical hardware is no longer required.
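Converting between the two roles is a one-line operation; a sketch using the broker (the database name is assumed):

DGMGRL> CONVERT DATABASE 'PC01SBY1' TO SNAPSHOT STANDBY;
(run the tests, then convert back)
DGMGRL> CONVERT DATABASE 'PC01SBY1' TO PHYSICAL STANDBY;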



Improved Handling of Role Transitions. The addition of standby snapshot databases brings the total of different Data Guard standby database types to three (physical, logical, and snapshot), so Oracle Database 11g also makes it much easier to transition between these different roles via either Data Guard Broker (DGB) command line execution or Enterprise Manager Grid Control. Role transitions are simpler to execute and complete more quickly than in earlier releases.



Improvements to Rolling Database Upgrades. Oracle Database 11g supports rolling database upgrades to be performed against a physical standby database by first transforming it into a logical standby database with a few simple commands before the upgrade begins. Once the upgrade is done, the logical standby database is reverted to its original physical standby state. Oracle 11g leverages this capability as well as the improved speed and simplicity of role transitions to perform system and database patching in a fraction of the time it would’ve taken in earlier releases, and it’s especially powerful in a Real Application Clusters (RAC) database environment.



SQL Apply Enhancements. Logical standby databases are obviously central to these new role transition features, but they use SQL Apply technology to apply change vectors to data. It therefore makes sense that Oracle Database 11g provides significant improvements to this crucial part of Data Guard architecture. SQL Apply now supports parallel DDL execution, Fine-Grained Auditing (FGA), Virtual Private Database (VPD), and Transparent Data Encryption (TDE), as well as simpler real-time SQL Apply reconfiguration and tuning.



Enhanced Redo Logs Transport. Physical standby databases have always used archived redo logs for application of change vectors to data. Oracle Database 11g augments redo transport with some long-overdue features, including compression and SSL authentication of redo logs while they’re being transmitted between the primary and standby sites.



Heterogeneous Data Guard. Oracle Database 11g allows the primary and standby databases to use different operating systems (for example, Windows 2003 Server and Oracle Enterprise Linux) as long as both operating systems support the same endianness.



Fast Start Failover Improvements. Oracle introduced this feature set in Release 10gR2, but it’s been enhanced significantly in Oracle 11g to permit much finer-grained control over the conditions under which a fast-start failover would be initiated.
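A sketch of that finer-grained control through the broker (the threshold value and the condition chosen here are purely illustrative):

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Datafile Offline";
DGMGRL> ENABLE FAST_START FAILOVER;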



“Live Cloning” of Standby Databases. Finally, Oracle 11g has made it extremely simple to set up a standby database environment because Recovery Manager (RMAN) now supports the ability to clone the existing primary database directly to the intended standby database site over the network via the DUPLICATE DATABASE command set while the target database is active. This means it’s no longer necessary to first generate, then transmit, and finally restore and recover RMAN backups of the primary database on the standby site via tedious (and possibly error-prone!) manual methods; instead, RMAN automatically generates a conversion script in memory on the primary site and uses that script to manage the cloning operation on the standby site with virtually no DBA intervention required.
------------------------------------------------------------------------------------------------------------

Related links:


HOME


RECOVERY USING STANDBY DATABASE
http://koenigocm.blogspot.in/search/label/RMAN

Friday, 24 February 2012

ORACLE 11GR2 DATABASE FLASH CACHE

Hi DBAs, I was working with the Database Flash Cache and found it a very interesting 11g feature, so I thought I would share it with you as well. The Database Flash Cache works as a level-2 cache for the database buffer cache and is very useful when memory becomes a bottleneck in an OLTP-style database. With this feature we can configure a flash device (anything from a cheap USB pen drive to an expensive high-quality flash drive) to store clean buffers that have aged out of the buffer cache.
NOTE:- The Database Flash Cache feature can only be configured on Oracle Enterprise Linux or Solaris.
To be able to use the Database Flash Cache you should follow these steps.
1. Log in to the database and shut it down cleanly.
sql>shutdown immediate
2. Install patch 9352237 using the OPatch utility.
[oracle@node1 9352237]$ /u01/app/oracle/product/11.2.0/db_home/OPatch/opatch apply
3. Install patch 8974084, which allows you to use the Database Flash Cache on Oracle Enterprise Linux even if you don't have Exadata storage. The patch is named:
8974084: META BUG FOR FLASH CACHE 11.2PL BUGS TO BACKPORT TO 11.2.0.1 OEL


Once you install the patches you can use any old flash device as a database flash cache. If you just want to experiment with a USB flash device, you first need to get that device mounted. On my test machine I created a directory "/mnt/usbflash" and then created an /etc/fstab entry like this:
   /dev/sda1               /mnt/usbflash           vfat    noauto,users,rw,umask=0 0 0
On your system you might need to change "/dev/sda1" to another device depending on how your fixed disks are configured. You should then be able to mount the flash drive by typing "mount /dev/sda1". Make sure that the mount point is writable by oracle (chmod 777 /mnt/usbflash).
Once mounted, you configure the flash cache by setting the parameters DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE:
alter system set db_flash_cache_file='/mnt/usbflash/oraflashcache.dat' scope=spfile;
alter system set db_flash_cache_size=1000M scope=spfile;
(Restart the instance for the settings to take effect, since DB_FLASH_CACHE_FILE is a static parameter.)
Note that the value of DB_FLASH_CACHE_FILE needs to be a file on the flash drive, not the flash drive mount point itself.
Once these parameters are set, the flash cache will be enabled and will act as a secondary cache to the buffer cache.  When a block is removed from the primary cache, it will still exist in the flash cache, and can be read back without a physical read to the spinning disk.
Monitoring

There are a few ways to examine how the flash cache is being used. Firstly, V$SYSSTAT contains some new statistics showing the number of blocks added to the cache and the number of "hits" that were satisfied from cache:

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('physical read flash cache hits', 'physical reads',
                'consistent gets', 'db block gets', 'flash cache inserts');
Although flash reads save time compared with db file sequential reads, the overhead of maintaining the cache can be high because flash-based SSDs have a relatively severe write penalty.
All flash-based Solid State Disks have issues with write performance. However, cheap Multi Level Cell (MLC) flash takes about 3 times as long to write as the more expensive Single Level Cell (SLC). When a flash drive is new, the empty space can be written to in single-page increments (usually 4KB); when the drive is older, writes typically require erasing a complete 128-page block, which is very much slower. My cheap USB drive was old and MLC, so it had very poor write performance. But even the best flash-based SSD is going to be much slower for writes than for reads, and in some cases using a flash cache might slow a database down as a result. So monitoring is important.
There are a couple of other V$SYSSTAT statistics of interest:

select name,value from v$sysstat where name like '%flash cache%'
To examine the contents of the cache, we can query the V$BH view. Buffers in the flash cache have STATUS values such as 'flashcur', allowing us to count the buffers from each object in the main buffer cache and in the flash cache:
SELECT owner || '.' || object_name AS object,
       SUM(CASE WHEN b.status LIKE 'flash%' THEN 1 END) AS flash_blocks,
       SUM(CASE WHEN b.status LIKE 'flash%' THEN 0 ELSE 1 END) AS cache_blocks,
       COUNT(*) AS total_blocks
  FROM v$bh b
  JOIN dba_objects ON (objd = object_id)
 GROUP BY owner, object_name
 ORDER BY 4 DESC;
Conclusion
The Database Flash Cache can be a cheap way to extend the buffer cache and reduce physical reads to spinning disk. We should be cautious, however, because the flash write penalty may result in performance problems.

Related links:

HOME



Saturday, 21 January 2012

I/O CALIBRATION IN 11G


A very good evening from Chandigarh (INDIA)... Here at Koenig we come with another powerful post on one of the 11g Oracle Database's most helpful goodies, known as I/O calibration. In this post I have focused on making the concept of this calibration clear to a DBA and on showing how to exploit such a wonderful tool. I first got to know about this tool last year when I read a blog by Vinay Pandey (The Oracle Security Vault Administrator). So here is my explanation of this feature...

At any given time a DBA is looking to establish a solid I/O subsystem for an application. If any component in the I/O stack is limited in throughput, it will become the weakest link in the I/O flow. The I/O stack of course includes the physical disks, the storage array, the storage switches and the HBAs.

I/O Calibration is a modified version of the ORION tool and is based on the asynchronous I/O library. The feature is exposed through the PL/SQL procedure DBMS_RESOURCE_MANAGER.CALIBRATE_IO(). It is used to evaluate the performance of the storage subsystem and determine whether I/O performance problems come from the database host or from the storage subsystem. When it is invoked it generates an I/O-intensive, read-only workload of small random I/O and large-block (1MB) sequential I/O. Unlike other tools, it drives the I/O through the Oracle code stack (the same software libraries the database uses), not some generic I/O generator. This capability is important - you are testing the I/O for an Oracle database, right?

Finally, when the job is finished you can view the results in the DBA_RSRC_IO_CALIBRATE view or on the I/O Calibration page: maximum I/O operations per second, maximum megabytes per second and the average actual latency.
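For example (treat the column list as a sketch; it may vary slightly by release):

SQL> SELECT max_iops, max_mbps, latency, num_physical_disks
       FROM dba_rsrc_io_calibrate;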

The CALIBRATE_IO procedure has 2 input parameters and 3 output parameters.

The input parameters are NUM_DISKS and MAX_LATENCY.

NUM_DISKS - To get the most accurate results, it's best to provide the actual number of physical disks that are used for this database. The storage administrator can provide this value. Keep in mind that when ASM is used to manage the database files, say in the DATA diskgroup, then only the physical disks that make up the DATA diskgroup should be counted for the NUM_DISKS value; i.e. do not include the disks from the FRA diskgroup.
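If ASM is in use, one hedged way to count the disks in the DATA diskgroup mentioned above is:

SQL> SELECT dg.name, COUNT(*) AS disk_count
       FROM v$asm_disk d
       JOIN v$asm_diskgroup dg ON d.group_number = dg.group_number
      WHERE dg.name = 'DATA'
      GROUP BY dg.name;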

MAX_LATENCY - This should be set to the response time defined by the service level agreement for your application; for example, your 95th-percentile response time SLA might be 10 seconds. I hope you are familiar with such percentile thresholds.

Now the 3 output parameters are MAX_IOPS, MAX_MBPS and ACTUAL_LATENCY:

MAX_IOPS - the maximum number of I/O Operations per second
MAX_MBPS - the maximum Megabytes per second
ACTUAL_LATENCY - the actual latency observed during the test

To run I/O calibration and assess the I/O capability of the storage subsystem used by Oracle Database, use the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure:

SET SERVEROUTPUT ON
DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
-- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
   DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);

  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
  dbms_output.put_line('max_mbps = ' || mbps);
end;
/

When running the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure, consider the following:


# Do not run the procedure multiple times across separate databases that use the same storage subsystem

# Quiesce the database to minimize I/O on the instance

# For Oracle Real Application Clusters (RAC) configurations, ensure that all instances are opened to calibrate the storage subsystem across nodes

# The execution time of the procedure is dependent on the number of disks in the storage subsystem and increases with the number of nodes in the database

# In some cases, asynchronous I/O is permitted for datafiles, but the I/O subsystem for submitting asynchronous I/O may be maximized, and I/O calibration cannot continue. In such cases, refer to the port-specific documentation for information about checking the maximum limit for asynchronous I/O on the system. The calibration runs in different phases: in the first phase, a small-block random I/O workload is performed on each node and then concurrently on all nodes; the second phase generates large-block sequential I/O on each node. Note that I/O calibration expects a datafile to be spread across all of the disks specified in the NUM_DISKS value. Furthermore, offline files are not considered for file I/O.
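Related to the asynchronous I/O point above, a hedged check (using the V$IOSTAT_FILE columns documented for 11g) to confirm that async I/O is actually enabled on the data files before calibrating:

SQL> SELECT file_no, filetype_name, asynch_io
       FROM v$iostat_file
      WHERE filetype_name = 'Data File';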

I also felt the need to explain the following metrics, which were just thrown around above:

IOPS (I/Os per second): This metric represents the number of small random I/Os that can be serviced in a second. The IOPS rate mainly depends on how fast the disk media can spin.

MBPS (megabytes per second): The rate at which data can be transferred between the computing server node and the storage array depends on the capacity of the I/O channel that is used to transfer data. More data can be transferred through a wider pipe.

I/O latency: Traditionally this means the time it takes to access a particular sector on the disk, but from the database point of view it means the entire time it takes for a submitted I/O request to be serviced by the storage. A high latency usually indicates an overloaded system.




Have a nice day !!!

Related links:


HOME

Monday, 16 January 2012

MMON WILL TRY TO PURGE OLD DATA FROM SYSAUX

Hello DBAs, especially Karan. Well, I just had a strong sweet coffee and am feeling good.

Today I found a very interesting thing in the alert log file indicating that whenever the SYSAUX tablespace becomes full, MMON will try to purge old data from SYSAUX to free space for other data.

Please look at the contents that I found in the alert log file:
---------------------------------------------------------------


Tue Jan 17 10:00:03 2012
ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_2503051407_14753 by 128 in tablespace SYSAUX
MMON Flush encountered SYSAUX out of space error(1688).
MMON (emergency) purge of WR snapshots (14690) and older
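If you want to see which components are occupying SYSAUX before MMON has to do an emergency purge like this, a simple check (a sketch, output trimmed) is:

SQL> SELECT occupant_name, space_usage_kbytes
       FROM v$sysaux_occupants
      ORDER BY space_usage_kbytes DESC;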



Related links:

HOME

Friday, 13 January 2012

MUTEXES AND LATCHES


Well, my friend Ashish (a dangerous DBA at Koenig) was literally asking me every day to post some stuff on why Oracle is switching to mutexes, so here I come with my post on mutexes and latches too. I have kept this post simple and have not gone too deep into the details, so that beginners can also cherish every moment. Also, it's a nice sunny day in New Delhi despite the cold, but no coffee here, which is sad... Here we go..

As we know, a mutex provides mutual exclusion and comes from the world of multi-threaded programming. Both latches and mutexes are used for low-level locking; mutexes, again, are used to serialize access to shared structures. Most importantly, they can be taken in shared or exclusive mode, and getting a mutex can be done in wait or no-wait mode.

Let me post a few of my notes on mutexes, which I discussed with Rajeev Ranjan and which also draw on Tanel Poder's posts on the Oracle forum:

1) The advantage over latches is that mutexes require less memory and are faster to get and release.
2) Why faster? Because the code path to get and release them is shorter.
3) Are they acquired in the same immediate or willing-to-wait modes as latches? The answer is YES.
4) To me and Mr. Rajeev Ranjan it seems that since mutexes can do the same things as latches while using less memory and in a more performant way, there are already enough good reasons for using them....

Some notes from Tanel Poder (an Oracle guru):

1) Library cache latching is still needed for parsing etc, the mutexes address only the pinning issue in library cache
2) Mutexes are currently used for library cache cursors (not other objects like PL/SQL stored procs, table defs etc)
3) As mutexes are a generic mechanism (not library cache specific) they're used in V$SQLSTATS underlying structures too
4) When mutexes are enabled, you won't see cursor pins from X$KGLPN anymore (as X$KGLPN is a fixed table based on the KGL pin array which wouldn't be used for cursors anymore)

As for now, a few DBAs may still not be convinced, I guess, unless you really know how object handles, cursors, child cursors and pinning work in the shared pool; for them I have written the following lines.

# The actual benefit is that for each child cursor handle the mutex itself acts as the cursor pin structure. So if you have a cursor open (cached in the session cursor cache) you don't need to take the library cache latch (which was previously needed for changing the cursor pin status); you can modify the cursor's mutex reference count directly (with the help of pointers in the open cursor state area in the session's UGA). Therefore you get much higher scalability when pinning/unpinning cursors (no library cache / library cache pin latching is needed, so no false contention), and no separate pin structure needs to be allocated and maintained.

If you have understood what I just said above, you can see how this leads to false contention: suppose there are lots of sessions looking up statements in the shared pool; the library cache latch then comes under contention. This is false contention, because if the sessions are merely looking up already soft-parsed cursors that is a good thing, yet because the latch protecting the lookup mechanism is not available, it is effectively under contention. This is removed with the help of mutexes, since they are allocated per protected structure, so there is no starvation issue as with latches.

In Oracle 10.2.0.2+ the library cache pin latch usage was replaced with mutexes whenever _kks_use_mutex_pin was true; a few other things, such as the V$SQLSTATS arrays and parent cursor examination, were also protected by mutexes.

In 11g all library cache related latches except “library cache load lock” are gone and corresponding operations are protected by mutexes instead. The “library cache” latches have been replaced by “Library Cache” mutexes for example.

The proof is

SQL> select name from v$latch where lower(name) like '%library%';

NAME
----------------------------------------------------------------
library cache load lock


At the end of this post I would again like to cover some basic concepts, in case some of you need a bit of revision...

Mutexes have less potential for false contention because latches typically protect multiple objects. When a latch protects one or more hot objects, the latch itself becomes a serialization point when accessing any of the objects protected by that latch. This can be a false contention point, where the contention is on the protection mechanism (the latch) rather than on the target object. Because a mutex can be created for each protected structure, false contention is much less likely.

A mutex can be referenced concurrently by many sessions, provided all sessions reference it in S (shared) mode. The total number of sessions referencing a mutex in S mode is called the reference count (ref count). X (exclusive) mode, on the other hand, can be held by only one session. Mutexes have a dual nature: they can act as a serialization mechanism like a latch, and also as a pin (for example, preventing an object from aging out). The ref count of a mutex is a replacement for the library cache pin; instead of each session creating and then deleting a library cache pin when executing a cursor, each session increments and decrements the ref count (so the ref count replaces n distinct pins).... Have a nice day
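If you want to see where mutex contention is actually occurring on your own system, a hedged starting point is the V$MUTEX_SLEEP view (available from 10.2 onward):

SQL> SELECT mutex_type, location, sleeps, wait_time
       FROM v$mutex_sleep
      ORDER BY sleeps DESC;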

--------------------------------------------------------------------------------------------------------

Related links :


HOME


LATCHES LOCKS PINS AND MUTEXES
MUTEXES AND LATCHES
PARENT AND CHILD CURSORS IN ORACLE
QUICK OVERVIEW OF SHARED POOL TUNING
SHARED POOL ARCHITECTURE

SQL PLAN MANAGEMENT – ORACLE 11g : Part-1
 http://koenigocm.blogspot.in/search/label/SQL%20PLAN%20MANAGEMENT%E2%80%93%20ORACLE%20%2011g%20%3A%20Part-1



SQL PLAN MANAGEMENT – ORACLE 11g : Part-2

 LIBRARY CACHE LOCK/PIN DEMONSTRATED
 http://koenigocm.blogspot.in/search/label/LIBRARY%20CACHE%20LOCK%2FPIN%20DEMONSTRATED


SHARED POOL TUNING - A DEMONSTRATION




Thursday, 12 January 2012

SOME IMPORTANT TUNING TIPS

UGA and Oracle Shared Server: The point about the UGA is that shared server sessions take their UGA space from the space in the shared pool otherwise available for SQL and PL/SQL objects. For example, the shared pool may be tuned properly for 20 users but not be large enough for 200 users. Space in the large pool can be set aside for the UGA, eliminating this tuning factor. The streams pool should be configured if you are using Oracle Streams. If the streams pool is NOT configured, Streams will use the shared pool; when Streams uses 10% of the shared pool it starts spilling to disk, so Streams could end up using more than 10% of the shared pool. To avoid memory sizing problems, fragmentation and contention, configure the Streams pool.
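A minimal sketch of setting these pools explicitly (the sizes are placeholders, not recommendations, and with SGA_TARGET set they act as minimums):

SQL> ALTER SYSTEM SET LARGE_POOL_SIZE=128M SCOPE=BOTH;
SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=64M SCOPE=BOTH;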

Large Pool: RMAN only uses the large pool when BACKUP_TAPE_IO_SLAVES=TRUE.

Database High Availability: Best Practices: Flashback Database can have a 1-2% overhead on write intensive systems.

Database High Availability: Best Practices: These practices are highly recommended for HA databases, and are often implemented in other databases as well. Block checking involves two parameters, DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING. The major difference is that DB_BLOCK_CHECKSUM calculates a checksum to discover corruptions introduced during block transfers (I/O), while DB_BLOCK_CHECKING checks the internal consistency of the block by going through the data in the block, and can discover other types of corruption. From Metalink Note 336194.1, DB_BLOCK_CHECKSUM can be OFF, TYPICAL or FULL:

OFF - DBWn calculates checksums only for the SYSTEM tablespace, not for user tablespaces.

TYPICAL (default) - turning the feature on in TYPICAL mode causes only an additional 1% to 2% overhead. Oracle recommends setting DB_BLOCK_CHECKSUM to TYPICAL. For backward compatibility the values TRUE (implying TYPICAL) and FALSE (implying OFF) are preserved.

FULL - in FULL mode Oracle computes the checksum before any change and compares it to the stored value, which causes 4% to 5% overhead. This approach uncovers data corruption in memory itself and reports errors at that point, which is highly useful for preventing corruption at the disk level as well as its propagation to the standby database.

From the Reference Guide, DB_BLOCK_CHECKING controls whether or not Oracle performs block checking for database blocks. The checking performed depends on the value you supply:

OFF (default) - no block checking is performed for blocks in user tablespaces. However, semantic block checking for SYSTEM tablespace blocks is always turned on.

LOW - basic block header checks are performed after block contents change in memory (for example, after UPDATE or INSERT statements, on-disk reads, or inter-instance block transfers in RAC).

MEDIUM - all LOW checks are performed, as well as semantic block checking for all non-index-organized table blocks.

FULL - all LOW and MEDIUM checks are performed, as well as semantic checks for index blocks (that is, blocks of subordinate objects that can actually be dropped and reconstructed when faced with corruption).

Oracle checks a block by going through the data in the block, making sure it is logically self-consistent. Block checking can often prevent memory and data corruption. Block checking typically causes 1% to 10% overhead, depending on workload and the parameter value; the more updates or inserts in a workload, the more expensive it is to turn on block checking. You should set DB_BLOCK_CHECKING to FULL if the performance overhead is acceptable. For backward compatibility the use of FALSE (implying OFF) and TRUE (implying FULL) is preserved.
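A minimal sketch of setting both parameters to the values discussed above (adjust to your own overhead tolerance):

SQL> ALTER SYSTEM SET DB_BLOCK_CHECKSUM=TYPICAL SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_BLOCK_CHECKING=FULL SCOPE=BOTH;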

Internal Fragmentation Considerations: The problem with all of the scenarios in the first set of bullets is that they leave empty space below the HWM. If PCTFREE is set too high the space is never used; too low and you get migrated rows.

LOBs use space inside the LOB segment for versioning instead of undo. The amount of space used depends on PCTVERSION and RETENTION. PCTVERSION integer - specify the maximum percentage of overall LOB storage space used for maintaining old versions of the LOB. The default value is 10, meaning that older versions of the LOB data are not overwritten until they consume 10% of the overall LOB storage space. You can specify the PCTVERSION parameter whether the database is running in manual or automatic undo mode; PCTVERSION is the default in manual undo mode. RETENTION - use this clause to indicate that Oracle Database should retain old versions of this LOB column for an amount of time. Oracle Database uses the value of the UNDO_RETENTION initialization parameter to determine the amount of committed undo data to retain in the database. You can specify the RETENTION parameter only if the database is running in automatic undo mode; in this mode, RETENTION is the default unless you specify PCTVERSION. You cannot specify both PCTVERSION and RETENTION. If there is significant update activity on LOBs, the amount of space used by old versions of the LOBs can be large.

Direct loads insert above the HWM; when the load is complete the HWM is moved. Deletes remove rows below the HWM, so unless there are conventional INSERTs the space released by the deletes will never be reused. Updates and deletes on indexed columns produce deleted leaf rows in the index. These occupy reusable space in the index, but if there are few or no inserts the space will not be reused.
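A hedged example of where these clauses go (the table and column names are made up purely for illustration):

-- manual undo mode style: keep up to 10% of LOB space for old versions
CREATE TABLE doc_store (doc_id NUMBER, doc_body CLOB)
  LOB (doc_body) STORE AS (PCTVERSION 10);

-- automatic undo mode style: tie old LOB versions to UNDO_RETENTION
CREATE TABLE doc_store2 (doc_id NUMBER, doc_body CLOB)
  LOB (doc_body) STORE AS (RETENTION);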

Sizing the Redo Log Buffer: From 10gR2 onward there is effectively a separate log buffer (strand) per CPU. There are possibly multiple redo allocation latches because each log buffer strand requires its own allocation latch, so there are now multiple redo allocation latch children and not one single latch. The log file sync wait event's P1 parameter has the number of the buffer experiencing the wait.
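To see how many redo allocation latch children actually exist on your instance, a quick hedged check is:

SQL> SELECT COUNT(*)
       FROM v$latch_children
      WHERE name = 'redo allocation';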

Related links:


HOME