Important questions

  1. What happens during begin backup mode?

The following activities happen during begin backup mode:

  • DBWn checkpoints the tablespace (writes out all dirty blocks as of a given SCN)
  • CKPT stops updating the Checkpoint SCN field in the datafile headers and begins updating the Hot Backup Checkpoint SCN field instead.
  • LGWR begins logging full images of changed blocks the first time a block is changed after being written by DBWn.

Those three actions are all that is required to guarantee consistency once the file is restored and recovery is applied. By freezing the Checkpoint SCN, any subsequent recovery on that backup copy of the file will know that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, and that it should look for the archivelog containing that SCN, and apply recovery starting there.

Note that during hot backup mode, checkpoints to datafiles are not suppressed. Only the main Checkpoint SCN flag is frozen, but CKPT continues to update a Hot Backup Checkpoint SCN in the file header.

By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in hot backup mode will also be present in the archivelogs in case they are ever used for a recovery. Most of the Oracle user community knows that  Oracle generates a greater volume of redo during hot backup mode. This is the result of Oracle logging of full images of changed blocks in these tablespaces. Normally, Oracle writes a change vector to the redologs for every change, but it does not write the whole image of the database block. Full block image logging during backup eliminates the possibility that the backup will contain unresolvable split blocks. To understand this reasoning, you must first understand what a split block is.

Typically, Oracle database blocks are a multiple of O/S blocks. For example, most Unix filesystems have a default block size of 512 bytes, while Oracle's default block size is 8k. This means that the filesystem stores data in 512 byte chunks, while Oracle performs reads and writes in 8k chunks or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it is making this copy, your process is reading in O/S-block-sized increments. If DBWn happens to be writing a DB block into the datafile at the same moment that your script is reading that block's constituent O/S blocks, your copy of the DB block could contain some O/S blocks from before the database performed the write, and some from after. This would be a split block. By logging the full block image of the changed block to the redologs, Oracle guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the archivelogs.
All of these mechanisms exist for the benefit of the backup copy of the files and any future recovery. They have very little effect on the current datafiles and the database being backed up. Throughout the backup, server processes read datafiles and DBWn writes them, just as when a backup is not taking place. The only difference in the open database files is the frozen Checkpoint SCN and the active Hot Backup Checkpoint SCN.
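A minimal sketch of a user-managed hot backup using this mechanism (the tablespace name and file paths below are examples, not taken from the text above):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> SELECT file#, status, change#, time FROM v$backup;   -- files of the tablespace show status ACTIVE
$ cp /u01/oradata/ORCL/users01.dbf /backup/users01.dbf    # O/S copy while the tablespace is in backup mode
SQL> ALTER TABLESPACE users END BACKUP;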

 

2. How to recover when a redolog file is corrupted? (Many scenarios.)

Recover database when online redolog file is corrupted.

Oracle Database can operate in two modes.

1. Archive log mode (The archive logs are generated and online backup is possible)

2. No archive log mode (no archive logs are generated and only cold backup is possible)

 

The online redolog file is the physical file which is updated with redo information (the changed information in the database) from the redolog buffer by the LGWR background process. The online redo log files can be in different states:

1. INACTIVE

2. ACTIVE

3. CURRENT

4. UNUSED

 

CURRENT: – The online redolog file to which LGWR is currently writing the redo information.

ACTIVE: – An online redolog file that LGWR has finished writing but that is still required for instance (crash) recovery because its checkpoint has not yet completed.

INACTIVE: – An online redolog file that LGWR has finished writing and that is no longer required for instance recovery.

UNUSED: – An online redolog file that has never been written to by LGWR (for example, a group that was just added).
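To see which state each group is currently in, you can query v$log, for example:

SQL> SELECT group#, sequence#, members, archived, status FROM v$log;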

 

Now we are going to discuss how to recover the database when an online redolog file is corrupted for some reason. There are various scenarios; I have divided them by archivelog mode and noarchivelog mode and will try to cover most of them.

No Archivelog mode:-

 

a)      Online redolog file in INACTIVE and UNUSED state:-

This does not require crash recovery. Clear the logfile group, or drop and recreate the group manually and delete the old redolog file at the OS level.

Commands:-

SQL> ALTER DATABASE CLEAR LOGFILE GROUP <group#>;

SQL> ALTER DATABASE DROP LOGFILE GROUP <group#>;

Delete the redolog file from the physical location.

SQL> ALTER DATABASE ADD LOGFILE GROUP <group#> ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 10M;

 

b)     Online redolog file in ACTIVE state:-

This would require crash recovery. Issue a checkpoint.

Command:-

SQL>ALTER SYSTEM CHECKPOINT;

If the checkpoint fails, then perform a complete recovery of the database from the last full backup. The steps to perform complete recovery are explained in the complete recovery section.

 

c)      Online redolog file in CURRENT state:-

LGWR will be terminated in this case, and you have to restore the database from the last cold backup (in noarchivelog mode, all changes made since that backup are lost).

 

 

Archivelog mode:-

a)      Online redolog file in UNUSED state:-

The steps would be same as that of no archivelog mode.

 

b)     Online redolog file in INACTIVE state:-

Check whether the redolog file has been archived or not.

Command:-

SQL> Select group#, archived from v$log;

 

Yes: follow the same steps as in noarchivelog mode.

No: issue the command below.

SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group#>;

After this command there is a gap in the archived log sequence, so take a fresh full backup immediately; earlier backups can no longer be rolled forward through the cleared log.

c)      Online redolog file in ACTIVE state:-

Perform a check point and if it fails then perform a complete recovery.

Command:-

SQL>ALTER SYSTEM CHECKPOINT;

d)     Online redolog file in CURRENT state:-

LGWR will be terminated in this case, and we have to perform incomplete (point-in-time) recovery.
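A minimal sketch of the incomplete recovery in this case, assuming the datafiles have already been restored from the last backup (cancel-based recovery is only one of the possible until-options):

SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;
SQL> ALTER DATABASE OPEN RESETLOGS;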

 

 

 

 

 

 

 

3. Difference between adpatch & opatch?

adpatch is the utility used to apply Oracle Applications (E-Business Suite) patches.
opatch is the utility used to apply database interim patches.

4. Difference between interim patches and a patchset?

An interim patch fixes specific bugs in the software/database and is applied with the opatch utility; opatch updates files in the ORACLE_HOME. A patchset upgrades an existing Oracle release to a new patch level and is applied with Oracle Universal Installer; a patchset also upgrades database objects. Example: upgrading a database to a new patch level (10.2.0.3 to 10.2.0.4).

5. How to check the Opatch version?

Step 1: Set ORACLE_HOME
Step 2: go to the $ORACLE_HOME/OPatch directory
Step 3: execute:
$ ./opatch version

6. How to Apply OPatch?

$ORACLE_HOME/OPatch/opatch apply
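A typical invocation looks roughly like the following; the ORACLE_HOME path and patch number are hypothetical, and the database and listener are normally shut down first:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ cd /stage/1234567          # directory where the patch was unzipped
$ opatch apply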

 

7. What is the difference between opatch napply and opatch apply? What is a CPU patch?

Ans: opatch apply is used when applying a single patch, whereas opatch napply is used when multiple patches are applied in one operation.

Critical Patch Updates are the primary means of releasing security fixes for Oracle products to customers with valid support contracts. They are released on the Tuesday closest to the 17th day of January, April, July and October.

8. Location of Opatch Log files?

$ORACLE_HOME/cfgtoollogs/opatch

9. Can we apply a patch with OPatch if the Oracle Inventory is missing?

No.

10. How to apply a patch with OPatch if the Oracle Inventory is missing?

Need to re-generate Oracle Inventory.

11. Why is the Oracle Inventory required at the time of applying OPatch?

The Oracle Inventory is the place where all the information about an Oracle Home is stored. OPatch needs to read this information in order to apply patches.

12. Location of Inventory file?

The location of inventory is defined in oraInst.loc file. The path for this file is provided while installing Oracle Home.

If we don't supply any path, the file will be present at the central location: /etc/oraInst.loc (on Linux) or /var/opt/oracle/oraInst.loc (on Solaris).

$ cat oraInst.loc
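Typical contents of oraInst.loc look like this (the inventory path and group name are examples):

inventory_loc=/u01/app/oraInventory
inst_group=oinstall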

13. Can we change local inventory location at the time of Installation?

We can create local inventory at any location while installing Oracle Home. We need to use -invPtrLoc variable along with runInstaller command and give the location of oraInst.loc. Otherwise the default path assumed for oraInst.loc is /etc/oraInst.loc.

./runInstaller -invPtrLoc /slot/ems2029/appmgr/apps/tech_st/10.1.2/oraInst.loc

14. Difference between local and global inventory?

The global inventory is where all Oracle Homes on the server are registered. These products can be various Oracle components such as the database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports, or Discoverer.

The inventory inside each Oracle Home is called the local inventory (oracle_home inventory). This inventory holds information about that Oracle Home only.

15. How do I find the list of Oracle products on a machine?

Check the file inventory.xml under ContentsXML in the oraInventory directory (note: if you have multiple global inventories on the machine, check all oraInventory directories).
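For a single Oracle Home, opatch lsinventory can also be used to list the installed products and patches recorded in the inventory:

$ $ORACLE_HOME/OPatch/opatch lsinventory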

16. How to re-create Global Inventory if corrupted?

We can recreate the global inventory on the machine using Oracle Universal Installer and attach the already installed Oracle Home with the -attachHome option:

$./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name"
CLUSTER_NODES="{}"

17. How to roll back a patch?

If you have applied the wrong patch, or the patch did not fix the issue, you can roll the patch back.

Example:

$ORACLE_HOME/OPatch/opatch rollback -id <patch_id>

18. How to apply bundle patches?

A bundle patch contains many individual patches for the Oracle Home. We can apply a bundle patch to the Oracle Home using a single command.

Example: (if you want to apply all patches)

$ORACLE_HOME/OPatch/opatch napply -phBaseDir /export/home/oracle/ -invPtrLoc /export/home/oracle/oraInst.loc

NApply -> Will apply all patches to RDBMS home.

Example: (if you want to apply specify patches)

$ORACLE_HOME/OPatch/opatch napply /export/home/oracle/ -id 1,2,3 -skip_subset -skip_duplicate

This will apply patches 1, 2, and 3 which are under the patch_location directory (/export/home/oracle/ in the example above). OPatch will skip duplicate patches and subset patches (patches under patch_location that are subsets of patches already installed in the ORACLE_HOME).

19. How to know which version is the correct version of Opatch for your RDBMS home?

Read Metalink Note: 357221.1

20. How to create the Oracle Home inventory in 10g?

cd $ORACLE_HOME/oui/bin
./runInstaller -attachHome ORACLE_HOME="/export/home/oracle/oracle/db_1/10.2.0" ORACLE_HOME_NAME="Oracle10g_home1"

21) What are the steps involved during SQL statement execution?

 Parsing SQL Statements in Oracle

One of the first steps Oracle takes in processing a SQL statement is to parse it. During the parsing phase, Oracle will break down the submitted SQL statement into its component parts, determine what type of statement it is (Query, DML, or DDL), and perform a series of checks on it. Two important concepts for the DBA to understand are (1) the steps involved in the parse phase and (2) the difference between a hard parse and a soft parse.

The Syntax Check & Semantic Analysis

The first two functions of the parse phase, Syntax Check and Semantic Analysis, happen for each and every SQL statement within the database.

Syntax Check: Oracle checks that the SQL statement is valid. Does it make sense given the SQL grammar documented in the SQL Reference Manual? Does it follow all of the rules for SQL?

Semantic Analysis: This function of the parse phase takes the Syntax Check one step further by checking whether the statement is valid in light of the objects in the database. Do the tables and columns referenced in the SQL statement actually exist in the database? Does the user executing the statement have access to the objects, and are the proper privileges in place? Are there ambiguities in the statement? For example, consider a statement that references two tables emp1 and emp2, where both tables have a column called name. The statement "select name from emp1, emp2 where…" is ambiguous; the query doesn't know which table to get name from.

Although Oracle treats the first two functions of the parse phase separately (checking the validity of the SQL statement and then checking the semantics to ensure that the statement can be properly executed), the difference is sometimes hard to see from the user's perspective. When Oracle reports an error to the user during the parse phase, it doesn't state "Error within the Syntax Function" or "Error within the Semantics Function".

For example, the following SQL statement fails with a syntax error:

SQL> select from where 4;

select from where 4

*

ERROR at line 1:

ORA-00936: missing expression

Here is an example of a SQL statement that fails with a semantic error:

 

SQL> select * from table_doesnt_exist;

select * from table_doesnt_exist

*

ERROR at line 1:

ORA-00942: table or view does not exist

 

Hard Parse vs. Soft Parse

 

We now consider the next and one of the most important functions of Oracle’s parse phase. The Oracle database now needs to check in the Shared Pool to determine if the current SQL statement being parsed has already been processed by any other sessions.

If the current statement has already been processed, the parse operation can skip the next two functions in the process: Optimization and Row Source Generation. If the parse phase does, in fact, skip these two functions, it is called a soft parse. A soft parse will save a considerable amount of CPU cycles when running your query. On the other hand, if the current SQL statement has never been parsed by another session, the parse phase must execute ALL of the parsing steps. This type of parse is called a hard parse. It is especially important that developers write and design queries that take advantage of soft parses so that parsing phase can skip the optimization and row source generation functions, which are very CPU intensive and a point of contention (serialization). If a high percentage of your queries are being hard-parsed, your system will function slowly, and in some cases, not at all.

Oracle uses a piece of memory called the Shared Pool to enable sharing of SQL statements. The Shared Pool is a piece of memory in the System Global Area (SGA) and is maintained by the Oracle database. After Oracle completes the first two functions of the parse phase (Syntax and Semantic Checks), it looks in the Shared Pool to see if that same exact query has already been processed by another session. Since Oracle has already performed the semantic check, it has already determined:

Exactly what table(s) are involved

That the user has access privileges to the tables.

Oracle will now look at all of the queries in the Shared Pool that have already been parsed, optimized, and generated to see if the hard-parse portion of the current SQL statement has already been done.
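A small illustration of how to encourage soft parses: the two literal statements below have different SQL text and are hard parsed separately, while the bind-variable version shares one cursor and is soft parsed on subsequent executions (emp/empno are just the standard demo names):

SQL> SELECT ename FROM emp WHERE empno = 7369;   -- hard parsed
SQL> SELECT ename FROM emp WHERE empno = 7499;   -- different text, hard parsed again
SQL> VARIABLE v_empno NUMBER
SQL> EXEC :v_empno := 7369
SQL> SELECT ename FROM emp WHERE empno = :v_empno;   -- same text every time, soft parsed after the first run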

 

 

22) Difference between locally managed and dictionary managed tablespace?

– DICTIONARY: Specifies that the tablespace is managed using dictionary tables. This is the default in Oracle8i.

– LOCAL:  Specifies that tablespace is locally managed. This is the default in Oracle9i. Exception for the SYSTEM tablespace

Locally Managed Tablespaces: A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables (like sys.uet$, sys.fet$) in the data dictionary (except for special cases such as tablespace quota information).

Dictionary Managed Tablespaces: In a DMT, Oracle uses data dictionary tables to keep track of the free or used status of blocks. When an extent is allocated or freed for reuse, free space is recorded in the SYS.FET$ table, and used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this often leads to contention. These changes generate rollback information because they update tables (like sys.uet$, sys.fet$) in the data dictionary.

 

Advantages of Locally Managed Tablespaces(LMT) over Dictionary Managed Tablespaces(DMT):

1. Reduced recursive space management

2. Reduced contention on data dictionary tables

3. No rollback generated

4. No coalescing required

 

Converting DMT to LMT:

SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');

PL/SQL procedure successfully completed.

Converting LMT to DMT:

SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');

PL/SQL procedure successfully completed.
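For reference, a locally managed tablespace is created with EXTENT MANAGEMENT LOCAL (names, paths and sizes below are examples):

SQL> CREATE TABLESPACE ts_data
     DATAFILE '/u01/oradata/ORCL/ts_data01.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;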

23. How to determine what the size of the SGA and PGA should be?

SELECT table_name FROM dict WHERE table_name LIKE '%ADVICE%';

V$SGA_TARGET_ADVICE

V$PGA_TARGET_ADVICE
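For example (a selection of columns from these advice views; query them after the instance has run a representative workload):

SQL> SELECT sga_size, sga_size_factor, estd_db_time FROM v$sga_target_advice;
SQL> SELECT pga_target_for_estimate, estd_pga_cache_hit_percentage, estd_overalloc_count FROM v$pga_target_advice;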

 

24. What are the difference between active, inactive and idle session?

Inactive: means the session is connected, and idle. The client is not currently making a database call.(nothing is being done, server waits for tasks to be Assigned from the client.)

ACTIVE : Session currently executing SQL.

Sniped: the session has exceeded the idle_time limit defined in the user profile. The session remains sniped until the client communicates with the database again, at which point it gets "ORA-02396: exceeded maximum idle time, please connect again" and the session is removed from v$session.

Sniped means the system has killed the session, but the client doesn’t know it.
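The session status can be checked in v$session, for example:

SQL> SELECT sid, serial#, username, status, last_call_et FROM v$session WHERE username IS NOT NULL;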

25. What are the contents of alert log file?

Oracle’s alert.log chronologically records messages and errors arising from the daily database operation. Also, there are pointers to trace files and dump files.

These messages include:

  • startups and shutdowns of the instance
  • Messages to the operator console
  • Errors causing trace files.
  • Create, alter and drop SQL statements on databases, tablespaces and rollback segments.
  • Errors when a materialized view is refreshed.
  • ORA-00600 (internal) errors.
  • ORA-01578 errors (block corruption)
  • ORA-00060 errors (deadlocks)

alert.log is a text file that can be opened with any text editor. The directory where it is found is determined by the background_dump_dest initialization parameter: select value from v$parameter where name = 'background_dump_dest';

If the background_dump_dest parameter is not specified, Oracle will write the alert.log into the $ORACLE_HOME/RDBMS/trace directory.

26. What is the difference between connection and session?

A connection is a physical communication pathway between a client process and a database instance. A communication pathway is established using available interprocess communication mechanisms or network software. Typically, a connection occurs between a client process and a server process or dispatcher, but it can also occur between a client process and Oracle Connection Manager (CMAN).

A session is a logical entity in the database instance memory that represents the state of a current user login to a database. For example, when a user is authenticated by the database with a password, a session is established for this user. A session lasts from the time the user is authenticated by the database until the time the user disconnects or exits the database application.

A single connection can have 0, 1, or more sessions established on it. The sessions are independent: a commit in one session does not affect transactions in other sessions.

27. How to enable the trace at a session and what you will get from it?

Enabling Trace at Instance Level

Trace can be enabled when the instance is started by adding the following line to the init.ora file

  sql_trace = TRUE

This will enable trace for all sessions including background processes. Note that enabling this parameter may generate large amounts of trace and consume significant system resources

When the instance is already running, trace can be enabled directly using the ALTER SYSTEM command.

    ALTER SYSTEM SET trace_enabled = TRUE;

This will enable trace for all newly created sessions. Currently executing sessions and background processes will be unaffected.

Instance-wide trace can be disabled again using

    ALTER SYSTEM SET trace_enabled = FALSE;

Enabling Trace at Session Level:

Trace can be enabled at session level using the command :

    ALTER SESSION SET sql_trace = TRUE;

Trace is disabled at session level using

    ALTER SESSION SET sql_trace = FALSE;

Trace is enabled at session level using

    EXECUTE dbms_session.set_sql_trace (TRUE);

Trace is disabled at session level using

    EXECUTE dbms_session.set_sql_trace (FALSE);
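From 10g onwards, DBMS_MONITOR can also be used to trace another session (the SID/serial# values and trace file name below are hypothetical). The resulting trace file in user_dump_dest contains the parse/execute/fetch statistics, wait events and optionally bind values, and is usually formatted with tkprof:

SQL> EXECUTE dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
SQL> EXECUTE dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
$ tkprof orcl_ora_12345.trc trace_report.txt sys=no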

 

28. What is the advantage of RMAN?

  • Ability to perform INCREMENTAL backups
  • Ability to recover one block of data file.
  • Ability to automatically backup CONTROLFILE and SPFILE
  • Ability to delete older ARCHIVED REDOLOG files automatically once they have been backed up.
  • Ability to perform backup and restore with parallelism.
  • Ability to report the files needed for the backup.
  • Ability to RESTART the failed backup, without starting from beginning.
  • Much faster when compared to other TRADITIONAL backup strategies.
  • Unused block compression: In unused block compression, RMAN can skip data blocks that have never been used.
  • No redo log overhead
  • Corrupt block detection: RMAN checks for the block corruption before taking its backup.
  • Maintain backup repository: Backups are recorded in the control file, which is the main repository of RMAN metadata.  Additionally, you can store this metadata in a recovery catalog.
  • Encrypted backups: RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format.

 

29. What is instance?

A database instance is a set of memory structures that manage database files. A database is a set of physical files on disk created by the CREATE DATABASE statement. The instance manages its associated data and serves the users of the database.

Every running Oracle database is associated with at least one Oracle database instance. Because an instance exists in memory and a database exists on disk, an instance can exist without a database and a database can exist without an instance.

Instance is a combination of memory structure and process structure. Memory structure is SGA and Process structure is background processes.

 

 

30. What is block corruption and how to recover it?

SQL> select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO

———- ———- ———- —————— ———

         4       1027          1                  0 ALL ZERO

or we can find the details in the alert.log file too, regarding the file# and block#.
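The view is populated when RMAN actually reads the blocks, for example during a backup or a validate run:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;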

We can either now recover the corrupted blocks using the command

BLOCKRECOVER DATAFILE 4 BLOCK 1027

connect target /

connect catalog rmancatalog/rmancatalog@catalog

run {

allocate channel…;

blockrecover datafile 13 block 443343;

release channel …;

}

Or, if there are a number of data blocks which are corrupted, we can issue a single command

BLOCKRECOVER CORRUPTION LIST

RMAN> blockrecover corruption list;

RMAN restores the affected block from the last backup and then applies the archived logs to it. Recovering one block is usually much faster than restoring the whole datafile, because once the block has been restored nothing else in the datafile needs to be touched. However, Oracle advises that block media recovery is not useful in cases where the extent of the data loss or corruption is not known; in those cases, use datafile recovery instead.

31. What are the different mode of dataguard?

Synchronous transport (SYNC) is also referred to as “zero data loss” method because the LGWR is not allowed to acknowledge a commit has succeeded until the LNS can confirm that the redo needed to recover the transaction has been written at the standby site.

The phases of a transaction in SYNC mode are:

  • The user commits a transaction creating a redo record in the SGA, the LGWR reads the redo record from the log buffer and writes it to the online redo log file and waits for confirmation from the LNS
  • The LNS reads the same redo record from the buffer and transmits it to the standby database using Oracle Net Services, the RFS receives the redo at the standby database and writes it to the SRL
  • When the RFS receives a write complete from the disk, it transmits an acknowledgment back to the LNS process on the primary database which in turns notifies the LGWR that the transmission is complete, the LGWR then sends a commit acknowledgment to the user

This setup really does depend on network performance and can have a dramatic impact on the primary database: network latency directly affects response times, so a low-latency network is essential. The impact can be seen in the wait event "LNS wait on SENDREQ" found in the v$system_event dynamic performance view.

Asynchronous transport (ASYNC) is different from SYNC in that it eliminates the requirement that the LGWR waits for an acknowledgment from the LNS, creating a "near zero" performance impact on the primary database regardless of the distance between the primary and the standby locations. The LGWR will continue to acknowledge commit success even if the bandwidth prevents the redo of previous transactions from being sent to the standby database immediately. If the LNS is unable to keep pace and the log buffer is recycled before the redo is sent to the standby, the LNS automatically transitions to reading and sending from the log file instead of the log buffer in the SGA. Once the LNS has caught up it switches back to reading directly from the buffer in the SGA.

The log buffer ratio is tracked via the view X$LOGBUF_READHIST a low hit ratio indicates that the LNS is reading from the log file instead of the log buffer, if this happens try increasing the log buffer size.

The drawback with ASYNC is the increased potential for data loss, if a failure destroys the primary database before the transport lag is reduced to zero, any committed transactions that are part of the transport lag are lost. So again make sure that the network bandwidth is adequate and that you get the lowest latency possible.

 

 

The protection modes are:

Maximum Performance: This mode requires ASYNC redo transport so that the LGWR process never waits for acknowledgment from the standby database. Also note that Oracle no longer recommends the ARCH transport method that was used for Maximum Performance in previous releases.

Note that you will probably lose data if the primary fails and full synchronization has not occurred; the amount of data loss is dependent on how far behind the standby is in processing the redo.

This is the default mode.

Maximum Availability: Its first priority is to be available; its second priority is zero data loss protection, so it requires the SYNC redo transport. In the event that the standby server is unavailable, the primary will wait the time specified in the NET_TIMEOUT parameter before giving up on the standby server and allowing the primary to continue to process. Once the connection has been re-established, the primary will automatically resynchronize the standby database.

When the NET_TIMEOUT expires, the LGWR process disconnects from the LNS process, acknowledges the commit and proceeds without the standby. Processing continues until the current ORL is complete and the LGWR cycles into a new ORL; a new LNS process is started and an attempt to connect to the standby server is made. If it succeeds, the new ORL is sent as normal; if not, LGWR disconnects again until the next log switch. The whole process repeats at every log switch, until hopefully the standby database becomes available again. In the background, if any archive logs have been created during this time, the ARCH process will continually ping the standby database until it comes online.

You might have noticed there is a potential loss of data if the primary goes down while the standby database has also been down for a period of time and there has been no resynchronization. This is similar to Maximum Performance, but you do give the standby server a chance to respond using the timeout.

Maximum Protection: The priority for this mode is data protection, even to the point that it will affect the primary database. This mode uses the SYNC redo transport and the primary will not issue a commit acknowledgment to the application unless it receives an acknowledgment from at least one standby database, basically the primary will stall and eventually abort preventing any unprotected commits from occurring. This guarantees complete data protection, in this setup it is advised to have two separate standby databases at different locations with no Single Point Of Failures (SPOF’s), they should not use the same network infrastructure as this would be a SPOF.
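The protection mode is set on the primary database with one of the following statements (for Maximum Protection the primary traditionally has to be mounted, not open, when the command is issued):

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;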

32. What happens during a switchover?

  1. Notifies the primary database that a switchover is about to occur
  2. Disconnect all users from the primary database
  3. Generate a special redo record that signals the End of Redo (EOR)
  4. Converts the primary database into a standby database
  5. Once the standby database applies the final EOR record, guaranteeing that no data has been lost, converts the standby database into the primary database.

The new standby database (the old primary) starts to receive the redo records and continues processing until we switch back again. It is important to remember that both databases receive the EOR record, so both databases know the next redo that will be received.
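A minimal manual (SQL*Plus) switchover sketch for a physical standby, assuming no Data Guard Broker (the Broker performs these steps with a single SWITCHOVER command):

-- On the primary:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
-- On the old standby (new primary):
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
SQL> ALTER DATABASE OPEN;
-- On the new standby (old primary), restart redo apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;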

 

34. Difference Between full backup and Level-0 backup

A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup. A full backup can never be part of an incremental backup strategy, i.e. it cannot be the parent for a subsequent incremental backup.

35. What is the advantage of datapump over traditional export, import?

1) Impdp/Expdp has self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3) Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance. Data Pump operations can take advantage of the server’s parallel processes to read or write multiple data streams simultaneously.

4) In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod followed by impdp full=y, whereas original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client.

6) Expdp/Impdp operates on a group of files called a dump file set rather than on a single sequential dump file.

7) When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

8) Expdp/Impdp consumes more undo tablespace than original Export and Import.

 

9) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

10) Datapump has a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations. Datapump allows you to disconnect and reconnect to the session.

Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress.

11) Another neat development is the ability to run impdp over a network link (DB link). This feature totally eliminates the need for a dedicated filesystem to store expdp dumps: it does the refresh over the DB link.

12) In addition, the REMAP_* parameters in impdp are a blessing. Original exp/imp does not have them, and it was a pain to change the tablespace for a schema when duplicating it as a new one.

13) impdp vs imp : impdp provides a good option while replacing the data using table_exists_action.

14) The real beauty of Data Pump is NETWORK_LINK that allows you to import data directly into the schema of the remote user without accessing directory.
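A minimal pair of commands tying several of these points together (credentials and schema name are examples; DATA_PUMP_DIR is the default directory object in 10g and later):

$ expdp system/password schemas=prod directory=DATA_PUMP_DIR dumpfile=prod.dmp logfile=prod_exp.log
$ impdp system/password schemas=prod directory=DATA_PUMP_DIR dumpfile=prod.dmp table_exists_action=replace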

 

36. Difference between hot backup and RMAN backup?

Hot backup

  • User managed backup.
  • Everything we have to do manually.
  • All allocated blocks are copied.
  • More redo generation.
  • We cannot get corrupted block information

RMAN Backup

  • Server Managed backup.
  • Everything done by Oracle server.
  • Only used blocks are copied (and only changed blocks for incremental backups).
  • Backups can be parallelized.
  • We can get corrupted block information.
  • Less redo generation.

 

37. What is the difference between shared and dedicated server?

A dedicated server process, which services only one user process

A shared server process, which can service multiple user processes

Difference between dedicated and shared server configuration:

In order to communicate with oracle database, oracle users need a program such as SQL *Plus which can issue SQL statements and few processes which can execute these SQL statements. These processes are divided into User Process, Server Process and Background Processes. User process runs user application like SQL *Plus. Server process manages oracle user process’s requests. Server process executes SQL statements and returns result to user process. Background processes are the core of oracle database which handle over all database operations.

Dedicated server connection:

A dedicated server process has a one-to-one relationship with a user process. A user connected with a dedicated server connection has that resource dedicated to it: whether or not the user is doing work, the connection remains allocated to that user. Thus, if your user is idle and resources are scarce, this user may block other users from connecting or from having their requests executed, as their work gets queued. This type of connection is preferable when you have plenty of resources, or for administrative users such as SYS and for batch jobs.

Shared server connection:

The shared server architecture consists of a listener process, dispatcher processes, a request queue, shared server processes and response queues. The network listener process listens for user process requests. If a user process request requires a dedicated server process, the listener starts a dedicated server process. If the request can be assigned to a shared server process, the request is forwarded to a dispatcher process. A shared server configuration requires at least one dispatcher process. The dispatcher process places the request on the request queue. The request queue is created in the SGA and shared by all dispatcher processes. One of the free shared server processes picks up the request from the request queue. After processing the request, the shared server process places the result on the response queue. Each dispatcher process has its own response queue in the SGA. The dispatcher knows which user process placed the request, so the response is returned to that user process.

With a shared server the user connection is shared: users connect through dispatchers, and when a user is idle its resources can be used by other users, which lessens the load on the system. This is most useful in an environment where users are mostly idle, such as an order entry system.

 

 

 

38. How to configure shared server in oracle database?

SHARED_SERVERS Initialization Parameter: SHARED_SERVERS is the only parameter necessary to enable a shared server. Its value specifies how many shared servers are to be started.

Use following SQL statement to enable shared server.

Alter System Set SHARED_SERVERS = 5;

This command will start five shared servers, oracle database will also start a dispatcher process. Remember we did not configure any dispatcher. Oracle database will create dispatcher on its own.

To configure Shared server at database startup, use following setting in initialization parameter file

SHARED_SERVERS=5;

You can query V$SHARED_SERVER view to confirm changes.

MAX_SHARED_SERVERS Initialization Parameter

The Oracle database keeps the number of shared servers at least equal to the SHARED_SERVERS parameter value, so SHARED_SERVERS defines the lower limit of shared server processes. PMON starts as many shared servers as needed, but you can restrict the upper limit by using MAX_SHARED_SERVERS.

For example, use following setting in parameter file to restrict maximum shared Server to 10;

MAX_SHARED_SERVERS= 10;

SHARED_SERVER_SESSIONS Initialization Parameter

Use this parameter to limit the maximum number of concurrent shared server sessions. The value of this parameter should be less than the SESSIONS initialization parameter. You may use this parameter to make sure that resources are always available for a few dedicated server sessions.

DISPATCHERS Initialization Parameter:

DISPATCHERS parameter is used to create dispatcher processes. For example, use following setting in parameter file to create Dispatcher.

Use following setting to create 3 dispatchers

DISPATCHERS=”(PROTOCOL=TCP)(DISPATCHERS=3)”

Query V$DISPATCHER view to find out details about currently running dispatchers.

 

 

Client Side Setting for Shared Server:

When the Oracle database has been configured as a shared server, by default all connections are handled by shared servers. The net service name on the client can be modified to request a shared server or a dedicated server by using SERVER=SHARED or SERVER=DEDICATED in the connect descriptor.
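For example, a connect descriptor in tnsnames.ora that explicitly requests a shared server (replace SHARED with DEDICATED to force a dedicated connection; host and service name are examples):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL)(SERVER = SHARED))
  )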

 

39. What is the difference between undo tablespace and temporary tablespace?

Temporary tablespaces:

Temporary tablespaces are logical units within an Oracle database that are used for sort operations.

For e.g.: if you join two large tables, and Oracle cannot do the sort in memory (see SORT_AREA_SIZE initialization parameter), space will be allocated in a temporary tablespace for doing the sort operation. Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, Select DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, Sort-Merge joins, etc.

Oracle also uses temporary tablespaces to store global temporary tables.

In addition to SELECT statements with clauses like DISTINCT, ORDER BY, UNION, INTERSECT etc, Oracle might use temporary tablespace for creating indexes and analyzing objects.

The DBA should assign a temporary tablespace to each user in the database to prevent them from allocating sort space in the SYSTEM tablespace.

It is important to note that TEMPORARY TABLESPACES have volatile or non-persistent data. You don’t need to include temporary tablespaces in your backup policy and can just create them after you have performed a recovery.
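A minimal example of creating a temporary tablespace and assigning it (file path, size and user are examples):

SQL> CREATE TEMPORARY TABLESPACE temp02 TEMPFILE '/u01/oradata/ORCL/temp02.dbf' SIZE 500M;
SQL> ALTER USER scott TEMPORARY TABLESPACE temp02;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp02;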

Undo tablespaces:

Undo tablespaces contain the before image of data, which can be used to roll back a particular transaction.

Broadly, undo tablespaces are used for

  • Roll back transactions when a ROLLBACK statement is issued

  • Recover the database
  • Provide read consistency
  • Analyze data as of an earlier point in time by using Flashback Query
  • Recover from logical corruptions using Flashback features

 

 

40. What is PCT free and PCT used block?

The PCTFREE parameter specifies the percentage of space in each data block that is reserved for growth resulting from updates of rows in that data block.

This parameter has a default value of 10 percent. For example, the value of the PCTFREE parameter is specified as 20 in a CREATE TABLE statement. This indicates that inserts to the block should stop as soon as free space drops to 20 percent or less. The free space thereafter can only be used for updates.

The PCTUSED parameter represents the minimum percentage of the used space that the Oracle server tries to maintain for each data block of the table. This parameter has a default value of 40 percent.

When a data block is filled to the limit determined by the value of the PCTFREE parameter, Oracle considers the block unavailable for the insertion of new rows. The block is unavailable for the insertion of new rows until the percentage filled by the data of that block falls below the value of the PCTUSED parameter. Until the percentage of the block falls below the value of the PCTUSED parameter, Oracle uses the free space of the data block only for updating the rows contained in the data block.

For example, if PCTUSED is defined as 40 percent, the block is reused for inserts as soon as utilization drops to less than 40 percent. Inserts continue until utilization reaches 80 percent and the cycle repeats.
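These parameters are set at segment level, for example (table name and values are examples; PCTUSED is only honoured in manual segment space management tablespaces):

SQL> CREATE TABLE emp_history
     (empno NUMBER, change_date DATE)
     PCTFREE 20 PCTUSED 40
     TABLESPACE users;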

 

 

 

 

 

 

 

 

 

 

 

 

 

 


FREE LIST:

Each segment has one or more freelist (in the segment header) that keeps track of data blocks under the high-water mark.

When records are inserted into the segment, Oracle examines the freelist to determine which data blocks have available space to store the new rows. When rows are deleted (or updated) and the used space in the data block falls below the threshold value (PCTUSED), the data block is returned to the freelist.

 

42. What happens during an index rebuild?

When you use an online index rebuild Oracle has to create a log of all changes you make to the table while the rebuild is running. Once the log is set up, Oracle creates a new index using a read-consistent copy of the data in the table, then copies the logged changes into this index, and finishes off by tidying up the data dictionary, dropping temporary segments, and dropping the log.

This process locks the table twice, by the way: once while getting started and creating the log, and then again when finishing off and dropping the log. If you read Richard Foote's blog regularly you will know that these locks are only held for a short time but can be hard to acquire and may lead to a randomly long delay in the rebuild.

For my purposes, though, the critical feature is  the object that holds the logging information. This is created as an index organized table (IOT) with a name like sys_journal_NNNNN where NNNNN is the object_id of the index you are trying to rebuild. (You can check this by enabling sql_trace just before doing a rebuild.)

One of the problems with online rebuild is that if you are doing an online rebuild for a table with high DML activity and happen to cancel the rebuild (Ctrl+C), the journal table does not get dropped, as SMON is unable to clear it (due to the high DML on the base table). Because of this you won't be able to drop or rebuild the index (even offline) and will get an error message that an online rebuild is in progress. You have to wait until SMON clears it. So use the online rebuild option judiciously and try never to cancel it.

Rebuilding an index is beneficial when the index is so much smaller after the rebuild that the overhead of reading the index is substantially reduced. If the index is smaller, the same amount of index-related data can be read with fewer I/Os, and fewer I/Os mean better performance.

During the creation or rebuilding of an index online, Oracle still requires two associated table locks on the base table, at the start and end of the indexing process. If there's an active transaction on the base table at the time one of these locks is required, the indexing process will still hang, as it did previously, until all these prior active transactions have completed. No change so far.

However, if the indexing process has been locked out and subsequent transactions relating to the base table start afterwards, these transactions will no longer in turn be locked out by the indexing table locks and are able to complete successfully. The indexing process no longer impacts other concurrent transactions on the base table, it will be the only process potentially left hanging while waiting to acquire its associated lock resource.

This means it may not be quite so “risky” to urgently introduce that new index or rebuild that troublesome index during core business hours due to the reduced locking implications introduced in 11g.
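The rebuild itself is a single statement; the index name below is an example:

SQL> ALTER INDEX emp_idx REBUILD ONLINE;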

 

 

 

 

 

 

 

 

43. What is the content of sqlnet.ora, tnsnames.ora and listner.ora file?

 

In its most basic form, Oracle uses three files (listener.ora, tnsnames.ora & sqlnet.ora) for network configuration.

 

LISTENER.ORA

The listener.ora file contains server side network configuration parameters. It can be found in the ORACLE_HOME/network/admin directory on the server. Here is an example of a listener.ora file from Windows 2000:

 

# LISTENER.ORA Network Configuration File: C:\Oracle\817\network\admin\listener.ora

# Generated by Oracle configuration tools.

 

LISTENER =

(DESCRIPTION_LIST =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))

)

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))

)

)

(DESCRIPTION =

(PROTOCOL_STACK =

(PRESENTATION = GIOP)

(SESSION = RAW)

)

(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 2481))

)

)

 

SID_LIST_LISTENER =

(SID_LIST =

(SID_DESC =

(SID_NAME = PLSExtProc)

(ORACLE_HOME = C:\Oracle\817)

(PROGRAM = extproc)

)

(SID_DESC =

(GLOBAL_DBNAME = ORCL.WORLD)

(ORACLE_HOME = C:\Oracle\817)

(SID_NAME = ORCL)

)

)

 

After the listener.ora file is amended the listener should be restarted or reloaded to allow the new configuation to take effect:

 

C:> lsnrctl stop

C:> lsnrctl start

 

C:> lsnrctl reload

 

TNSNAMES.ORA

The tnsnames.ora file contains client side network configuration parameters. It can be found in the ORACLE_HOME/network/admin or ORACLE_HOME/net80/admin directory on the client. This file will also be present on the server if client style connections are used on the server itself. Basically the listener.ora file is used on a server, and it helps make the connections between client requests and the database instance(s) on the server. The tnsnames.ora file is used on the client, and it points to a particular entry in a listener.ora file on a server.

 

Here is an example of a tnsnames.ora file from Windows 2000:

 

# TNSNAMES.ORA Network Configuration File: C:\Oracle\817\network\admin\tnsnames.ora

# Generated by Oracle configuration tools.

 

ORCL.WORLD =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))

)

(CONNECT_DATA =

(SERVICE_NAME = ORCL.WORLD)

)

)

 

INST1_HTTP.WORLD =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))

)

(CONNECT_DATA =

(SERVER = SHARED)

(SERVICE_NAME = MODOSE)

(PRESENTATION = http://admin)

)

)

 

EXTPROC_CONNECTION_DATA.WORLD =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))

)

(CONNECT_DATA =

(SID = PLSExtProc)

(PRESENTATION = RO)

)

)

 

SQLNET.ORA

The sqlnet.ora file contains client side network configuration parameters. It can be found in the ORACLE_HOME/network/admin or ORACLE_HOME/net80/admin directory on the client. This file will also be present on the server if client style connections are used on the server itself. Here is an example of an sqlnet.ora file from Windows 2000:

 

# SQLNET.ORA Network Configuration File: C:\Oracle\817\network\admin\sqlnet.ora

# Generated by Oracle configuration tools.

 

SQLNET.AUTHENTICATION_SERVICES= (NTS)

 

NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)

NAMES.DEFAULT_DOMAIN = WORLD

 

For a tnsnames.ora file to work properly, we also need the sqlnet.ora file. The sqlnet.ora file contains information which the user process (the user connection) needs to find exactly where a database server is located. The user process normally reads the tnsnames.ora file and then the sqlnet.ora file, finds the complete database server information (host, port, service/database name, etc.), and then talks to the listener on the database server. The listener then sees to it that a connection is opened to the database.
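A quick way to check that these files resolve a service name correctly from the client is tnsping, which reads sqlnet.ora/tnsnames.ora and contacts the listener (ORCL is an example alias):

$ tnsping ORCL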

 

44. Does dictionary cache contain dynamic view?

"Data dictionary" in an Oracle database is the component which keeps information about the database. The data dictionary is a collection of read-only tables maintained by the Oracle database. Data inside these tables is modified when DDL commands are issued. This data is kept in an internal, cryptic format and should not be modified by any user. The data dictionary holds detailed information about schema object structures, space allocation, user privileges, auditing information, default values of columns, etc.

Structure of Data Dictionary Cache:

The data dictionary is organized into base tables and user-accessible views. These tables and views are stored in the SYSTEM tablespace and owned by the Oracle user SYS. As the SYSTEM tablespace is always online, the data dictionary is always available.

As the data in the base tables is in an internal, cryptic format, users cannot and should not directly access information from these tables. Oracle provides views to fetch the information; these views decode the base table data and present it in a useful form, and they change with every new Oracle release.

Oracle also contains a set of dynamic performance views. These views are continuously updated and contain the latest information about the database. They are built on in-memory structures and are available even if the database is not open. The information from dynamic performance views is frequently accessed during the database recovery process. For example, if you open the database in mount stage, the DBA_TABLESPACES view is not available because the database is not open, but you can query the V$TABLESPACE dynamic view to get information about tablespaces.

Dynamic performance view start with V$ and GV$. While other Data Dictionary views start with USER, ALL or DBA prefix.

Oracle Dictionary Cache:

Dictionary Cache is place in Shared Pool which contains Data Dictionary. Oracle frequently requires Data Dictionary. Most parts of Data Dictionary are cached into Dictionary Cache. Oracle utilizes Dictionary Cache information in query parsing.

The dictionary cache is also called the row cache because data inside it is maintained as rows instead of buffers. Note that the dynamic performance (V$) views themselves are not stored in the dictionary cache; they are built over in-memory structures, while the dictionary cache holds rows read from the data dictionary base tables.

 

45. What is the differences between static and dynamic listener registration?

1) The relevant difference is that if you want to remotely start up the database, you need a statically registered service; otherwise, since the database isn't up, the service wouldn't be registered with the listener.

2) Use dynamic registration for all application connects – all of them. Use static only if and when you need to remotely start the database over the network.

 

 

 

 

 

 

 

 

 

RMAN Concepts:

1. RMAN keeps a record of all target databases that are registered with catalog database.

2. When RMAN is invoked to back up or recover a database, it verifies the target database against the databases registered in the RMAN catalog.

3. If it matches, RMAN establishes two server sessions with the target database.

4. A channel is a remote procedure call to the target database using a PL/SQL interface to perform backup and recovery operations.

5. RMAN compiles the command and executes it on the target database with a summary of actions. The files backed up to are called backup pieces. To back up to a tape drive/tape library we need a media manager.

6. Execution of every RMAN backup command produces a backup set, which is a logical grouping of one or more physical files called backup pieces.

7. An RMAN backup is a content backup, not a mirror image: only used blocks are backed up. Thus an RMAN backup is smaller and much faster when compared to other backups.

Advantages:

1. Centralized backup and recovery across the enterprise's databases.

2. Incremental backups –only those blocks which are changed since last backup

3. Corrupt block identification

4. Automated scheduled backups

5. Crosscheck – after a backup, RMAN can verify that the backups recorded in its repository still exist and are usable.

6. Can generate logs on backup and recovery.

7. Can report the summary information.

8. Can store preconfigured scripts which can be run at any time.

9. A tag can be assigned to a backup file or backup set and can be used at restore time instead of file names.

10. Tablespace point in time recovery is possible .

11. Can create a duplicate of the production database for developers/QA.

12. Multiple destinations can be mentioned.

13. Cleanup procedures can be implemented after a successful RMAN backup, for example getting rid of obsolete archivelog files.
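For example, a typical post-backup cleanup could look like this (the 7-day window is just an example and assumes a retention policy has been configured):

RMAN> CROSSCHECK BACKUP;
RMAN> DELETE NOPROMPT OBSOLETE;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';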

Pre-requisites:

1. Every Target must be registered in the catalog database

2. In the catalog database a separate tablespace must be created to hold the catalog schema owner's objects.

3. A user should be created in the catalog database with connect, resource, recovery_catalog_owner privileges.
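A minimal sketch of these pre-requisites (tablespace, user, passwords and net service names are examples):

SQL> CREATE TABLESPACE rman_cat DATAFILE '/u01/oradata/CATDB/rman_cat01.dbf' SIZE 200M;
SQL> CREATE USER rcat_owner IDENTIFIED BY rcat_pwd DEFAULT TABLESPACE rman_cat QUOTA UNLIMITED ON rman_cat;
SQL> GRANT connect, resource, recovery_catalog_owner TO rcat_owner;
$ rman target / catalog rcat_owner/rcat_pwd@catdb
RMAN> CREATE CATALOG;
RMAN> REGISTER DATABASE;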

Block Change Tracking
As data blocks change, the background process change tracking writer (CTWR) tracks all the changes made to the data blocks in a private area of memory. When we commit these changes, the block change tracking information is copied to a shared area in the large pool called the CTWR buffer. The CTWR process writes the information from the CTWR RAM buffer to the change tracking file. Without BCT enabled, or in cases when the change tracking information cannot be used, RMAN has to read every block in the datafile during an incremental backup. Each block contains the system change number (SCN) of its last modification. The block is copied only if its SCN is higher than or equal to the base backup SCN. To be precise, the backup SCN is a checkpoint SCN that was made right before the backup. Since usually only a handful of blocks is changed between incremental backups, RMAN does a lot of useless work reading blocks not required for the backup.
Block change tracking provides a way to identify the blocks required for backup without scanning the whole datafile. After that RMAN need only read blocks that are really required for this incremental backup.

How to enable/disable block change tracking?
Enable
SQL> alter database enable block change tracking;   (syntax when OMF is enabled)
Or
SQL> alter database enable block change tracking using file 'os_file_name';
The USING FILE syntax allows you to define the location of the change tracking file on the OS; you can omit it when using OMF (Oracle Managed Files).
By default the change tracking file is created as an Oracle Managed File (OMF) in the location pointed to by the DB_CREATE_FILE_DEST parameter.
Disable
SQL> alter database disable block change tracking;

How to monitor the status of block change tracking?
SQL> select filename, status, bytes from v$block_change_tracking;

How can we view the size of the CTWR buffer?
SQL> select * from v$sgastat where name like 'CTWR%';

Incremental Backups using RMAN:

Incremental Backups: RMAN incremental backups back up only those data file blocks that have changed since the last backup. Incremental backups can be taken at the database, tablespace or datafile level.
The main theme of an incremental backup is to back up only those data blocks that have changed since the previous backup.
The reasons for including incremental backups in a backup strategy are:
1. To reduce the amount of time needed for taking backups.
2. To save network bandwidth when backing up over the network.
3. For better backup performance.
4. To reduce backup size for NOARCHIVELOG databases: instead of a full database backup one can take incremental backups.

Incremental Backup Algorithm:
Data is stored in data blocks. Each data block in a data file contains a system change number (SCN), which is the SCN at which the most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it with the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.

Level 0 and Level 1 Backups:
Level 0: A level 0 incremental backup is just like a full backup; it is the base of the subsequent incremental backups and copies all blocks containing data, backing the data files up into a backup set.
The main difference between a full backup and an incremental level 0 backup is that a full backup is never included in an incremental backup strategy.
Level 1: Backs up only the data blocks modified since the previous level 0 or level 1 backup.

Types of Level 1 incremental backups:
Differential backup: backs up all blocks changed after the most recent incremental backup at level 1 or 0.
Cumulative backup: backs up all blocks changed after the most recent incremental backup at level 0.
By default, incremental backups are differential.
RMAN> Backup Incremental level 1 Database;   (example of a differential backup)
RMAN> Backup Incremental level 1 cumulative database;   (blocks changed since level 0 – a cumulative incremental backup)

Incremental Backup Strategy:
Choose a backup scheme according to an acceptable MTTR (mean time to recover). For example, you can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 is taken weekly, and a differential level 1 is taken daily.
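As a sketch, the three-level scheme above maps to the following RMAN commands, run by whatever scheduler you use (cron, DBMS_SCHEDULER, OEM – the exact calendar is an assumption):

Monthly:  BACKUP INCREMENTAL LEVEL 0 DATABASE;
Weekly:   BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
Daily:    BACKUP INCREMENTAL LEVEL 1 DATABASE;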

How often should we take full or incremental backups?
When deciding how often to take full or incremental backups, a good rule of thumb is to take a new level 0 backup whenever 50% or more of the data has changed. If the rate of change to your database is predictable, then you can observe the size of your incremental backups to determine when a new level 0 backup is appropriate.
The following query will help you to determine the rate of changed blocks.

SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;

Making Incremental Backups : Backup Incremental
After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a level 0 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 0 DATABASE;
This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or level 0 backup:
BACKUP INCREMENTAL LEVEL 1
TABLESPACE SYSTEM
DATAFILE 'ora_home/oradata/trgt/tools01.dbf';
This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since the most recent level 0 backup.
BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE
TABLESPACE users;


ASM :

Automatic Storage Management (ASM) is Oracle's logical volume manager for database files. It uses OMF (Oracle Managed Files) to name and locate the database files. It can use raw disks, or files in a filesystem that can be made to look like disks, as long as the device is presented as raw. ASM uses its own database instance to manage the disks; this instance has its own processes and its own pfile or spfile, and it uses ASM disk groups to manage disks as one logical unit.

Cluster Synchronization Services Requirements for ASM

The Cluster Synchronization Services (CSS) daemon provides cluster services for ASM, communication between the ASM and database instances, and other essential services. When DBCA creates a database, the CSS daemon is usually started and configured to start again on reboot. You must ensure that the CSS daemon is running before you start the ASM instance.

ASM registers its name and disks with the RDBMS via Cluster Synchronization Services (CSS). This is why the Oracle cluster services must be running, even if the node and instance are not clustered. The ASM instance must be in MOUNT mode for an RDBMS instance to use it, and only the instance type is required in its parameter file.


The INSTANCE_TYPE initialization parameter is the only required parameter in the ASM instance parameter file. The ASM* parameters use suitable defaults for most environments. You cannot use parameters with names that are prefixed with ASM* in database instance parameter files.

 

ASM Parameters:

ASM_DISKGROUPS

The ASM_DISKGROUPS initialization parameter specifies a list of the names of disk groups that an ASM instance mounts at startup. Oracle ignores the value that you set for ASM_DISKGROUPS when you specify the NOMOUNT option at startup or when you issue the ALTER DISKGROUP ALL MOUNT statement. The default value of the ASM_DISKGROUPS parameter is a NULL string. If the parameter value is NULL or is not specified, then ASM does not mount any disk groups.

SQL> ALTER SYSTEM SET ASM_DISKGROUPS = 'CONTROLFILE, DATAFILE, LOGFILE, STANDBY';

ASM_DISKSTRING

The ASM_DISKSTRING initialization parameter specifies a comma-delimited list of strings that limits the set of disks that an ASM instance discovers. The discovery strings can include wildcard characters. Only disks that match one of the strings are discovered. The same disk cannot be discovered twice.

The default value of the ASM_DISKSTRING parameter is a NULL string. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read and write access. The default search path is platform-specific. Refer to your operating system specific installation guide for more information about the default search path.

ASM cannot use a disk unless all of the ASM instances in the cluster can discover the disk through one of their own discovery strings. The names do not need to be the same on every node, but all disks must be discoverable by all of the nodes in the cluster. This may require dynamically changing the initialization parameter to enable adding new storage.

 

ASM_POWER_LIMIT

The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing. The default value is 1 and the range of allowable values is 0 to 11 inclusive. A value of 0 disables rebalancing. Higher numeric values enable the rebalancing operation to complete more quickly, but might result in higher I/O overhead.

ASM_PREFERRED_READ_FAILURE_GROUPS

The ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter value is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance. This parameter is generally used only for clustered ASM instances and its value can be different on different nodes.

The ASM_PREFERRED_READ_FAILURE_GROUPS parameter setting is instance specific. This parameter is only valid for clustered ASM instances and the default value is NULL.

DB_CACHE_SIZE

You do not need to set a value for the DB_CACHE_SIZE initialization parameter if you use automatic memory management.

The setting for the DB_CACHE_SIZE parameter determines the size of the buffer cache. This buffer cache is used to store metadata blocks. The default value for this parameter is suitable for most environments.


DIAGNOSTIC_DEST

The DIAGNOSTIC_DEST initialization parameter specifies the directory where diagnostics for an instance are located. The value for an ASM instance is of the form:

diagnostic_dest/diag/asm/db_name/instance_name

For an ASM instance, db_name defaults to +asm.

 

INSTANCE_TYPE

The INSTANCE_TYPE initialization parameter must be set to ASM for an ASM instance. This is a required parameter and cannot be modified. The following is an example of the INSTANCE_TYPE parameter in the initialization file:

INSTANCE_TYPE = ASM

LARGE_POOL_SIZE

You do not need to set a value for the LARGE_POOL_SIZE initialization parameter if you use automatic memory management.

The setting for the LARGE_POOL_SIZE parameter is used for large allocations. The default value for this parameter is suitable for most environments.

PROCESSES

You do not need to set a value for the PROCESSES initialization parameter if you use automatic memory management.

The PROCESSES initialization parameter affects ASM, but generally you do not need to modify the setting. The default value provided is usually suitable.

 

REMOTE_LOGIN_PASSWORDFILE

The REMOTE_LOGIN_PASSWORDFILE initialization parameter specifies whether the ASM instance checks for a password file. This parameter operates the same for ASM and database instances.

 

SHARED_POOL_SIZE

You do not need to set a value for the SHARED_POOL_SIZE initialization parameter if you use automatic memory management.

The setting for the SHARED_POOL_SIZE parameter determines the amount of memory required to manage the instance. The setting for this parameter is also used to determine the amount of space that is allocated for extent storage. The default value for this parameter is suitable for most environments.

 

The main parameters in the instance parameter file will be:

  • instance_type – there are two types: RDBMS or ASM
  • instance_name – the name of the ASM instance
  • asm_power_limit – maximum speed of disk rebalancing; the default is 1 and the range is 0 – 11 (0 disables rebalancing, 11 is the fastest)
  • asm_diskstring – the location where Oracle will look during disk discovery
  • asm_diskgroups – the disk groups that will be mounted automatically when the ASM instance is started
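Putting these together, a minimal sketch of an ASM instance pfile (the disk string, disk group names and pool size are placeholders; defaults cover the rest):

instance_type             = ASM
instance_name             = +ASM
asm_power_limit           = 1
asm_diskstring            = '/dev/oracleasm/disks/*'
asm_diskgroups            = DATA, FRA
remote_login_passwordfile = EXCLUSIVE
large_pool_size           = 12M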

 

 

Disk Group Attributes

 

  • AU_SIZE

The contents of ASM files are stored in a disk group as a set, or collection, of data extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU). To accommodate increasingly larger files, ASM uses variable size extents.

The ASM coarse striping is always equal to the disk group AU size, but the fine striping size always remains 128KB in any configuration. The AU size is determined at creation time with the allocation unit size (AU_SIZE) disk group attribute. The values can be 1, 2, 4, 8, 16, 32, and 64 MB.

 

  • COMPATIBLE.ASM

The value for the disk group COMPATIBLE.ASM attribute determines the minimum software version for an ASM instance that uses the disk group. This setting also determines the format of the data structures for the ASM metadata on the disk. The format of the file contents is determined by the database instance. For ASM in Oracle Database 11g, 10.1 is the default setting for the COMPATIBLE.ASM attribute.

 

  • COMPATIBLE.RDBMS

The value for the disk group COMPATIBLE.RDBMS attribute determines the minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group. For example, if the database COMPATIBLE initialization parameter is set to 11.1.0, then COMPATIBLE.RDBMS can be set to any value between 10.1 and 11.1 inclusively. For ASM in Oracle Database 11g, 10.1 is the default setting for the COMPATIBLE.RDBMS attribute

 

  • DISK_REPAIR_TIME

Restoring the redundancy of an ASM disk group after a transient disk path failure can be time consuming. This is especially true if the recovery process requires rebuilding an entire ASM failure group. ASM fast mirror resync significantly reduces the time to resynchronize a failed disk in such situations. When you replace the failed disk, ASM can quickly resynchronize the ASM disk extents.

The default DISK_REPAIR_TIME attribute value is an estimate that should be adequate for most environments. However, make sure that the attribute value is set to the amount of time that you think is necessary in your environment to fix any transient disk error and that you are willing to tolerate reduced data redundancy.

ALTER DISKGROUP dg01 SET ATTRIBUTE 'disk_repair_time' = '4.5h';

ALTER DISKGROUP dg01 SET ATTRIBUTE 'disk_repair_time' = '270m';
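To verify how DISK_REPAIR_TIME (or any other attribute) is currently set, a quick sketch against V$ASM_ATTRIBUTE on the ASM instance (available from 11g; the disk group name DG01 is a placeholder):

SELECT g.name AS diskgroup, a.name, a.value
FROM   v$asm_diskgroup g, v$asm_attribute a
WHERE  a.group_number = g.group_number
AND    g.name = 'DG01';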

 

 

ASM Background Processes

The following background processes are an integral part of Automatic Storage Management:

  • ARBn performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these processes running at a time, named ARB0, ARB1, and so on.
  • ASMB runs in a database instance that is using an ASM disk group. ASMB communicates with the ASM instance, managing storage and providing statistics. ASMB can also run in the ASM instance; it runs there when the ASMCMD cp command runs, or when the database instance first starts if the SPFILE is stored in ASM.
  • GMON maintains disk membership in ASM disk groups.
  • MARK marks ASM allocation units as stale following a missed write to an offline disk. This essentially tracks which extents require resync for offline disks.
  • RBAL runs in both database and ASM instances. In the database instance, it does a global open of ASM disks. In an ASM instance, it also coordinates rebalance activity for disk groups.

WHY ASM ?

  • Adding or removing storage to/from ASM is very easy and does not require careful planning (as is the case with volume managers and file systems). After adding storage, ASM will automatically "rebalance" so that all disks are utilized equally. This again increases performance.
  • Provides automatic load balancing over all the available disks, thus reducing hot spots in the file system.
  • Prevents fragmentation of disks, so you don't need to manually relocate data to tune I/O performance.
  • Uses redundancy features available in intelligent storage arrays.
  • ASM provides striping and mirroring (fine and coarse grained – see below).
  • ASM works on all major operating systems, so it is platform independent.
  • Using disk groups makes configuration easier, as files are placed into disk groups.
  • The storage system can store all types of database files.
  • ASM is free.
  • Improves performance.

 

There are three forms of mirroring (a CREATE DISKGROUP example follows this list):

  • External redundancy – has no failure groups and is effectively a no-mirroring strategy.
  • Normal redundancy – provides two-way mirroring of all extents in a disk group, which requires at least two failure groups.
  • High redundancy – provides three-way mirroring of all extents in a disk group, which requires at least three failure groups.
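A minimal sketch of creating a normal-redundancy disk group with two failure groups (the disk group, failure group and device names are placeholders):

CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/rdsk/disk1', '/dev/rdsk/disk2'
  FAILGROUP fg2 DISK '/dev/rdsk/disk3', '/dev/rdsk/disk4';

Each extent written to fg1 is mirrored in fg2, so the loss of all disks in one failure group does not lose data.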


  • Not managed by ASM – Oracle binaries, alert log, trace files, init.ora and password file.
  • Managed by ASM – datafiles, SPFILEs, redo log files, archived log files, RMAN backup sets / image copies, flash recovery area.

 

RMAN backup

RMAN is the only way to back up the database files stored on ASM disks.

 

 

ASM Striping

ASM stripes files across all the disks within the disk group, thus increasing performance; the coarse stripe size equals the allocation unit (AU) size. ASM offers two types of striping, and the type used depends on the type of database file.

Coarse Striping

used for datafiles and archived logs (1MB stripes)

Fine Striping

used for online redo logs, controlfiles and flashback files (128KB stripes)
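Which striping a given file type receives is driven by the disk group templates; a quick sketch to see them from the ASM instance (group_number 1 is a placeholder):

SELECT name, stripe, redundancy
FROM   v$asm_template
WHERE  group_number = 1
ORDER BY name;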

 

ASM file management has a number of benefits over typical 3rd-party LVMs:

  • performance
  • redundancy
  • ease of management
  • security


Performance Issues :

 

1.  What are wait events? Give the functionality of any 5 wait events.

a. db file scattered read: occurs during multiblock I/O, typically full table scans.

b. log file sync: occurs when sessions wait for LGWR to flush the redo log buffer on commit; it is common when commits are very frequent or the log files are small. Committing in batches (for example, after every 10,000 rows updated) rather than row by row reduces these waits.

c. db file sequential read: occurs during single-block reads, typically index access. When there is excessive physical I/O for indexes, check V$SYSSTAT for physical I/O and review the SQL and its WHERE clauses.

Tune the SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.

d. buffer busy waits: occur when there is contention for buffers in the buffer cache. Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) and analyze the contention from SYS.V$BH.

e. latch waits: a latch is a lightweight memory lock protecting shared memory structures in the SGA; latch waits indicate contention for those structures.

f. log buffer space: occurs when sessions wait for space in the redo log buffer. Increase the LOG_BUFFER parameter or move the redo log files to faster disks.
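To see which wait events are actually dominating an instance, one simple sketch against V$SYSTEM_EVENT (the list of idle events filtered out here is deliberately minimal):

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event NOT LIKE 'SQL*Net%'
AND    event NOT IN ('pmon timer', 'smon timer', 'rdbms ipc message')
ORDER BY time_waited DESC;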

 

2.       What are the contents of TKPROF output? Give a brief description.

The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query you can use TKPROF to get more information.

TKPROF reports each statement executed along with the resources it has consumed, the number of times it was called, and the number of rows it processed. This information lets you easily locate the statements that are using the greatest resources. With experience or with baselines available, you can assess whether the resources used are reasonable given the work done.

count    = number of times OCI procedure was executed

cpu      = cpu time in seconds executing

elapsed  = elapsed time in seconds executing

disk     = number of physical reads of buffers from disk

query    = number of buffers gotten for consistent read

current  = number of buffers gotten in current mode (usually for update)

rows     = number of rows processed by the fetch or execute call

Things to look out for in the trace include:

•When tracing lots of statements at once, such as batch processes, you can quickly discard those statements which have acceptable CPU times. It's often better to focus on those statements that are taking most of the CPU time.

•Inefficient statements are mostly associated with a high number of block visits. The query column indicates block visits for read consistency, including all query and subquery processing. The current column indicates visits not related to read consistency, including segment headers and blocks that are going to be updated.

•The number of blocks read from disk is shown in the disk column. Since disk reads are slower than memory reads you would expect this value to be significantly lower than the sum of the query and current columns. If it is not you may have an issue with your buffer cache.

•Locking problems and inefficient PL/SQL loops may lead to high cpu/elapsed values even when block visits are low.

•Multiple parse calls for a single statement imply a library cache issue.
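A minimal sketch of producing and formatting such a trace (the SID/serial#, trace file name and sort options are placeholders; DBMS_MONITOR is available from 10g):

SQL> exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
-- run the problem workload, then:
SQL> exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);

$ tkprof orcl_ora_12345.trc tkprof_out.txt sys=no sort=exeela,fchela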

 

3.       What steps will you take when a query is too slow?

First verify whether the query was running fine before, and establish when it started running slow.

1. Check whether any database-related changes have happened (for example a new PSU applied, an upgrade, or invalid objects).

Check whether any database upgrades were done recently or any patches applied to a higher version. The same queries with the same data can run very slowly in a newer version after an upgrade from a lower version; in that case you might be hitting a CBO (optimizer) bug. Contact Oracle Support for more details regarding such bugs.

2. Are the statistics fine? Check for any massive data loads during the past night, check for code changes, and check when the tables were last analyzed. Check that the tables in the query have proper indexes, and check the tables, indexes and other objects for recent statistics. Avoid database functions on indexed columns and full table scans as much as possible, create partitions if necessary for faster access, and use MERGE statements for huge updates and deletes. Check whether database statistics were gathered after the data load; a simple way to gather statistics is DBMS_STATS.GATHER_TABLE_STATS.

3. I/O issues: sometimes queries are very slow due to excessive I/O between the database and the client.

The aim is to avoid unnecessary disk I/O and make the physical reads and writes more efficient.

First get the statements with the largest amount of physical reads and logical reads; a simple way to get them is:

 

For Physical Reads:

Select disk_reads, sql_text from v$sqlarea where disk_reads > 10000 order by disk_reads desc;

For Logical Reads:

Select Buffer_gets, sql_text from v$sqlarea where buffer_gets > 200000 order by Buffer_gets desc;

4. Read hit ratio:

For example, a read hit ratio of 98.99 is healthy; a hit ratio below 90-95% is a sign of poor indexing.

Check the Shared Pool Size:

It is important to look at the shared_pool_size for proper sizing. With a greater amount of procedures, packages and triggers being utilized with Oracle, the shared_pool_size makes up a much greater portion of the Oracle SGA. This is the memory allocated for the library and data dictionary cache. If the shared_pool_size is set too low then you will not get the full advantage of your DB_BLOCK_BUFFERS.

How to find the data dictionary cache miss ratio?

Select sum(gets) “Gets”, sum(getmisses) “Misses”, (1-(sum(getmisses)/(sum(gets)+ sum(getmisses)))) * 100 “HitRate”

From gv$rowcache;

Gets          Misses           HitRate

11000           558            96.5

This is a good ratio and does not require any further action.

How to find library Cache Hit Ratio?

select sum(pins) Executions, sum(pinhits) “Execution Hits”, ((sum(pinhits)/sum(pins)) * 100)phitrat, sum(reloads)Misses, ((Sum(pins)/(sum(pins) + sum(reloads))) * 100) hitrat from gv$librarycache;

 

5. Check whether there are any object locks on the tables involved, or whether the session is being blocked by another session.

6. Check the execution plan. Is it the same as before? Is it using the expected index?

7. Tune the SQL query using the SQL Tuning Advisor and review Oracle's recommendations.
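Two quick sketches that help with steps 5 and 6 above (the sql_id value is a placeholder):

-- Who is blocking whom
SELECT blocking_session, sid, serial#, wait_class, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- Actual execution plan of a cursor still in the library cache
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0v3dvmc22qnam', NULL, 'ALLSTATS LAST'));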

 

4.       What happens during one SQL execution?

1. Syntax and semantic parsing is done.

2. The execution plan is created by the query optimizer, which reads the optimizer statistics (dba_tab_statistics).

3. Execute (and fetch, for queries).

Simple Oracle transaction

1. User requests a connection to oracle

2. A new dedicated server process is started for the user

3. User executes a statement to insert data in to a table

4. Oracle checks the user's privileges; it first checks the cached dictionary information in the shared pool (a cache hit) and, if the information is not found there, retrieves it from disk.

5. Oracle checks to see if the SQL statement has been parsed before (in the library cache); if it has, this is called a soft parse, otherwise the statement has to be compiled (a hard parse).

6. Oracle creates a private SQL area in the user’s session’s PGA

7. Oracle checks to see if the data is in the buffer cache, otherwise perform a read from the data file

8. Oracle will then apply row-level locks where needed to prevent others changing the row (select statements are still allowed on the row)

9. Oracle then writes the change vectors to the redo log buffer

10. Oracle then modifies the row in the data buffer cache

11. The user commits the transaction making it permanent; the row-level locks are released

12. The log writer process immediately writes out the changed data in the redo log buffers to the online redo log files, in other words the change is now recoverable.

13. Oracle informs the user process that the transaction was completed successfully

14. It may be sometime before the data buffer cache writes out the change to the data files.

Note: if the user's transaction was an update, then the before-update image of the row would have been written to undo (rollback) buffers in the buffer cache; this is used if the user rolls back the change or if another user runs a select on that data before the new update is committed.

 

 

5.       How does block transition happen during rollback, between the undo tablespace and the database buffer cache?

The undo tablespace stores the before image.

Oracle uses rollback segment information to undo any uncommitted transactions, because rollback segment blocks are cached in the database buffers just like data blocks.

The undo data which gets generated during transactions (for example in the scenario above) is kept for some time. If there is no space left in memory, this undo data is flushed to undo segments in the undo tablespace, where it resides for at least the time period defined by the UNDO_RETENTION init.ora parameter. This undo data is used when rolling back a transaction. Please note that redo log files are never used for rolling back a transaction. Redo logs and archive logs are referred to (read) only during recovery of the database; at all other times data is written to the redo log files but never read from them. So for rolling back a transaction, the undo data saved in the undo tablespace is required.
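To see how much undo the currently active transactions are holding, a quick sketch joining V$TRANSACTION and V$SESSION:

SELECT s.sid, s.username, t.used_ublk, t.used_urec, t.start_time
FROM   v$transaction t, v$session s
WHERE  t.ses_addr = s.saddr
ORDER BY t.used_ublk DESC;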

6.       Can we change the port number 1251? If yes, how?

Yes, we can change it in the listener.ora file.

netstat -lnap | grep 3872   (to check whether the port is free or not)
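A minimal sketch of the change in listener.ora (the host name and new port 1526 are placeholders); reload the listener afterwards:

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1526))
  )

$ lsnrctl reload listener

Remember to update the tnsnames.ora entries (and LOCAL_LISTENER, if set) to point at the new port.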

7.       What happens during statistics gathering?

All the table and index information gets updated in dba_tab_statistics.

• Flush database monitoring information from the SGA to the dba_tab_modifications view.

• Gather statistics for tables whose cumulative modifications, compared to dba_tables.num_rows, exceed the modification threshold percentage.

• Gather statistics for tables that have never been analyzed (last_analyzed is NULL).

• Gather statistics for tables which do not have monitoring enabled, and afterwards enable monitoring for them so that there is data in dba_tab_modifications the next time we gather statistics.

This feature monitors DML operations – insert/update/delete (and truncate) on monitored tables.

Table monitoring can be implemented on a table by executing the following statement:

“alter table TABLE_NAME monitoring”.

The advantage of this feature is when gathering statistics.

Based on the dba_tab_modifications view we can gather statistics for only tables that have more than X% changes (compared to num_rows in user_tables) since last gather statistics.

Actually, there is no reason for us to re-analyze tables that have not changed enough.

Where are the statistics stored?

Statistics are stored into the Oracle Data Dictionary, in tables owned by SYS. Views are created on these tables to retrieve data more easily.

These views are prefixed with DBA_ or ALL_ or USER_. For ease of reading, we will use DBA_% views, but ALL_% views or USER_% views could be used as well.
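A minimal sketch of gathering statistics manually (the schema, table and degree values are placeholders):

exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);
exec dbms_stats.gather_schema_stats(ownname => 'SCOTT', options => 'GATHER STALE', degree => 4);

GATHER STALE relies on the monitoring data in dba_tab_modifications described above, so only tables with enough changes are re-analyzed.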

8.       CPU utilization is high on a server. What would you do to reduce the utilization?

http://orachat.com/cpu-consumption-oracle/

1. Verify the observation of high CPU consumption on the affected server and collect top and vmstat data for a period of time (from 30 minutes up to a few hours).

2. Find which processes are using the most CPU (using the top command).

This command is useful because it shows at a glance the overall CPU statistics and the processes consuming the most CPU (ranked in descending order of CPU usage).

3. If the top CPU consumers are Oracle processes, then find out which database sessions correspond to these processes (using the ps command).

The ps command shows the accumulated CPU time for each process as well as the full name of the program associated with the process ID (PID).
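Once the top operating system PIDs are known, a sketch for mapping a PID back to its database session and current SQL (&os_pid is a substitution placeholder):

SELECT s.sid, s.serial#, s.username, s.status, s.sql_id, p.spid
FROM   v$session s, v$process p
WHERE  s.paddr = p.addr
AND    p.spid  = &os_pid;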

 

9.       What are the db file sequential read and db file scattered read events?

db file sequential read waits: A sequential read wait occurs within an Oracle database when a single block is read. A single read is most commonly an index probe by ROWID into an individual table, or the access of an index block. Sequential reads are single-block reads, as opposed to multiblock (scattered) reads.

db file scattered read waits: Scattered read waits occur when multiblock I/O is invoked. When the Oracle Database performs a full-table scan or sort operation, a multiblock read is automatically invoked.

Solutions to Physical Read Waits :

When we have identified the objects that experience the physical read waits, we can use Statspack to extract the SQL associated with the waits and take the following actions to correct the problem. These corrective actions are presented in the order in which they are most likely to be effective, and some may not apply to your environment.

Tune the SQL: This is the single most important factor in reducing disk waits. If an SQL statement can be tuned to reduce disk I/O (for example, by using an index to remove an unnecessary large-table full-table scan), then the amount of disk I/O and associated waits are dramatically reduced. Other SQL tuning might include:

Change table join order: For sequential read waits, the SQL may be tuned to change the order that the tables are joined (often using the ORDERED hint)

Change indexes: You can tune the SQL by adding function-based indexes or using an INDEX hint to make the SQL less I/O-intensive by using a more selective index.

Change table join methods: Often, nested loop joins have fewer I/O waits than hash joins, especially for sequential reads. You can change table join methods with SQL hints (USE_NL, for example). If you are not yet using Oracle9i with pga_aggregate_target, you can change the propensity for hash join by adjusting the hash_area_size parameter.

Re-schedule contentious SQL: After you have identified the regular trends of repeating disk waits, you can often reschedule the execution of the SQL at another time, thereby relieving the physical waits.

Re-analyze schema using dbms_stats – In some cases, stale or non-representative statistics generated by the dbms_utility.analyze_schema package can cause suboptimal SQL execution plans, resulting in unnecessary disk waits. The solution is to use the dbms_stats package to analyze your schema. Also, note that if column data values are skewed adding histograms may also be necessary.

Distribute disk I/O across more spindles: Sometimes disk channel contention is responsible for physical read waits, which will show up in your disk monitor tool (iostat, EMC Symmetrics Manager, and so on). If you experience disk waits as a result of hardware contention, you can stripe the offending objects across multiple disk spindles by reorganizing the object and using the MINEXTENTS and NEXT parameters to stripe the object across multiple data files on multiple disks or use volume manager or I/O subsystem provided striping mechanisms.

Use the KEEP pool: For reducing scattered reads, many experts recommend implementing the KEEP pool. In the Oracle Magazine article "Advanced Tuning with Statspack", the author notes that small-table full-table scans should be placed in the KEEP pool to reduce scattered read waits.

Increase the data buffer cache size: Obviously, the more data blocks we keep in RAM, the smaller the probability of read waits.
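As an illustration of the KEEP pool suggestion above, a minimal sketch (the table name and cache size are placeholders; db_keep_cache_size must be non-zero before objects are assigned to the pool):

ALTER SYSTEM SET db_keep_cache_size = 64M;
ALTER TABLE scott.lookup_codes STORAGE (BUFFER_POOL KEEP);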


What is Cache Fusion? Synopsis & Overview :

Cache Fusion is the driving technology behind Oracle RAC that enables applications to scale out across multiple servers/instances.

Cache Fusion/synchronization enables concurrent/simultaneous transaction processing across all instances using the private cluster interconnect.

DB blocks are synchronized, NOT mirrored, which gives faster performance.

Cache Fusion is very fast because disk writes are eliminated when other instances request blocks for updates.

Cache Fusion is a mechanism within Oracle RAC that employs a shared cache architecture, fusing the in-memory data buffer caches across all nodes into a single logical read-consistent buffer cache available to all instances.

Cache fusion refers to the protocol for sharing of instance buffer cache contents through fast inter-node messaging, resulting in a cluster-wide global buffer cache. There are two types of sharing involved: Read-Sharing, which refers to the mechanism used by a query to access the contents of another instance’s buffer cache, and Write-Sharing which refers to the mechanism by which an update operation accesses data in another instance’s cache. In the following subsections, we describe both kinds of sharing, followed by a brief description of inter-node messaging.

Cache Fusion Read-Sharing:

The mechanism for read-sharing in Cache Fusion exploits Oracle’s Consistent Read (CR) mechanism [1]. CR is a version-based concurrency control protocol which allows transactions to perform reads without acquiring any locks.
Each transaction in Oracle is associated with a snapshot time, known as the System Change Number (SCN), and the CR mechanism guarantees that any data read by a transaction is transactionally consistent as of that SCN.
When a transaction performs a change to a block, it stores the information required to undo that change in a rollback segment. When a transaction reads a block, the CR mechanism uses the stored undo information to create an earlier version of the block (a clone) which is consistent as of the reading transaction’s SCN. Clones are created in-memory and are never written to disk. A read operation therefore never needs to wait for another transaction to commit or abort since the CR mechanism automatically reconstructs the version of the block required by the operation. This mechanism therefore allows high concurrency for read operations.

In RAC, when Instance A requires read access to a block that is present in the buffer cache in Instance B, it requests a copy of the block from Instance B without requiring any change of resource ownership. Instance B creates a transactionally consistent CR clone of the block and ships it back to Instance A across the interconnect. Doing so has no impact on processes on Instance B since ownership of the block by Instance B is not affected.
Only when the requested block is not present in any instance’s cache is a disk I/O performed for the block.
However, the Read-Sharing protocol guarantees that once a block is read from disk by any instance in RAC, subsequent read accesses to that block from any other instance do not require disk I/O or inter-node ownership changes.

Cache Fusion Write-Sharing:

Write-sharing is handled by the GCS. When Instance A wishes to update a block it invokes the GCS to perform the necessary cache-coherency messaging to obtain a copy of the block.
If the GCS determines that the block is already in another instance (B’s) buffer cache, it notifies instance B that it must release its ownership of the block. Instance B then saves a copy of the block in its cache for future read access and releases ownership. Along with the message acknowledging the release, Instance B also ships its cached copy of the block to the requesting instance, even if that copy is dirty (i.e. contains changes that have not been written to disk). Thus, sharing dirty buffers between instances does not require any extra disk writes and reads.
Only if the block is not already present in any instance’s buffer cache must the requesting instance issue a disk read for the block.

An important benefit of the write-sharing fusion protocol is that after a write request is performed for a block, the instance that had the current copy can continue to perform read accesses on the block. Thus, in the above example,
Instance B can continue to perform read accesses on its cached image of the block even after it has relinquished ownership of the block and sent a copy over to Instance A. This is in contrast with typical shared-disk protocols in which a write request by a node invalidates all cached copies
and prevents any other nodes from accessing that block for the duration of the write.

Cache Fusion Read-Sharing and Write-sharing therefore ensure that in RAC, the total number of disk I/Os that need to be performed is comparable to the number of I/Os that would be performed by the same workload running on a single instance of Oracle with a buffer cache equal to the sum of the buffer caches of all the constituent RAC instances.

 

 

DB blocks are transferred in memory from instance cache to instance cache over the cluster interconnect when requested, after the proper locking procedures are carried out.

Global Cache Service (GCS) is used for FAST instance-to-instance block buffer transfer and establishes/implements Cache Coherency = Never more than 3 hops.

Global Enqueue Service (GES), previously known as Dynamic Lock Manager (DLM) is used for block buffer locking.

Global Resource Directory (GRD) is used for keeping track of Block Buffer Location/Mode/Role information.

The Private Cluster InterConnect is used for block-transfers amongst instances to enable Cache Fusion.

 

Global Resource Directory(GRD) :

GCS & GES maintain the Global Resource Directory (GRD).

Internal Repository stored by all instances of the RAC Cluster.

Global Resource Directory (GRD) is used for keeping track of Data Structures, Block Buffer Location, Mode, Role, Inventory etc.

 

Global Cache Service (GCS) :

The backbone of Cache Fusion: Responsible for Cache Coherence.

Responsible for maintaining different block modes and transfer of data buffers amongst the instances.

Implemented by the Global Cache Service Processes(LMSn).

Lock Manager Server (LMS): Processes that are responsible for remote messaging.

LMSn: n = 0 – 9: up to 10 LMS processes; the number can be set with the init parameter GCS_SERVER_PROCESSES.

Global Enqueue Service (GES):

Global Enqueue Service (GES), previously known as Dynamic Lock Manager (DLM) is responsible for locking mechanisms used in Cache Fusion.

The LMON process is responsible for cluster monitoring and management of global resources; it is also known as Cluster Group Services.

LMD0 processes responsible for:

1.Management of resource requests from RAC instances.

2.Distributed Deadlock Detections.

3.Processing of Enqueued Requests.

4.Access Control to Global Enqueues.

 

Some useful Dynamic Performance Views for monitoring Cache Fusion:
1. v$gc_element
2. v$cache
3. v$instance_cache_transfer
4. v$cr_block_server
5. v$cache_transfer
6. v$ges_blocking_enqueue
7. gv$file_cache_transfer
8. gv$temp_cache_transfer
9. gv$cache_transfer
10. gv$class_cache_transfer
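For a quick cluster-wide view of how much time is being spent on global cache transfers, one possible sketch against GV$SYSTEM_EVENT:

SELECT inst_id, event, total_waits, time_waited
FROM   gv$system_event
WHERE  event LIKE 'gc%'
ORDER BY time_waited DESC;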

 

Creating a Physical Standby Using RMAN DUPLICATE (or refer to MOS note ID 1075908.1)

 

SN   List of Activities

1)   Make the necessary changes to the primary database.

     a)  Enable force logging.
     b)  Create the password file if one does not exist.
     c)  Create standby redo logs.
     d)  Modify the parameter file suitably for Data Guard.

2)   Ensure that the SQL*Net connectivity is working fine.

3)   Create the standby database over the network using the active (primary) database files.

     a)  Create the password file.
     b)  Create the initialization parameter file for the standby database (auxiliary database).
     c)  Create the necessary mount points or folders for the database files.
     d)  Run the standby creation ON STANDBY by connecting to the primary as the target database.

4)   Check the log shipping and apply.

Type        Definition                            Connect Statement                               Channel Statement

Target      The database that is cloned from      connect target sys/password@target_sid          allocate channel t1 type disk;

Auxiliary   The new database that is cloned to    connect auxiliary sys/password@auxiliary_sid    allocate auxiliary channel a1 type disk;

Here are the primary and standby unique names used in this document.

Primary unique name: example1

Standby unique name: example2

 

1.     Prepare the production database to be the primary database

 

a)      Ensure that the database is in archivelog mode .

 

SQL> select log_mode from v$database;

 

LOG_MODE

————

ARCHIVELOG

 

b)      Enable force logging

 

SQL> ALTER DATABASE FORCE LOGGING;

 

c)       Create standby redologs

 

The number of standby redo log files = number of online redo log files on the primary + 1

alter database add standby logfile 'path_of_file/file_name.dbf' size size_of_file;

example:

alter database add standby logfile 'full path along with filename' size 256M;

 

d)      Modify the primary initialization parameters for Data Guard on the primary

alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(example1,example2)';

alter system set LOG_ARCHIVE_DEST_1='LOCATION=/primary_VAH/oracle/admin/example1/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=example1';

alter system set LOG_ARCHIVE_DEST_2='SERVICE=example2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=example2';

alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE;

alter system set FAL_SERVER=example2;

alter system set FAL_CLIENT=example1;

alter system set DB_FILE_NAME_CONVERT='/standby_VAH/','/primary_VAH/' scope=spfile;

alter system set LOG_FILE_NAME_CONVERT='/standby_VAH/','/primary_VAH/' scope=spfile;

2.     Ensure that the SQL*Net connectivity is working fine.

 

a)      Prepare the listener file on the standby side.

b)      Provide the TNS details of the primary on the standby and vice versa.

c)       Check the SQL*Net configuration using the following commands on the primary AND standby:

 

tnsping example1   (primary)

tnsping example2   (standby)

 

3.     Create the standby database

 

a)      Copy the password file from the primary $ORACLE_HOME/dbs and rename it to the standby database name.

 

The username is required to be SYS and the password needs to be the same on the Primary and Standby.

The best practice for this is to copy the passwordfile as suggested.

The password file name must match the instance name/SID used at the standby site, not the DB_NAME.

 

b)  Create an initialization parameter file for the standby database with the minimum required parameters:

 

DB_NAME=<same as primary>

DB_UNIQUE_NAME= example2

DB_BLOCK_SIZE=<same as primary>

 

c)       Create the necessary directories in the standby location to place the datafiles and the trace files.

 

d)      Set the environment variable ORACLE_SID to the standby SID and start the standby instance.

 

export ORACLE_SID= example2

startup nomount pfile=$ORACLE_HOME/dbs/initSID.ora

 

e)       Verify that the connection 'AS SYSDBA' is working:

 

sqlplus /nolog

SQL> connect sys/<passwd>@example1 AS SYSDBA

     connect sys/<passwd>@example2 AS SYSDBA

 

f)      On the primary system invoke the RMAN executable and connect to the primary and the auxiliary database (i.e., the standby):

 

rman target sys/password@example1 auxiliary sys/password@example2

run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;

duplicate target database for standby from active database
spfile
  parameter_value_convert '/primary_VAH/','/standby_VAH/'
  set db_unique_name='example2'
  set db_file_name_convert='/primary_VAH/','/standby_VAH/'
  set log_file_name_convert='/primary_VAH/','/standby_VAH/'
  set control_files='/standby_VAH/oracle/dbdata/sys01/CPPP1/oCPPP2CF01.dbf'
  set log_archive_max_processes='5'
  set fal_client='example2'
  set fal_server='example1'
  set standby_file_management='AUTO'
  set log_archive_config='dg_config=(example1,example2)'
  set log_archive_dest_1='service=<prim_db> ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=example1'
;
}

 

 

4.     Now connect to the standby with SQL*Plus and start MRP (the Managed Recovery Process). Compare the primary's latest log sequence with the sequence MRP is applying.

 

a)      Make the following changes on the standby side:

alter system set log_archive_dest_1='location=/VAH/oracle/admin/example2/arch/' scope=both;

alter system set log_archive_dest_2='service=example1 ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=example1' scope=both;

alter system set log_archive_dest_state_2=defer scope=both;

 

b)      Start MRP process in standby side

 

Alter database recover managed standby database disconnect from session;

Useful Notes(commands) for DR :

                         @@@@@@@@@@@@@@@@

Startup commands

#############################

startup nomount

alter database mount standby database;

alter database recover managed standby database disconnect;

 

Active data guard

############################

alter database recover managed standby database cancel;

ALTER DATABASE OPEN READ ONLY;

alter database recover managed standby database disconnect;

 

For checking the Primary and DR sync :

In the primary side

#####################################

alter system switch logfile; (3 times)

checking the primary and DR sync

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

select thread#, max(sequence#) from v$archived_log group by thread#;  (IN PRIMARY)

select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#; (IN SECONDARY)

select message from v$dataguard_status;   (on the standby side)

############################################################################

Check which processes are running on the standby database (are RFS and MRP running or not?)

#############################################################################

SQL> select process,status from v$managed_standby;

PROCESS   STATUS

——— ————

ARCH      CONNECTED

ARCH      CONNECTED

ARCH      CONNECTED

ARCH      CONNECTED

RFS       IDLE

RFS       RECEIVING

MRP0      WAIT_FOR_LOG

 

7 rows selected.

 

################################################################

Check the below parameter has been configured properly or not?

#################################################################

SQL> show parameter DB_FILE_NAME_CONVERT

NAME                                 TYPE        VALUE

———————————— ———– ——————————

db_file_name_convert                 string      <+DISK_NAME>/, <+DISK_NAME>/

                                               

SQL> show parameter LOG_FILE_NAME_CONVERT

NAME                                 TYPE        VALUE

———————————— ———– ——————————

log_file_name_convert                string       <+DISK_NAME>/, <+DISK_NAME>/

                                              

alter system set db_file_name_convert='<+DISK_NAME>/','<+DISK_NAME>/' scope=spfile sid='*';

 

SQL> SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE

———————————— ———– ——————————

standby_file_management              string      AUTO

 

@@@@@@@@@@@@@@@@@@@@

Trouble shooting:

@@@@@@@@@@@@@@@@@@@@

 

1. If you are getting (ORA-16191: Primary log shipping client not logged on standby)

SQL> select DEST_ID,ERROR,RECOVERY_MODE from v$archive_dest_status;

   DEST_ID ERROR                                                             RECOVERY_MODE

———- —————————————————————– ———————–

         1                                                                   IDLE

         2 ORA-16191: Primary log shipping client not logged on standby      IDLE

 

         3                                                                   IDLE

#######################################################################################

Then ACTION PLAN (IN The Primary Side : defer and enable the log_archive_dest_state_2)

########################################################################################

SQL> alter system set log_archive_dest_state_2=defer scope=both sid='*';

System altered.

SQL> alter system set log_archive_dest_state_2=enable scope=both sid='*';

System altered.

 

 

@@@@@@@@@@@

SQL> select error,status from v$archive_dest_status where dest_id=2;

ERROR                                                             STATUS

—————————————————————– ———

ORA-12514: TNS:listener does not currently know of service        ERROR

requested in connect descriptor

 

solution: Reset the log_archive_dest_2 (we can also check the TNS entry)

@@@@@@@@@@@@

SQL> alter system set log_archive_dest_2='service=service_name async' sid='*' scope=both;

System altered.

Scripts related to TEMP TABLESPACE

To check instance-wise total allocated and total used TEMP space, for both RAC and non-RAC

 

set lines 152

col FreeSpaceGB format 999.999

col UsedSpaceGB format 999.999

col TotalSpaceGB format 999.999

col host_name format a30

col tablespace_name format a30

select tablespace_name,

(free_blocks*8)/1024/1024 FreeSpaceGB,

(used_blocks*8)/1024/1024 UsedSpaceGB,

(total_blocks*8)/1024/1024 TotalSpaceGB,

i.instance_name,i.host_name

from gv$sort_segment ss,gv$instance i where ss.tablespace_name in (select tablespace_name from dba_tablespaces where contents=’TEMPORARY’) and

i.inst_id=ss.inst_id;

 

Total Used and Total Free Blocks

 

select inst_id, tablespace_name, total_blocks, used_blocks, free_blocks  from gv$sort_segment;

 

Another Query to check TEMP USAGE

 

col name for a20

SELECT d.status "Status", d.tablespace_name "Name", d.contents "Type", d.extent_management "ExtManag",
       TO_CHAR(NVL(a.bytes / 1024 / 1024, 0),'99,999,990.900') "Size (M)",
       TO_CHAR(NVL(t.bytes, 0)/1024/1024,'99999,999.999') || '/' || TO_CHAR(NVL(a.bytes/1024/1024, 0),'99999,999.999') "Used (M)",
       TO_CHAR(NVL(t.bytes / a.bytes * 100, 0), '990.00') "Used %"
FROM   sys.dba_tablespaces d,
       (select tablespace_name, sum(bytes) bytes from dba_temp_files group by tablespace_name) a,
       (select tablespace_name, sum(bytes_cached) bytes from v$temp_extent_pool group by tablespace_name) t
WHERE  d.tablespace_name = a.tablespace_name(+)
AND    d.tablespace_name = t.tablespace_name(+)
AND    d.extent_management like 'LOCAL'
AND    d.contents like 'TEMPORARY';

 

Temporary Tablespace groups

 

SELECT * FROM DATABASE_PROPERTIES where PROPERTY_NAME=’DEFAULT_TEMP_TABLESPACE’;

 

select tablespace_name,contents from dba_tablespaces where tablespace_name like ‘%TEMP%’;

 

select * from dba_tablespace_groups;

 

Block wise Check

 

select TABLESPACE_NAME, TOTAL_BLOCKS, USED_BLOCKS, MAX_USED_BLOCKS, MAX_SORT_BLOCKS, FREE_BLOCKS from V$SORT_SEGMENT;

 

select sum(free_blocks) from gv$sort_segment where tablespace_name = ‘TEMP’;

To Check Percentage Usage of Temp Tablespace

 

select (s.tot_used_blocks/f.total_blocks)*100 as “percent used”

from (select sum(used_blocks) tot_used_blocks

from v$sort_segment where tablespace_name=’TEMP’) s,

(select sum(blocks) total_blocks

from dba_temp_files where tablespace_name=’TEMP’) f;

 

To check Used Extents ,Free Extents available in Temp Tablespace

 

SELECT tablespace_name, extent_size, total_extents, used_extents,free_extents, max_used_size FROM v$sort_segment;

 

To list all tempfiles of Temp Tablespace

 

col file_name for a45

select tablespace_name,file_name,bytes/1024/1024,maxbytes/1024/1024,autoextensible from dba_temp_files  order by file_name;

 

SELECT d.tablespace_name tablespace, d.file_name filename, d.file_id fl_id,
       d.bytes/1024/1024 size_m,
       NVL(t.bytes_cached/1024/1024, 0) used_m,
       TRUNC((t.bytes_cached / d.bytes) * 100) pct_used
FROM   sys.dba_temp_files d, v$temp_extent_pool t, v$tempfile v
WHERE  (t.file_id (+)= d.file_id)
AND    (d.file_id = v.file#);

 

Additional checks

 

select distinct(temporary_tablespace) from dba_users;

 

select username,default_tablespace,temporary_tablespace from dba_users order by temporary_tablespace;

 

SELECT * FROM DATABASE_PROPERTIES where PROPERTY_NAME=’DEFAULT_TEMP_TABLESPACE’;

 

Changing the default temporary Tablespace

 

SQL> alter database default temporary tablespace TEMP;

 

Database altered.

 

To add tempfile to Temp Tablespace

 

alter tablespace  temp  add tempfile ‘&tempfilepath’ size 1800M;

 

alter tablespace temp add tempfile ‘/m001/oradata/SID/temp02.dbf’ size 1000m;

 

alter tablespace TEMP add tempfile ‘/SID/oradata/data02/temp04.dbf’ size 1800M autoextend on maxsize 1800M;

 

To resize the  tempfile in Temp Tablespace

 

alter database tempfile ‘/u02/oradata/TESTDB/temp01.dbf’ resize 250M

 

alter database tempfile ‘/SID/oradata/data02/temp12.dbf’ autoextend on maxsize 1800M;

 

alter tablespace TEMP add tempfile ‘/SID/oradata/data02/temp05.dbf’ size 1800m reuse;

 

To find Sort Segment Usage by Users

 

select username,sum(extents) “Extents”,sum(blocks) “Block”

from v$sort_usage

group by username;

 

To find Sort Segment Usage by a particular User

 

SELECT s.username,s.sid,s.serial#,u.tablespace, u.contents, u.extents, u.blocks

FROM v$session s, v$sort_usage u

WHERE s.saddr=u.session_addr

order by u.blocks desc;

 

To find Total Free space in Temp Tablespace

 

select ‘FreeSpace  ‘ || (free_blocks*8)/1024/1024 ||’ GB’  from v$sort_segment where tablespace_name=’TEMP’;

 

select tablespace_name , (free_blocks*8)/1024/1024  FreeSpaceInGB,

(used_blocks*8)/1024/1024  UsedSpaceInGB,

(total_blocks*8)/1024/1024  TotalSpaceInGB

from v$sort_segment where tablespace_name like ‘%TEMP%’

 

To find  Total Space Allocated for Temp Tablespace

 

select ‘TotalSpace ‘ || (sum(blocks)*8)/1024/1024 ||’ GB’  from dba_temp_files where tablespace_name=’TEMP’;

 

Get 10 sessions with largest temp usage

 

cursor bigtemp_sids is

select * from (

select s.sid,

s.status,

s.sql_hash_value sesshash,

u.SQLHASH sorthash,

s.username,

u.tablespace,

sum(u.blocks*p.value/1024/1024) mbused ,

sum(u.extents) noexts,

nvl(s.module,s.program) proginfo,

floor(last_call_et/3600)||’:’||

floor(mod(last_call_et,3600)/60)||’:’||

mod(mod(last_call_et,3600),60) lastcallet

from v$sort_usage u,

v$session s,

v$parameter p

where u.session_addr = s.saddr

and p.name = ‘db_block_size’

group by s.sid,s.status,s.sql_hash_value,u.sqlhash,s.username,u.tablespace,

nvl(s.module,s.program),

floor(last_call_et/3600)||’:’||

floor(mod(last_call_et,3600)/60)||’:’||

mod(mod(last_call_et,3600),60)

order by 7 desc,3)

where rownum < 11;

 

Displays the amount of IO for each tempfile

 

SELECT SUBSTR(t.name,1,50) AS file_name,

f.phyblkrd AS blocks_read,

f.phyblkwrt AS blocks_written,

f.phyblkrd + f.phyblkwrt AS total_io

FROM   v$tempstat f,v$tempfile t

WHERE  t.file# = f.file#

ORDER BY f.phyblkrd + f.phyblkwrt DESC;

 

select * from (SELECT u.tablespace, s.username, s.sid, s.serial#, s.logon_time, program, u.extents, ((u.blocks*8)/1024) as MB,

i.inst_id,i.host_name

FROM gv$session s, gv$sort_usage u ,gv$instance i

WHERE s.saddr=u.session_addr and u.inst_id=i.inst_id  order by MB DESC) a where rownum<10;

 

Check for ORA-1652

 

show parameter background

 

cd <background dump destination>

 

ls -ltr|tail

 

view <alert log file name>

 

shift + G —> to get the tail end…

 

?ORA-1652   — to search for the error

shift + N   — to jump to the next occurrence of the error

 

I used these queries to check some settings:

 

— List all database files and their tablespaces:

select  file_name, tablespace_name, status

,bytes   /1000000  as MB

,maxbytes/1000000  as MB_max

from dba_data_files ;

 

— What temporary tablespace is each user using?:

select username, temporary_tablespace, default_tablespace from dba_users ;

 

— List all tablespaces and some settings:

select tablespace_name, status, contents, extent_management

from dba_tablespaces ;

TABLESPACE_NAME                CONTENTS  EXTENT_MAN STATUS

—————————— ——— ———- ———

SYSTEM                         PERMANENT DICTIONARY ONLINE

TOOLS                          PERMANENT DICTIONARY ONLINE

TEMP                           TEMPORARY DICTIONARY OFFLINE

TMP                            TEMPORARY LOCAL      ONLINE

 

Now, the above query and the storage clause of the old ‘create tablespace TEMP’ command seem to tell us the tablespace only allows temporary objects, so it should be safe to assume that no one created any tables or other permanent objects in TEMP by mistake, as I think Oracle would prevent that. However, just to be absolutely certain, I decided to double-check. Checking for any tables in the tablespace is very easy:

 

-- Show the number of tables in the TEMP tablespace - SHOULD be 0:
select count(*)
from   dba_all_tables
where  tablespace_name = 'TEMP';

 

Checking for any other objects (views, indexes, triggers, pl/sql, etc.) is trickier, but this query seems to work correctly – note that you’ll probably need to connect internal in order to see the sys_objects view:

 

-- Show all objects which exist in the TEMP tablespace - should get
-- NO rows for this:
column owner        format a20
column object_type  format a30
column object_name  format a40

select o.owner, o.object_name, o.object_type
from   sys_objects s,
       dba_objects o,
       dba_data_files df
where  df.file_id = s.header_file
and    o.object_id = s.object_id
and    df.tablespace_name = 'TEMP';

 

Identifying WHO is currently using TEMP Segments

 

10g onwards

 

SELECT sysdate, a.username, a.sid, a.serial#, a.osuser,
       (b.blocks*d.block_size)/1048576 MB_used, c.sql_text
FROM   v$session a, v$tempseg_usage b, v$sqlarea c,
       (select block_size from dba_tablespaces where tablespace_name = 'TEMP') d
WHERE  b.tablespace = 'TEMP'
AND    a.saddr = b.session_addr
AND    c.address = a.sql_address
AND    c.hash_value = a.sql_hash_value
AND    (b.blocks*d.block_size)/1048576 > 1024
ORDER  BY b.tablespace, 6 desc;

 

How to drop Oracle RAC database manually?

Take this example:
Our Oracle RAC database name is RACDB.
The database has two instances, named RAC1 and RAC2.

Dropping Oracle RAC database step by step as follows:

1) First, stop the database from the command line using srvctl.
srvctl stop database -d RACDB

2) Now remove the instance and database entries from CRS (Cluster Ready Services).
srvctl remove instance -d RACDB -i RAC1

srvctl remove instance -d RACDB -i RAC2

srvctl remove database -d RACDB

3) After finishing the above steps, start the database on the first instance using SQL*Plus.
SQL> startup

SQL> alter system set cluster_database=false scope=spfile;

SQL> alter system set cluster_database_instances=1 scope=spfile;

SQL> alter database disable thread 2;

4) Delete the unwanted threads and redo logfiles. Thread 1 belongs to the active instance; the other threads belong to the removed instances. Drop all redo log groups of the other threads. For example, if groups 4, 5, and 6 belong to the other thread, drop them as follows.

SQL> select thread#, group# from v$log;

SQL> alter database drop logfile group 4;

SQL> alter database drop logfile group 5;

SQL> alter database drop logfile group 6;

5) Drop the unwanted undo tablespace

SQL> drop tablespace UNDOTBS2 including contents and datafiles;

SQL> create pfile from spfile;

6) Shut down the database cleanly with "shutdown immediate"; do not use "shutdown abort".

SQL> shutdown immediate

7) Now start the database again in restricted mount mode.

SQL> startup mount restrict

8) Drop the database.

SQL> drop database;

The Oracle RAC database and its instances have now been dropped cleanly.

 

Performing a Switchover Operation using Dataguard Broker.

You can switch the role of the primary database and a standby database using the SWITCHOVER command. Before you issue the SWITCHOVER command, you must ensure:

  • The states of the primary and standby databases are TRANSPORT-ON and APPLY-ON, respectively.
  • All participating databases are in good health, without any errors or warnings present.
  • The standby database properties were set on the primary database, so that the primary database can function correctly when transitioning to a standby database (see the relevant properties in the following examples).
  • Standby redo log files are set up on the primary database, and the LogXptMode configurable database property is set to SYNC if the configuration is operating in either maximum availability mode or maximum protection mode (a sketch of both follows this list).
  • If fast-start failover is enabled, you can perform a switchover only to the standby database that was specified as the target standby database.
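For the last two prerequisites above, the following is a minimal sketch of how the transport mode could be set through the broker and how standby redo logs could be added; the log size and thread number are illustrative, not values taken from this configuration.

DGMGRL> EDIT DATABASE 'North_Sales' SET PROPERTY LogXptMode='SYNC';

SQL> -- add standby redo log groups (commonly one more group than the online redo log groups, same size)
SQL> alter database add standby logfile thread 1 size 50m;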

 

Step 1   Check the primary database.

Use the SHOW DATABASE VERBOSE command to check the state, health, and properties of the primary database, as follows:

DGMGRL> SHOW DATABASE VERBOSE 'North_Sales';

 

Database

 Name:            North_Sales

 Role:            PRIMARY

 Enabled:         YES

 IntendedState:  TRANSPORT-ON

 Instance(s):

   sales1

 

 Properties:

   DGConnectIdentifier             = ‘North_Sales.foo.com’

   ObserverConnectIdentifier       = ”

   LogXptMode                      = ‘SYNC’  

   DelayMins                       = ‘0’

   Binding                         = ‘OPTIONAL’

   MaxFailure                      = ‘0’

   MaxConnections                  = ‘1’

   ReopenSecs                      = ‘300’

   NetTimeout                      = ’30’

   RedoCompression                 = ‘DISABLE’

   LogShipping                     = ‘ON’

   PreferredApplyInstance          = ”

   ApplyInstanceTimeout            = ‘0’

   ApplyParallel                   = ‘AUTO’

   StandbyFileManagement           = ‘AUTO’

   ArchiveLagTarget                = ‘0’

   LogArchiveMaxProcesses          = ‘5’

   LogArchiveMinSucceedDest        = ‘1’

   DbFileNameConvert               = ‘dbs/bt, dbs/t’

   LogFileNameConvert              = ‘dbs/bt, dbs/t’

   FastStartFailoverTarget         = ‘DR_Sales’

   StatusReport                    = ‘(monitor)’

   InconsistentProperties          = ‘(monitor)’

   InconsistentLogXptProps         = ‘(monitor)’

   SendQEntries                    = ‘(monitor)’

   LogXptStatus                    = ‘(monitor)’

   RecvQEntries                    = ‘(monitor)’

   HostName                        = ‘North_Sales.foo.com’

   SidName                         = ‘sales1’

   StandbyArchiveLocation          = ‘/archfs/arch/’

   AlternateLocation               = ”

   LogArchiveTrace                 = ‘8191’

   LogArchiveFormat                = ‘db1r_%d_%t_%s_%r.arc’

   LatestLog                       = ‘(monitor)’

   TopWaitEvents                   = ‘(monitor)’

 

Current status for “North_Sales”:

SUCCESS

In particular, you should examine the standby-related properties (such as LogXptMode, StandbyFileManagement, and the file name convert settings) and the current status of the primary database. See Chapter 4 for information about managing databases.

Step 2   Check the standby database that is the target of the switchover.

Use the SHOW DATABASE VERBOSE command to check the state, health, and properties of the standby database that is the target of the switchover. For example:

DGMGRL> SHOW DATABASE VERBOSE 'DR_Sales';

 

Database

 Name:            DR_Sales

 Role:            PHYSICAL STANDBY

 Enabled:         NO

 IntendedState:  APPLY-ON

 Instance(s):

   dr_sales1

 

 Properties:

   DGConnectIdentifier             = ‘DR_Sales.foo.com’

   ObserverConnectIdentifier       = ”

   LogXptMode                      = ‘SYNC’

   DelayMins                       = ‘0’

   Binding                         = ‘OPTIONAL’

   MaxFailure                      = ‘0’

   MaxConnections                  = ‘1’

   ReopenSecs                      = ‘300’

   NetTimeout                      = ’30’

   RedoCompression                 = ‘DISABLE’

   LogShipping                     = ‘ON’

   PreferredApplyInstance          = ”

   ApplyInstanceTimeout            = ‘0’

   ApplyParallel                   = ‘AUTO’

   StandbyFileManagement           = ‘AUTO’

   ArchiveLagTarget                = ‘0’

   LogArchiveMaxProcesses          = ‘5’

   LogArchiveMinSucceedDest        = ‘1’

   DbFileNameConvert               = ‘dbs/t, dbs/bt’

   LogFileNameConvert              = ‘dbs/t, dbs/bt’

   FastStartFailoverTarget         = ”

   StatusReport                    = ‘(monitor)’

   InconsistentProperties          = ‘(monitor)’

   InconsistentLogXptProps         = ‘(monitor)’

   SendQEntries                    = ‘(monitor)’

   LogXptStatus                    = ‘(monitor)’

   RecvQEntries                    = ‘(monitor)’

   HostName                        = ‘dr_sales.foo.com’

   SidName                         = ‘dr_sales1’

   StandbyArchiveLocation          = ‘/archfs/arch’

   AlternateLocation               = ”

   LogArchiveTrace                 = ‘8191’

   LogArchiveFormat                = ‘db2r_%d_%t_%s_%r.arc’

   LatestLog                       = ‘(monitor)’

   TopWaitEvents                   = ‘(monitor)’

 

Current status for “DR_Sales”:

SUCCESS

In particular, you should examine the current status of the database.

Step 3   Issue the switchover command.

Issue the SWITCHOVER command to swap the roles of the primary and standby databases. The following example shows how the broker automatically shuts down and restarts the old primary database as a part of the switchover. (See the usage notes in Section 8.1.3 for information about how to set up the broker environment so that DGMGRL can automatically restart the primary and standby databases for you.)

DGMGRL> switchover to 'DR_Sales';

Performing switchover NOW, please wait…

New primary database “DR_Sales” is opening…

Operation requires shutdown of instance “sales1” on database “North_Sales”

Shutting down instance “sales1”…

ORA-01109: database not open

 

Database dismounted.

ORACLE instance shut down.

Operation requires startup of instance “sales1” on database “North_Sales”

Starting instance “sales1”…

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is “DR_Sales”

After the switchover completes, use the SHOW CONFIGURATION and SHOW DATABASE commands to verify that the switchover operation was successful.

Step 4   Show the configuration.

Issue the SHOW CONFIGURATION command to verify that the switchover was successful.

DGMGRL> SHOW CONFIGURATION;

 

Configuration

 Name:                DRSolution

 Enabled:             YES

 Protection Mode:     MaxAvailability

 Databases:

   DR_Sales     – Primary database

   North_Sales  – Physical standby database

                – Fast-Start Failover target

 

Fast-Start Failover: ENABLED

 

Current status for “DRSolution”:

SUCCESS

 

Switchover and Failover for Oracle RAC Database Using Physical Standby

Switchover

 

1)      This is for:

  1. Scheduled and unscheduled maintenance at the primary site (e.g., OS patching, database patching).
  2. A primary database that is running normally; the switchover operation must start from the primary site.
  3. Application DR tests; a switchover can be used for a DR test that exercises the non-database parts (failover is the right procedure for a database DR test).

2)      Steps

1.   Shut down all primary instances except one.

srvctl stop instance -i <instance_name> -d <database_name>
srvctl status database -d <database_name>

2.   Shut down all standby instances except one.

srvctl stop instance -i <instance_name> -d <database_name>
srvctl status database -d <database_name>

  3. On the primary site

  i.   SQL> select switchover_status from V$database;

 ii.            If switchover_status shows 'SESSIONS ACTIVE', go to step iii. If it shows 'TO STANDBY', go to step v.

 iii.            SQL> select sid, process, program from v$session where type='USER' and sid <> (select distinct sid from v$mystat);

iv.            If any rows are returned, contact the owners of those sessions and ask whether the processes can be killed. If yes, go to step v. Otherwise wait here until they log out.

 v.            SQL> alter database commit to switchover to physical standby with session shutdown;

vi.            SQL>shutdown immediate;

vii.            SQL>exit;

viii.            Restart the instance and mount it as standby

         SQL> startup nomount;

         SQL> alter database mount standby database;

         SQL> alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';

 ix.            Start the recovery

         SQL> alter database recover managed standby database disconnect;

  4. On the DR site

  i.            SQL>alter database recover managed standby database cancel;

 ii.            SQL>alter database recover managed standby database nodelay;

iii.            Allow recovery to apply all remaining redo through the last switchover point before proceeding; this is needed when an apply time lag is used.

 iv.            SQL> alter database commit to switchover to primary with session shutdown;

 v.            SQL> alter database open;

 vi.            SQL> ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' scope=both sid='*';

  5. On the original primary site, mount the other instances as standby.
  6. On the original DR site, start up and open the other instances (as primary).
  7. Verify the new Data Guard configuration.

Verify the new data guard configuration

On the primary side
*********************

alter system switch logfile;   (run 3 times)

Checking that the primary and DR are in sync:
********************************

select thread#, max(sequence#) from v$archived_log group by thread#;                      (IN PRIMARY)

select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;  (IN STANDBY)

select message from v$dataguard_status;                                                   (IN DR STANDBY)

i.            On the new primary

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

 ii.            On the new standby, make sure the log file has been copied over and applied, or will be applied (if an apply time lag is used).

Failover

1)      This is for:

  1. A situation where the primary database is completely gone and cannot be recovered within the time window defined by the business units.
  2. A real DR test; the failover operation is performed completely on the DR site, with no dependency on the primary site.
  3. Repairing or troubleshooting a database after human mistakes or data corruption; a time lag has to be used in this case.
  4. Documentum is currently using a 4-hour time lag.

2)      Steps

  a. Identify and resolve any gaps in the archived redo logfiles.

 i.            SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;

 ii.           SQL> alter database register physical logfile '2_820_657623969.log';

  b. Identify, copy, and register any other missing archived redo logfiles.

 i.            SQL> select unique thread# as thread, max(sequence#) over (partition by thread#) as last from v$archived_log;

 ii.           SQL> alter database register physical logfile '1_820_657623970.log';

  c. Use the log writer (LGWR) and standby redo logs for redo transport; this avoids steps a and b.
  d. On the DR site, shut down all the standby instances except one.
  e. On the DR site, initiate a failover.

 i.            SQL> alter database recover managed standby database finish force;

ii.            SQL> alter database commit to switchover to primary;

 iii.            SQL> alter database open;

 iv.            SQL> shutdown immediate;

 v.            SQL> startup;

f.    Failover with a time lag, without applying the remaining logs. This is only for repairing or troubleshooting a database after human mistakes or data corruption.

 i.            SQL> alter database activate physical standby database;

 ii.            SQL> shutdown immediate;

  iii.            SQL> startup;

  g.   Start up and open the other instances on the DR site.

h.   Rebuild physical standby on the original primary site after it comes back.

FAL_CLIENT and FAL_SERVER Parameters

FAL (Fetch Archive Log)

Under certain circumstances, either because of network failures or a busy source environment, a gap builds up between the target sending the redo and the destination receiving and applying the changes. Since the MRP/LSP process has no direct communication link with the primary database to obtain a copy of the missing archive files, such archive gaps are resolved by the fetch archive log (FAL) client and server, identified by the initialization parameters FAL_CLIENT and FAL_SERVER.

 

FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server.

FAL_CLIENT specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database). Given the dependency of FAL_CLIENT on FAL_SERVER, the two parameters should be configured or changed at the same time.

FAL_CLIENT and FAL_SERVER are initialization parameters used to configure log gap detection and resolution at the standby database side of a physical database configuration. This functionality is provided by log apply services and is used by the physical standby database to manage the detection and resolution of archived redo logs.

FAL_CLIENT and FAL_SERVER only need to be defined in the initialization parameter file for the standby database(s). It is possible, however, to define these two parameters in the initialization parameter file for the primary database as well, to reduce the work that would be needed if the primary database were ever required to transition its role.

In Primary site:

FAL_SERVER=STANDBY
FAL_CLIENT=PRIMARY

In Standby site:

FAL_SERVER=PRIMARY
FAL_CLIENT=STANDBY
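Both parameters are dynamic, so on the standby they could be set with ALTER SYSTEM; a minimal sketch, assuming PRIMARY and STANDBY are Oracle Net aliases already defined in tnsnames.ora on that host:

SQL> alter system set fal_server='PRIMARY' scope=both sid='*';
SQL> alter system set fal_client='STANDBY' scope=both sid='*';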

 

Data Guard 11g’s Automatic Gap Resolution and ORA-16401 Error:

A log file gap occurs whenever a primary database continues to commit transactions while the LNS process has ceased transmitting redo to the standby database. This can happen when the network or the standby database is down and your Data Guard protection mode is not Maximum Protection. The primary database’s LGWR process continues writing to the current ORL (online redo log), fills it, and then switches to a new ORL while an archive (ARCH) process archives the completed ORL locally. This cycle can repeat itself many times on a busy system before the connection between the primary and the standby is restored, resulting in a large log file gap.

Data Guard uses an ARCH process on the primary database to continuously ping the standby database during the outage to determine its status. When the communication with the standby is restored, the ARCH ping process queries the standby control file (via its RFS process) to determine the last complete log file that the standby received from the primary database. Data Guard determines which log files are required to resynchronize the standby database and immediately begins transmitting them using additional ARCH processes. At the very next log switch, the LNS will attempt and succeed in making a connection to the standby database and will begin transmitting current redo while the ARCH processes resolve the gap in the background. Once the standby apply process catches up to the current redo records, it automatically transitions out of reading from archived redo logs and into reading from the current SRL (standby redo log).

The performance of automatic gap resolution is critical. The primary must be able to transmit data at a much faster pace than its normal redo generation rate if the standby is to have any hope of catching up. The Data Guard architecture enables gaps to be resolved quickly using multiple background ARCH processes, while at the same time the LNS process is conducting normal SYNC or ASYNC transmission of the current redo stream.
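The number of ARCn processes available for this parallel gap shipping is controlled by LOG_ARCHIVE_MAX_PROCESSES; a minimal sketch for checking and, if needed, raising it (the value 8 is illustrative):

SQL> show parameter log_archive_max_processes
SQL> alter system set log_archive_max_processes=8 scope=both sid='*';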

FAL is Data Guard’s Fetch Archive Log capability. It is only used on a physical standby database. When a physical standby database finds that a log file is missing, it can go and fetch it from one of the databases (primary or standby) in the Data Guard configuration. This is also referred to as reactive gap resolution. However, nowadays most gap requests from a physical or logical standby database can be handled by the ping process of the primary database, as mentioned above.
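On a physical standby, one way to watch gap shipping and redo apply progress is to query V$MANAGED_STANDBY; a minimal sketch:

SQL> select process, status, thread#, sequence#, block#
     from v$managed_standby
     order by process, thread#, sequence#;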

FAL_SERVER parameter is defined as a list of TNS entries that exist on the standby server and point to the primary and/or any of the standby databases.

FAL_CLIENT is the TNS entry of the gap-requesting database that the receiver of the request needs so that the archive process on the FAL server can connect back to the sender of the request. FAL_CLIENT is optional; Oracle Support recommends not setting it. Instead, the DB_UNIQUE_NAME of the sender of the request is used to match it to a LOG_ARCHIVE_DEST_n.

However, if you do set FAL_CLIENT in your standby database, you need to make sure the TNS entry you use is the same as the one used in the LOG_ARCHIVE_DEST_n of the FAL server. Otherwise you will receive the ORA-16401 error. The following example demonstrates this case.

TNS entry for the primary database:

PSDL1I_sitka=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sitka)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = psdl1i.sitka)
)
)

The standby TNS entry:
PSDL1I_sanfords=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sanfords)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME=PSD_STANDBY.sanfords)
)
)

Both entries are in the tnsnames.ora file on both database servers, primary and standby. On the standby server, there is also a fal_client entry, which points to the same database as the standby TNS entry:

PSDL1I_fal_client =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sanfords)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME=PSD_STANDBY.sanfords)
)
)

The FAL parameters in the standby database are set as:

fal_client  = PSDL1I_fal_client
fal_server = PSDL1I_sitka

When there is a redo gap, the primary ships the missing log sequences to the destination defined by:

log_archive_dest_2='service="PSDL1I_sanfords", ASYNC NOAFFIRM db_unique_name="PSD_STANDBY" valid_for=(all_logfiles,primary_role)'

The standby can also do its own gap analysis and request logs from the FAL_SERVER. The FAL server, in this case the primary, will try to honor that request, but when the standby resolves a gap this way the primary sees a different client, fal_client=PSDL1I_fal_client, even though PSDL1I_fal_client and PSDL1I_sanfords point to the same standby.

In the standby alert log file we get the error below:

Thu Dec 29 13:40:47 2011
ARC2: Archive log rejected (thread 1 sequence 188) at host ‘PSDL1I_sanfords’
FAL[server, ARC2]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance PSDL1I – Archival Error. Archiver continuing.

In the ARCH trace file we see the error message below:

*** 2011-12-29 13:40:47.903
Error 16401 creating standby archive log file at host ‘PSDL1I_sanfords’
kcrrwkx: unknown error:16401

To fix this problem we need to change FAL_CLIENT on the standby to PSDL1I_sanfords, the same TNS entry as the one used in the primary’s LOG_ARCHIVE_DEST_2. The FAL request from the standby then matches the FAL request created by regular LOG_ARCHIVE_DEST_n redo shipping, so a second FRB (FAL Request Block) is not created and the ORA-16401 error is avoided.
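A minimal sketch of that corrective change on the standby (FAL_CLIENT is a dynamic parameter, so no restart is required):

SQL> alter system set fal_client='PSDL1I_sanfords' scope=both sid='*';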
 

 

Fixed Objects Statistics and why they are important

Fixed objects are the x$ tables and their indexes. The v$ performance views in Oracle are defined on top of x$ tables (for example V$SQL and V$SQL_PLAN). Since v$ views can appear in SQL statements like any other user table or view, it is important to gather optimizer statistics on these tables to help the optimizer generate good execution plans. However, unlike other database tables, dynamic sampling is not automatically used for SQL statements involving x$ tables when optimizer statistics are missing. The optimizer uses predefined default values for the statistics if they are missing. These defaults may not be representative and could potentially lead to a suboptimal execution plan, which could cause severe performance problems in your system. It is for this reason that we strongly recommend you gather fixed object statistics.

Fixed object statistics must be gathered manually. They are not created or maintained by the automatic statistics gathering job. You can collect statistics on fixed objects using DBMS_STATS.GATHER_FIXED_OBJECTS_STATS:

BEGIN
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/

The DBMS_STATS.GATHER_FIXED_OBJECTS_STATS procedure gathers the same statistics as DBMS_STATS.GATHER_TABLE_STATS except for the number of blocks. Blocks is always set to 0, since the x$ tables are in-memory structures only and are not stored on disk. You must have the ANALYZE ANY DICTIONARY or SYSDBA privilege, or the DBA role, to update fixed object statistics.

Because of the transient nature of the x$ tables, it is important that you gather fixed object statistics when there is a representative workload on the system. This may not always be feasible on a large system due to the additional resources needed to gather the statistics. If you can’t do it during peak load, you should do it after the system has warmed up and the three key types of fixed object tables have been populated:

Structural data – for example, views covering datafiles, controlfile contents, etc.
Session based data – for example, v$session, v$access, etc.
Workload data – for example, v$sql, v$sql_plan, etc.

It is recommended that you re-gather fixed object statistics if you do a major database or application upgrade, implement a new module, or make changes to the database configuration. For example, if you increase the SGA size, all of the x$ tables that contain information about the buffer cache and shared pool may change significantly, such as the x$ tables underlying v$buffer_pool or v$shared_pool_advice.
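One way to confirm that fixed object statistics exist, and when they were last gathered, is to check DBA_TAB_STATISTICS for rows of type FIXED TABLE; a minimal sketch:

select count(*) as fixed_tables_analyzed, max(last_analyzed) as last_gathered
from   dba_tab_statistics
where  object_type = 'FIXED TABLE'
and    last_analyzed is not null;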