- 1. What happens during begin backup mode?
The following activities happen during begin backup mode:
- DBWn checkpoints the tablespace (writes out all dirty blocks as of a given SCN)
- CKPT stops updating the Checkpoint SCN field in the datafile headers and begins updating the Hot Backup Checkpoint SCN field instead.
- LGWR begins logging full images of changed blocks the first time a block is changed after being written by DBWn.
Those three actions are all that is required to guarantee consistency once the file is restored and recovery is applied. By freezing the Checkpoint SCN, any subsequent recovery on that backup copy of the file will know that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, and that it should look for the archivelog containing that SCN, and apply recovery starting there.
Note that during hot backup mode, checkpoints to datafiles are not suppressed. Only the main Checkpoint SCN flag is frozen, but CKPT continues to update a Hot Backup Checkpoint SCN in the file header.
By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in hot backup mode will also be present in the archivelogs in case they are ever needed for a recovery. Most of the Oracle user community knows that Oracle generates a greater volume of redo during hot backup mode. This is the result of Oracle logging full images of changed blocks in these tablespaces. Normally, Oracle writes a change vector to the redologs for every change, but it does not write the whole image of the database block. Full block image logging during backup eliminates the possibility that the backup will contain unresolvable split blocks. To understand this reasoning, you must first understand what a split block is.
Typically, Oracle database blocks are a multiple of O/S blocks. For example, most Unix filesystems have a default block size of 512 bytes, while Oracle’s default block size is 8k. This means that the filesystem stores data in 512 byte chunks, while Oracle performs reads and writes in 8k chunks or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it is making this copy, your process is reading in O/S-block-sized increments. If DBWn happens to be writing a DB block into the datafile at the same moment that your script is reading that block’s constituent O/S blocks, your copy of the DB block could contain some O/S blocks from before the database performed the write, and some from after. This is a split block. By logging the full image of the changed block to the redologs, Oracle guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the archivelogs.
All of these mechanisms exist for the benefit of the backup copy of the files and any future recovery. They have very little effect on the current datafiles and the database being backed up. Throughout the backup, server processes read the datafiles and DBWn writes to them, just as when a backup is not taking place. The only difference in the open database files is the frozen Checkpoint SCN and the active Hot Backup Checkpoint SCN.
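As a sketch, a user-managed hot backup of a single tablespace looks like the following (the tablespace name, datafile path, and backup destination are illustrative, not from the source):

```sql
-- Freeze the Checkpoint SCN in the datafile headers and start
-- full block image logging for this tablespace.
ALTER TABLESPACE users BEGIN BACKUP;

-- Copy the datafiles with an O/S utility while the database stays open, e.g.:
-- $ cp /u01/oradata/users01.dbf /backup/users01.dbf

-- Thaw the Checkpoint SCN and stop full block image logging.
ALTER TABLESPACE users END BACKUP;
```

The database stays open and writable throughout; only the file header bookkeeping described above changes.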
2. How to recover when an online redolog file is corrupted? There are many scenarios.
Oracle Database can operate in two modes:
1. Archive log mode (The archive logs are generated and online backup is possible)
2. No archive log mode (No archive log generated and cold backup is only possible)
The online redolog file is the physical file that is updated with redo information (changed data in the database) from the redolog buffer by the LGWR background process. The online redo log files can be in different states:
1. INACTIVE
2. ACTIVE
3. CURRENT
4. UNUSED
CURRENT: – The online redolog file to which LGWR is currently writing redo information.
ACTIVE: – An online redolog file that LGWR has finished writing but that is still required for crash recovery (its checkpoint is not yet complete).
INACTIVE: – An online redolog file that LGWR has finished writing and that is no longer needed for crash recovery.
UNUSED: – An online redolog file that has never been used by LGWR to write redo information.
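These states can be checked directly from the data dictionary; a minimal sketch:

```sql
-- Show the state of each online redolog group.
SELECT group#, sequence#, status, archived
FROM   v$log;
```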
Now we are going to discuss how to recover the database when an online redolog file is corrupted for some reason. There are various scenarios; I have divided them by archivelog mode and noarchivelog mode, and will try to cover most of them.
No Archivelog mode:-
a) Online redolog file in INACTIVE and UNUSED state:-
This does not require crash recovery. Clear the logfile group, or drop and recreate it manually:
Commands:-
SQL> ALTER DATABASE CLEAR LOGFILE GROUP <group#>;
SQL> ALTER DATABASE DROP LOGFILE GROUP <group#>;
Delete the redolog file from its physical location, then re-add the group:
SQL> ALTER DATABASE ADD LOGFILE GROUP <group#> ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 10M;
b) Online redolog file in ACTIVE state:-
This would require crash recovery. Issue a checkpoint.
Command:-
SQL>ALTER SYSTEM CHECKPOINT;
If the checkpoint fails, perform a complete recovery of the database from the last full backup. The steps to perform complete recovery are explained in the complete recovery section.
c) Online redolog file in CURRENT state:-
LGWR will be terminated in this case, and you have to perform a complete recovery from a cold backup.
Archivelog mode:-
a) Online redolog file in UNUSED state:-
The steps are the same as for noarchivelog mode.
b) Online redolog file in INACTIVE state:-
Check whether the redolog file has been archived or not.
Command:-
SQL>Select group#, archived from v$log;
If yes:- follow the same steps as for noarchivelog mode.
If no:- issue the command below.
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group#>;
After this command there will be a gap in the archivelog sequence, so take a fresh full backup of the database immediately.
c) Online redolog file in ACTIVE state:-
Perform a checkpoint; if it fails, perform a complete recovery.
Command:-
SQL>ALTER SYSTEM CHECKPOINT;
d) Online redolog file in CURRENT state:-
LGWR will be terminated in this case. We will have to perform incomplete recovery.
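A minimal sketch of the incomplete recovery in this case (the exact steps depend on your backup strategy; restore the datafiles from backup first):

```sql
STARTUP MOUNT;
-- Apply redo up to, but not including, the lost current log.
RECOVER DATABASE UNTIL CANCEL;
-- Open with a new incarnation of the online redo logs.
ALTER DATABASE OPEN RESETLOGS;
```

Any transactions in the lost current log that were never archived are unrecoverable, which is why this is incomplete recovery.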
3. Difference between adpatch & opatch?
adpatch is the utility used to apply Oracle Applications (E-Business Suite) patches.
opatch is the utility used to apply database interim patches.
4. Difference between interim patches and patchset?
An interim patch fixes specific bugs in the software/database and is applied with the opatch utility, which updates files in the ORACLE_HOME. A patchset upgrades an existing Oracle release and is applied with the Oracle Universal Installer; a patchset also upgrades database objects. Example: upgrading the database to a new patchset release (10.2.0.3 to 10.2.0.4).
5. How to check the Opatch version?
Step 1: Set ORACLE_HOME
Step 2: go to the $ORACLE_HOME/OPatch directory
Step 3: execute:
$ ./opatch version
6. How to Apply OPatch?
$ORACLE_HOME/OPatch/opatch apply
7. What is the difference between opatch napply and opatch apply? What is a CPU patch?
Ans: opatch apply applies a single patch, whereas opatch napply applies multiple patches at once.
Critical Patch Updates are the primary means of releasing security fixes for Oracle products to customers with valid support contracts. They are released on the Tuesday closest to the 17th day of January, April, July and October.
8. Location of Opatch Log files?
$ORACLE_HOME/cfgtoollogs/opatch
9. Can we apply OPatch if the Oracle Inventory is missing?
No.
10. How to apply OPatch if the Oracle Inventory is missing?
The Oracle Inventory needs to be re-generated.
11. Why is the Oracle Inventory required at the time of applying OPatch?
The Oracle Inventory is the place where all the information about an Oracle Home is stored. OPatch needs to read this information to apply patches.
12. Location of Inventory file?
The location of the inventory is defined in the oraInst.loc file. The path for this file is provided while installing the Oracle Home.
If we don't supply any path, the file will be present at the central location: /etc/oraInst.loc (Linux) or /var/opt/oracle/oraInst.loc (Solaris).
$ cat oraInst.loc
13. Can we change local inventory location at the time of Installation?
We can create a local inventory at any location while installing an Oracle Home. We need to use the -invPtrLoc variable along with the runInstaller command and give the location of oraInst.loc. Otherwise the default path assumed for oraInst.loc is /etc/oraInst.loc.
./runInstaller -invPtrLoc /slot/ems2029/appmgr/apps/tech_st/10.1.2/oraInst.loc
14. Difference between local and global inventory?
The global inventory is where all Oracle Homes on the server are registered. These can be various Oracle components such as database, application server, collaboration suite, SOA suite, forms & reports, or Discoverer server.
The inventory inside each Oracle Home is called the local inventory (oracle_home inventory). It holds information for that Oracle Home only.
15. How do I find list of oracle product on machine?
Check the file inventory.xml under ContentsXML in oraInventory. (Note: if you have multiple global inventories on the machine, check all oraInventory directories.)
16. How to re-create Global Inventory if corrupted?
We can recreate the global inventory on the machine using the Universal Installer, attaching each already-installed Oracle Home with the -attachHome option:
$ ./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name" CLUSTER_NODES="{}"
17. How to roll back a patch?
Suppose you have applied the wrong patch, or the patch did not fix the issue; you can roll the patch back.
Example:
$ORACLE_HOME/OPatch/opatch rollback -id <patch_id>
18. How to apply bundle patches?
A bundle patch contains many individual patches for the Oracle Home. We can apply all of them with a single command.
Example: (if you want to apply all patches)
$ORACLE_HOME/OPatch/opatch util NApply -phBaseDir /export/home/oracle/ -invPtrLoc /export/home/oracle/oraInst.loc
NApply -> applies all patches under the given directory to the RDBMS home.
Example: (if you want to apply specific patches)
$ORACLE_HOME/OPatch/opatch util NApply /export/home/oracle/ -id 1,2,3 -skip_subset -skip_duplicate
This will apply patches 1, 2, and 3 from under the <patch_location> directory. OPatch will skip duplicate patches and subset patches (patches in <patch_location> that are subsets of patches already installed in the ORACLE_HOME).
19. How to know which version is the correct version of Opatch for your RDBMS home?
Read Metalink Note: 357221.1
20. How to create the Oracle Home inventory in 10g?
cd $ORACLE_HOME/oui/bin
./runInstaller -attachHome ORACLE_HOME="/export/home/oracle/oracle/db_1/10.2.0" ORACLE_HOME_NAME="Oracle10g_home1"
21) What are the steps involved during SQL statement execution?
Parsing SQL Statements in Oracle
One of the first steps Oracle takes in processing a SQL statement is to parse it. During the parsing phase, Oracle breaks down the submitted SQL statement into its component parts, determines what type of statement it is (Query, DML, or DDL), and performs a series of checks on it. Two important concepts for the DBA to understand are (1) the steps involved in the parse phase and (2) the difference between a hard parse and a soft parse.
The Syntax Check & Semantic Analysis
The first two functions of the parse phase, Syntax Check and Semantic Analysis, happen for each and every SQL statement within the database.
Syntax Check: Oracle checks that the SQL statement is valid. Does it make sense given the SQL grammar documented in the SQL Reference Manual? Does it follow all of the rules for SQL?
Semantic Analysis: This function of the parse phase takes the Syntax Check one step further by checking whether the statement is valid in light of the objects in the database. Do the tables and columns referenced in the SQL statement actually exist in the database? Does the user executing the statement have access to the objects, and are the proper privileges in place? Are there ambiguities in the statement? For example, consider a statement that references two tables emp1 and emp2, where both tables have a column called name. The statement “select name from emp1, emp2 where…” is ambiguous; the query doesn’t know which table to get name from.
Although Oracle performs these as two distinct functions of the parse phase (checking the validity of the SQL statement, then checking the semantics to ensure that the statement can be properly executed), the difference is sometimes hard to see from the user’s perspective. When Oracle reports an error during the parse phase, it doesn’t say “Error within the Syntax Function” or “Error within the Semantics Function”.
For example, the following SQL statement fails with a syntax error:
SQL> select from where 4;
select from where 4
*
ERROR at line 1:
ORA-00936: missing expression
Here is an example of a SQL statement that fails with a semantic error:
SQL> select * from table_doesnt_exist;
select * from table_doesnt_exist
*
ERROR at line 1:
ORA-00942: table or view does not exist
Hard Parse vs. Soft Parse
We now consider the next and one of the most important functions of Oracle’s parse phase. The Oracle database now needs to check in the Shared Pool to determine if the current SQL statement being parsed has already been processed by any other sessions.
If the current statement has already been processed, the parse operation can skip the next two functions in the process: Optimization and Row Source Generation. If the parse phase does, in fact, skip these two functions, it is called a soft parse. A soft parse saves a considerable amount of CPU cycles when running your query. On the other hand, if the current SQL statement has never been parsed by another session, the parse phase must execute ALL of the parsing steps. This type of parse is called a hard parse. It is especially important that developers write and design queries that take advantage of soft parses, so that the parse phase can skip the optimization and row source generation functions, which are very CPU intensive and a point of contention (serialization). If a high percentage of your queries are being hard-parsed, your system will function slowly, and in some cases, not at all.
Oracle uses a piece of memory called the Shared Pool to enable sharing of SQL statements. The Shared Pool is a piece of memory in the System Global Area (SGA) and is maintained by the Oracle database. After Oracle completes the first two functions of the parse phase (Syntax and Semantic Checks), it looks in the Shared Pool to see if that same exact query has already been processed by another session. Since Oracle has already performed the semantic check, it has already determined:
Exactly what table(s) are involved
That the user has access privileges to the tables.
Oracle will now look at all of the queries in the Shared Pool that have already been parsed, optimized, and generated to see if the hard-parse portion of the current SQL statement has already been done.
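Parse behavior can be observed directly in the shared pool; a minimal sketch (the ordering is only one way to spot re-parsed statements):

```sql
-- parse_calls close to executions suggests a statement is being
-- re-parsed on every run instead of being shared (soft parsed).
SELECT sql_id, parse_calls, executions,
       SUBSTR(sql_text, 1, 40) AS sql_text
FROM   v$sql
ORDER  BY parse_calls DESC;
```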
22)Difference between locally managed and dictionary managed tablespace?
– DICTIONARY: Specifies that the tablespace is managed using dictionary tables. This is the default in Oracle8i.
- LOCAL: Specifies that the tablespace is locally managed. This is the default in Oracle9i, with the exception of the SYSTEM tablespace.
Locally Managed Tablespaces: A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables (like sys.uet$, sys.fet$) in the data dictionary (except for special cases such as tablespace quota information).
Dictionary Managed Tablespaces: In a DMT, Oracle uses data dictionary tables to keep track of the free or used status of blocks. When an extent is allocated or freed for reuse, free space is recorded in the SYS.FET$ table, and used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this often leads to contention. These changes generate rollback information because they update tables (like sys.uet$, sys.fet$) in the data dictionary.
Advantages of Locally Managed Tablespaces(LMT) over Dictionary Managed Tablespaces(DMT):
1. Reduced recursive space management
2. Reduced contention on data dictionary tables
3. No rollback generated
4. No coalescing required
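For reference, a locally managed tablespace is created with the EXTENT MANAGEMENT LOCAL clause; a hedged sketch (the tablespace name and file path are illustrative):

```sql
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL    -- bitmap-based extent tracking in the datafile
  UNIFORM SIZE 1M;           -- every extent is a uniform 1 MB
```

AUTOALLOCATE can be used instead of UNIFORM if you want Oracle to choose extent sizes.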
Converting DMT to LMT:
SQL> exec dbms_space_admin.tablespace_migrate_to_local('ts1');
PL/SQL procedure successfully completed.
Converting LMT to DMT:
SQL> exec dbms_space_admin.tablespace_migrate_from_local('ts2');
PL/SQL procedure successfully completed.
23. How to determine what the size of the SGA and PGA should be?
SELECT table_name FROM dict WHERE table_name LIKE '%ADVICE%';
V$SGA_TARGET_ADVICE
V$PGA_TARGET_ADVICE
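A sketch of reading the two advisor views (column names as documented for 10g and later):

```sql
-- Estimated effect on DB time of resizing the SGA.
SELECT sga_size, sga_size_factor, estd_db_time
FROM   v$sga_target_advice;

-- Estimated cache hit percentage for candidate PGA target sizes.
SELECT pga_target_for_estimate, estd_pga_cache_hit_percentage
FROM   v$pga_target_advice;
```

Pick the smallest size beyond which the estimated benefit flattens out.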
24. What are the difference between active, inactive and idle session?
INACTIVE: The session is connected but idle; the client is not currently making a database call (nothing is being done; the server waits for tasks to be assigned by the client).
ACTIVE: The session is currently executing SQL.
SNIPED: The session has exceeded the idle_time limit defined in the user profile. The session remains sniped until the client communicates with the database again, at which point it receives “ORA-02396: exceeded maximum idle time, please connect again” and the session is removed from v$session.
Sniped means the system has killed the session, but the client doesn’t know it yet.
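Session states can be listed from v$session; a minimal sketch:

```sql
-- last_call_et shows seconds since the session's last call,
-- useful for spotting long-idle (candidate INACTIVE) sessions.
SELECT sid, serial#, username, status, last_call_et
FROM   v$session
WHERE  type = 'USER';
```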
25. What are the contents of alert log file?
Oracle’s alert.log chronologically records messages and errors arising from the daily database operation. Also, there are pointers to trace files and dump files.
These messages include:
- startups and shutdowns of the instance
- Messages to the operator console
- Errors causing trace files.
- Create, alter and drop SQL statements on databases, tablespaces and rollback segments.
- Errors when a materialized view is refreshed.
- ORA-00600 (internal) errors.
- ORA-01578 errors (block corruption)
- ORA-00060 errors (deadlocks)
alert.log is a text file that can be opened with any text editor. The directory where it is found can be determined by the background_dump_dest
initialization parameter: select value from v$parameter where name = 'background_dump_dest';
If the background_dump_dest parameter is not specified, Oracle writes the alert.log into the $ORACLE_HOME/rdbms/log directory.
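As a hedged aside, from 11g onward the alert log lives under the Automatic Diagnostic Repository, and its directory is exposed in v$diag_info rather than background_dump_dest:

```sql
-- 11g+: directory containing the text alert log and trace files.
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
```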
26. What is the difference between connection and session?
A connection is a physical communication pathway between a client process and a database instance. A communication pathway is established using available interprocess communication mechanisms or network software. Typically, a connection occurs between a client process and a server process or dispatcher, but it can also occur between a client process and Oracle Connection Manager (CMAN).
A session is a logical entity in the database instance memory that represents the state of a current user login to a database. For example, when a user is authenticated by the database with a password, a session is established for this user. A session lasts from the time the user is authenticated by the database until the time the user disconnects or exits the database application.
A single connection can have 0, 1, or more sessions established on it. The sessions are independent: a commit in one session does not affect transactions in other sessions.
27. How to enable the trace at a session and what you will get from it?
Enabling Trace at Instance Level
Trace can be enabled when the instance is started by adding the following line to the init.ora file
sql_trace = TRUE
This will enable trace for all sessions, including background processes. Note that enabling this parameter may generate large amounts of trace and consume significant system resources.
When the instance is already running, trace can be enabled directly using the ALTER SYSTEM command.
ALTER SYSTEM SET trace_enabled = TRUE;
This will enable trace for all newly created sessions. Currently executing sessions and background processes will be unaffected.
Instance-wide trace can be disabled again using
ALTER SYSTEM SET trace_enabled = FALSE;
Enabling Trace at Session Level:
Trace can be enabled at session level using the command :
ALTER SESSION SET sql_trace = TRUE;
Trace is disabled at session level using
ALTER SESSION SET sql_trace = FALSE;
Trace is enabled at session level using
EXECUTE dbms_session.set_sql_trace (TRUE);
Trace is disabled at session level using
EXECUTE dbms_session.set_sql_trace (FALSE);
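An alternative sketch using the 10046 event, which extends plain sql_trace with bind and wait information (level 12 = binds + waits):

```sql
-- Enable extended SQL trace for the current session.
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the statements to be traced ...

-- Disable it again.
ALTER SESSION SET events '10046 trace name context off';
```

The resulting trace file (in the trace directory discussed above) can be formatted with the tkprof utility.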
28. What is the advantage of RMAN?
- Ability to perform INCREMENTAL backups
- Ability to recover one block of data file.
- Ability to automatically backup CONTROLFILE and SPFILE
- Ability to automatically delete older ARCHIVED REDOLOG files once they have been backed up.
- Ability to perform backup and restore with parallelism.
- Ability to report the files needed for the backup.
- Ability to RESTART a failed backup without starting from the beginning.
- Much faster when compared to other TRADITIONAL backup strategies.
- Unused block compression: In unused block compression, RMAN can skip data blocks that have never been used.
- No redo log overhead
- Corrupt block detection: RMAN checks for the block corruption before taking its backup.
- Maintain backup repository: Backups are recorded in the control file, which is the main repository of RMAN metadata. Additionally, you can store this metadata in a recovery catalog.
- Encrypted backups: RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format.
29. What is instance?
A database instance is a set of memory structures that manage database files. A database is a set of physical files on disk created by the CREATE DATABASE statement. The instance manages its associated data and serves the users of the database.
Every running Oracle database is associated with at least one Oracle database instance. Because an instance exists in memory and a database exists on disk, an instance can exist without a database and a database can exist without an instance.
Instance is a combination of memory structure and process structure. Memory structure is SGA and Process structure is background processes.
30. What is block corruption and how to recover it?
SQL> select * from v$database_block_corruption;
     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
         4       1027          1                  0 ALL ZERO
We can also find these details (file# and block#) in the alert.log file.
We can now recover the corrupted block using the RMAN command:
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 1027;
connect target /
connect catalog rmancatalog/rmancatalog@catalog
run {
allocate channel…;
blockrecover datafile 13 block 443343;
release channel …;
}
Or, if a number of data blocks are corrupted, we can issue a single command that repairs every block listed in v$database_block_corruption:
RMAN> BLOCKRECOVER CORRUPTION LIST;
RMAN restores the block from the last backup and then applies the archivelogs. Restoring one block is faster than restoring the whole datafile, since once the block is found RMAN does not need to restore the rest of the file. Oracle suggests that block media recovery is not useful in cases where the extent of data loss or corruption is not known; in those cases, use datafile recovery instead.
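As a sketch, v$database_block_corruption is populated by a validation run, so a typical sequence (assuming RMAN is connected to the target) is:

```sql
-- Check every block without producing a backup; corrupt blocks
-- are recorded in v$database_block_corruption.
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

-- Then repair whatever the view lists.
RMAN> BLOCKRECOVER CORRUPTION LIST;
```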
31. What are the different mode of dataguard?
Synchronous transport (SYNC) is also referred to as “zero data loss” method because the LGWR is not allowed to acknowledge a commit has succeeded until the LNS can confirm that the redo needed to recover the transaction has been written at the standby site.
The phases of a transaction are:
- The user commits a transaction creating a redo record in the SGA, the LGWR reads the redo record from the log buffer and writes it to the online redo log file and waits for confirmation from the LNS
- The LNS reads the same redo record from the buffer and transmits it to the standby database using Oracle Net Services; the RFS receives the redo at the standby database and writes it to the SRL (standby redo log)
- When the RFS receives a write complete from the disk, it transmits an acknowledgment back to the LNS process on the primary database which in turns notifies the LGWR that the transmission is complete, the LGWR then sends a commit acknowledgment to the user
This setup depends heavily on network performance and can have a dramatic impact on the primary database: network latency feeds directly into response times. The impact can be seen in the wait event “LNS wait on SENDREQ” found in the v$system_event dynamic performance view.
Asynchronous transport (ASYNC) is different from SYNC in that it eliminates the requirement that the LGWR wait for an acknowledgment from the LNS, creating a “near zero” performance impact on the primary database regardless of the distance between the primary and the standby locations. The LGWR continues to acknowledge commit success even if the bandwidth prevents the redo of previous transactions from being sent to the standby database immediately. If the LNS is unable to keep pace and the log buffer is recycled before the redo is sent to the standby, the LNS automatically transitions to reading and sending from the log file instead of the log buffer in the SGA. Once the LNS has caught up, it switches back to reading directly from the buffer in the SGA.
The log buffer hit ratio is tracked via the view X$LOGBUF_READHIST; a low hit ratio indicates that the LNS is reading from the log file instead of the log buffer. If this happens, try increasing the log buffer size.
The drawback with ASYNC is the increased potential for data loss: if a failure destroys the primary database before the transport lag is reduced to zero, any committed transactions that are part of the transport lag are lost. So again, make sure that the network bandwidth is adequate and that you get the lowest latency possible.
The protection modes are:
Maximum Performance: This mode requires ASYNC redo transport, so the LGWR process never waits for acknowledgment from the standby database. Also note that Oracle no longer recommends the ARCH transport method that previous releases used for maximum performance.
Note that you will probably lose data if the primary fails before full synchronization has occurred; the amount of data loss depends on how far behind the standby is in processing the redo.
This is the default mode.
Maximum Availability: Its first priority is to be available; its second priority is zero data loss protection, so it requires the SYNC redo transport. If the standby server is unavailable, the primary waits the time specified in the NET_TIMEOUT parameter before giving up on the standby server and allowing the primary to continue to process. Once the connection has been re-established, the primary automatically resynchronizes the standby database.
When NET_TIMEOUT expires, the LGWR process disconnects from the LNS process, acknowledges the commit, and proceeds without the standby. Processing continues until the current ORL is complete and the LGWR cycles into a new ORL; a new LNS process is started and an attempt to connect to the standby server is made. If it succeeds, the new ORL is sent as normal; if not, LGWR disconnects again until the next log switch. The whole process repeats at every log switch, until (hopefully) the standby database becomes available again. In the background, if any archivelogs have been created during this time, the ARCH process continually pings the standby database until it comes online.
You might have noticed there is a potential loss of data if the primary goes down while the standby database has also been down for a period of time with no resynchronization. This is similar to Maximum Performance, but you do give the standby server a chance to respond using the timeout.
Maximum Protection: The priority for this mode is data protection, even to the point that it will affect the primary database. This mode uses the SYNC redo transport, and the primary will not issue a commit acknowledgment to the application unless it receives an acknowledgment from at least one standby database; basically, the primary will stall and eventually abort, preventing any unprotected commits from occurring. This guarantees complete data protection. In this setup it is advised to have two separate standby databases at different locations with no single point of failure (SPOF); they should not use the same network infrastructure, as that would be a SPOF.
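The protection mode is set and verified with SQL on the primary; a minimal sketch:

```sql
-- Change the protection mode (the database must be mounted;
-- Maximum Protection/Availability require a SYNC standby destination).
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- Confirm the configured mode and the level currently in effect.
SELECT protection_mode, protection_level FROM v$database;
```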
32. What happens during a switchover?
- Notifies the primary database that a switchover is about to occur
- Disconnects all users from the primary database
- Generates a special redo record that signals the End of Redo (EOR)
- Converts the primary database into a standby database
- Once the standby database applies the final EOR record, guaranteeing that no data has been lost, converts the standby database into the primary database
The new standby database (the old primary) starts to receive redo records and continues processing until we switch back again. It is important to remember that both databases receive the EOR record, so both databases know the next redo that will be received.
34. Difference Between full backup and Level-0 backup
A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup. A full backup can never be part of an incremental backup strategy, i.e. it cannot be the parent for a subsequent incremental backup.
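The two backups are taken with nearly identical RMAN commands; a sketch:

```sql
-- Full backup: cannot parent an incremental strategy.
RMAN> BACKUP DATABASE;

-- Level 0: physically the same work, but recorded as an incremental parent.
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- A subsequent level 1 backs up only blocks changed since the level 0.
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```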
35. What is the advantage of datapump over traditional export, import?
1) Impdp/Expdp has self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.
3) Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance. Data Pump operations can take advantage of the server’s parallel processes to read or write multiple data streams simultaneously.
4) In Data Pump, expdp full=y followed by impdp schemas=prod gives the same result as expdp schemas=prod followed by impdp full=y; original export/import does not always exhibit this behavior.
5)Expdp/Impdp access files on the server rather than on the client.
6) Expdp/Impdp operates on a group of files called a dump file set rather than on a single sequential dump file.
7) When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.
8) Expdp/Impdp consumes more undo tablespace than original Export and Import.
9) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility did not compress the data on import even if the table had compression enabled.
10) Datapump has a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations. Datapump allows you to disconnect and reconnect to the session.
Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress.
11) Another neat development is the ability to run impdp over a network link (DB link). This feature eliminates the need for a dedicated filesystem to store expdp dumps: the refresh is done over the DB link.
12) In addition, the REMAP_* parameters in impdp are a blessing. Original exp/imp did not have them, and it was painful to change the tablespace for a schema when duplicating it as a new one.
13) impdp vs imp: impdp provides a good option for replacing existing data via TABLE_EXISTS_ACTION.
14) The real beauty of Data Pump is NETWORK_LINK, which allows you to import data directly into the schema of a remote user without accessing a directory object.
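As a rough illustration of points 11-14 above, the commands below are a sketch only: the schema PROD, directory object DP_DIR, tablespaces, and database link SRCDB are placeholder names, and the directory object and link are assumed to already exist:

```shell
# Conventional export/import of one schema (placeholder names throughout)
expdp system schemas=PROD directory=DP_DIR dumpfile=prod.dmp logfile=prod_exp.log
impdp system directory=DP_DIR dumpfile=prod.dmp \
      remap_schema=PROD:PROD_COPY \
      remap_tablespace=PROD_TS:COPY_TS \
      table_exists_action=replace

# Network-mode import: pulls data directly over the db link SRCDB,
# no dump file or filesystem staging needed
impdp system schemas=PROD network_link=SRCDB remap_schema=PROD:PROD_COPY
```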
36. Difference between hot backup and RMAN backup?
Hot backup
- User-managed backup.
- Every step has to be done manually.
- Copies all allocated blocks (full datafile copies).
- Generates more redo.
- Cannot report corrupted block information.
RMAN Backup
- Server-managed backup.
- Everything is done by the Oracle server.
- Only used blocks are copied; incrementals copy only changed blocks.
- Backups can be parallelized.
- Can report corrupted block information.
- Generates less redo.
37. What is the difference between shared and dedicated server?
A dedicated server process, which services only one user process
A shared server process, which can service multiple user processes
Difference between dedicated and shared server configuration:
In order to communicate with an Oracle database, users need a program such as SQL*Plus which can issue SQL statements, and a few processes which can execute those statements. These processes are divided into user processes, server processes and background processes. A user process runs a user application like SQL*Plus. A server process manages the user process's requests: it executes SQL statements and returns results to the user process. Background processes are the core of the Oracle database and handle overall database operations.
Dedicated server connection:
A dedicated server process has a one-to-one relationship with a user process. A user connected with a dedicated server connection has dedicated resources: whether or not the user is doing work, the connection remains allocated to that user. Thus, if a user is idle and resources are scarce, that user may block another user from connecting or having requests executed, forcing the other user's work to queue. This configuration is preferable when you have plenty of resources, or for administrative users such as SYS and for batch jobs.
Shared server connection:
The shared server architecture consists of a listener process, dispatcher processes, a request queue, shared server processes and response queues. The network listener process listens for user process requests. If a request requires a dedicated server process, the listener starts one. If the request can be assigned to a shared server process, it is forwarded to a dispatcher process. A shared server configuration requires at least one dispatcher process. The dispatcher places the request on the request queue, which is created in the SGA and shared by all dispatcher processes. One of the free shared server processes picks up the request from the request queue. After processing the request, the shared server process places the result on a response queue; each dispatcher process has its own response queue in the SGA. The dispatcher knows which user process placed the request, so the response is returned to that user process.
With a shared server, the user connection is shared: users connect through dispatchers, and when a user is idle their resources can be used by other users, which lessens the load on the system. This suits environments where users are mostly idle, such as an order entry system.
38. How to configure shared server in oracle database?
SHARED_SERVERS Initialization Parameter: SHARED_SERVERS is the only parameter necessary to enable a shared server. Its value specifies how many shared servers are to be started.
Use following SQL statement to enable shared server.
Alter System Set SHARED_SERVERS = 5;
This command starts five shared servers. The Oracle database will also start a dispatcher process on its own, even though we did not configure one.
To configure Shared server at database startup, use following setting in initialization parameter file
SHARED_SERVERS=5;
You can query V$SHARED_SERVER view to confirm changes.
MAX_SHARED_SERVERS Initialization Parameter
The Oracle database keeps the number of shared servers at least equal to the SHARED_SERVERS parameter value, so SHARED_SERVERS defines the lower limit of shared server processes. PMON starts as many shared servers as needed, but you can restrict the upper limit by using MAX_SHARED_SERVERS.
For example, use following setting in parameter file to restrict maximum shared Server to 10;
MAX_SHARED_SERVERS= 10;
SHARED_SERVER_SESSIONS Initialization Parameter
Use this parameter to limit the maximum number of concurrent shared server sessions. The value of this parameter should be less than the SESSIONS initialization parameter. You may use this parameter to make sure that resources are always available for a few dedicated server sessions.
DISPATCHERS Initialization Parameter:
The DISPATCHERS parameter is used to create dispatcher processes. For example, use the following setting in the parameter file to create 3 dispatchers:
DISPATCHERS="(PROTOCOL=TCP)(DISPATCHERS=3)"
Query V$DISPATCHER view to find out details about currently running dispatchers.
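For instance, a quick monitoring sketch (column lists abridged; both are standard dynamic performance views):

```sql
-- One row per shared server process
SELECT name, status, requests FROM v$shared_server;

-- One row per dispatcher process
SELECT name, status, messages FROM v$dispatcher;
```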
Client Side Setting for Shared Server:
When the Oracle database has been configured as a shared server, by default all connections are handled by the shared server. The net service entry on the client can be modified to request a shared or dedicated server: use SERVER=SHARED or SERVER=DEDICATED.
39. What is the difference between undo tablespace and temporary tablespace?
Temporary tablespaces:
Temporary tablespaces are logical units within an Oracle database that are used for sort operations.
For e.g.: if you join two large tables, and Oracle cannot do the sort in memory (see SORT_AREA_SIZE initialization parameter), space will be allocated in a temporary tablespace for doing the sort operation. Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, Select DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, Sort-Merge joins, etc.
Oracle also uses temporary tablespaces to store global temporary tables.
In addition to SELECT statements with clauses like DISTINCT, ORDER BY, UNION, INTERSECT etc, Oracle might use temporary tablespace for creating indexes and analyzing objects.
The DBA should assign a temporary tablespace to each user in the database to prevent them from allocating sort space in the SYSTEM tablespace.
It is important to note that TEMPORARY TABLESPACES have volatile or non-persistent data. You don’t need to include temporary tablespaces in your backup policy and can just create them after you have performed a recovery.
Undo tablespaces:
Undo tablespaces contain before images of data that can be used to roll back a particular transaction.
Broadly, undo tablespaces are used to:
- Roll back transactions when a ROLLBACK statement is issued
- Recover the database
- Provide read consistency
- Analyze data as of an earlier point in time by using Flashback Query
- Recover from logical corruptions using Flashback features
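A minimal sketch of managing undo (the tablespace and file names here are placeholders):

```sql
-- Create a second undo tablespace and switch the instance to it
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/orcl/undotbs02.dbf' SIZE 500M;

ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2;

-- How long (in seconds) to try to retain before images for read
-- consistency and Flashback Query
ALTER SYSTEM SET UNDO_RETENTION = 900;
```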
40. What is PCT free and PCT used block?
The PCTFREE parameter specifies the percentage of space in each data block that is reserved for growth resulting from updates of rows in that data block.
This parameter has a default value of 10 percent. For example, the value of the PCTFREE parameter is specified as 20 in a CREATE TABLE statement. This indicates that inserts to the block should stop as soon as free space drops to 20 percent or less. The free space thereafter can only be used for updates.
The PCTUSED parameter represents the minimum percentage of the used space that the Oracle server tries to maintain for each data block of the table. This parameter has a default value of 40 percent.
When a data block is filled to the limit determined by the value of the PCTFREE parameter, Oracle considers the block unavailable for the insertion of new rows. The block is unavailable for the insertion of new rows until the percentage filled by the data of that block falls below the value of the PCTUSED parameter. Until the percentage of the block falls below the value of the PCTUSED parameter, Oracle uses the free space of the data block only for updating the rows contained in the data block.
For example, if PCTUSED is defined as 40 percent, the block is reused for inserts as soon as utilization drops to less than 40 percent. Inserts continue until utilization reaches 80 percent and the cycle repeats.
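To make the example concrete, a sketch of setting both parameters at table creation (the table and column names are illustrative only):

```sql
-- Inserts stop once the block is 80% full (PCTFREE 20, reserved for updates);
-- the block rejoins the freelist when usage drops below 40% (PCTUSED 40)
CREATE TABLE emp_history (
  empno NUMBER,
  notes VARCHAR2(2000)
) PCTFREE 20 PCTUSED 40;
```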
FREE LIST:
Each segment has one or more freelist (in the segment header) that keeps track of data blocks under the high-water mark.
When records are inserted into the segment, Oracle examines the freelist to determine which data blocks have available space to store the new rows. When rows are deleted (or updated) and the used space in the data block falls below the threshold value (PCTUSED), the data block is returned to the freelist.
42. What happens during an index rebuild?
When you use an online index rebuild Oracle has to create a log of all changes you make to the table while the rebuild is running. Once the log is set up, Oracle creates a new index using a read-consistent copy of the data in the table, then copies the logged changes into this index, and finishes off by tidying up the data dictionary, dropping temporary segments, and dropping the log.
This process locks the table twice, by the way: once while getting started and creating the log, and then again when finishing off and dropping the log. If you read Richard Foote's blog regularly you will know that these locks are only held for a short time, but they can be hard to acquire and may lead to a randomly long delay in the rebuild.
For my purposes, though, the critical feature is the object that holds the logging information. This is created as an index organized table (IOT) with a name like sys_journal_NNNNN where NNNNN is the object_id of the index you are trying to rebuild. (You can check this by enabling sql_trace just before doing a rebuild.)
One of the problems with an online rebuild is that if you run it against a table with high DML activity and then cancel it (Ctrl+C), the journal table does not get dropped, because SMON is unable to clear it (due to the high DML on the base table). As a result you will not be able to drop or rebuild the index (even offline), and you will get an error message that an online rebuild is in progress. You have to wait until SMON clears it. The lesson is to use the online rebuild option judiciously and try never to cancel it.
The reason rebuilding an index can be so beneficial is that the index is much smaller after the rebuild, so the overhead of reading it is substantially reduced. A smaller index means the same amount of index data can be read with fewer I/Os, and fewer I/Os mean better performance.
During the creation or rebuilding of an index online, Oracle still requires two associated table locks on the base table at the start and end of indexing process. If there’s an active transaction on the base table at the time one of these locks is required, the indexing process will still hang as its done previously until all these prior active transactions have completed. No change so far.
However, if the indexing process has been locked out and subsequent transactions relating to the base table start afterwards, these transactions will no longer in turn be locked out by the indexing table locks and are able to complete successfully. The indexing process no longer impacts other concurrent transactions on the base table, it will be the only process potentially left hanging while waiting to acquire its associated lock resource.
This means it may not be quite so “risky” to urgently introduce that new index or rebuild that troublesome index during core business hours due to the reduced locking implications introduced in 11g.
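The operation described above is invoked as follows (the index name is a placeholder):

```sql
-- DML on the base table can continue during the rebuild; the table is
-- locked only briefly at the start and end of the operation
ALTER INDEX emp_name_idx REBUILD ONLINE;
```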
43. What is the content of the sqlnet.ora, tnsnames.ora and listener.ora files?
In its most basic form, Oracle uses three files (listener.ora, tnsnames.ora & sqlnet.ora) for network configuration.
LISTENER.ORA
The listener.ora file contains server side network configuration parameters. It can be found in the ORACLE_HOME/network/admin directory on the server. Here is an example of a listener.ora file from Windows 2000:
# LISTENER.ORA Network Configuration File: C:\Oracle\817\network\admin\listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
)
)
(DESCRIPTION =
(PROTOCOL_STACK =
(PRESENTATION = GIOP)
(SESSION = RAW)
)
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 2481))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\Oracle\817)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = ORCL.WORLD)
(ORACLE_HOME = C:\Oracle\817)
(SID_NAME = ORCL)
)
)
After the listener.ora file is amended, the listener should be restarted or reloaded to allow the new configuration to take effect:
C:> lsnrctl stop
C:> lsnrctl start
C:> lsnrctl reload
TNSNAMES.ORA
The tnsnames.ora file contains client side network configuration parameters. It can be found in the ORACLE_HOME/network/admin or ORACLE_HOME/net80/admin directory on the client. This file will also be present on the server if client style connections are used on the server itself. Basically the listener.ora file is used on a server, and it helps make the connections between client requests and the database instance(s) on the server. The tnsnames.ora file is used on the client, and it points to a particular listener and service on a server.
Here is an example of a tnsnames.ora file from Windows 2000:
# TNSNAMES.ORA Network Configuration File: C:\Oracle\817\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
ORCL.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ORCL.WORLD)
)
)
INST1_HTTP.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = SHARED)
(SERVICE_NAME = MODOSE)
(PRESENTATION = http://admin)
)
)
EXTPROC_CONNECTION_DATA.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
SQLNET.ORA
The sqlnet.ora file contains client side network configuration parameters. It can be found in the ORACLE_HOME/network/admin or ORACLE_HOME/net80/admin directory on the client. This file will also be present on the server if client style connections are used on the server itself. Here is an example of an sqlnet.ora file from Windows 2000:
# SQLNET.ORA Network Configuration File: C:\Oracle\817\network\admin\sqlnet.ora
# Generated by Oracle configuration tools.
SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
NAMES.DEFAULT_DOMAIN = WORLD
If a tnsnames.ora file is to work properly, we also need the sqlnet.ora file. The sqlnet.ora file contains information that the user process (the user connection) needs to find exactly where a database server is located. The user process normally reads the tnsnames.ora file and then the sqlnet.ora file, thereby finding the complete database server information (database name, etc.), and then talks to the listener on the database server. The listener then sees to it that a connection is opened to the database.
44. Does the dictionary cache contain dynamic views?
The data dictionary in an Oracle database is the component which keeps information about the database. It is a collection of read-only tables maintained by the Oracle database; the data inside these tables is modified when DDL commands are issued. The data is kept in an internal format and should not be modified by any user. The data dictionary holds detailed information about schema object structure, space allocation, user privileges, auditing information, default values of columns, etc.
Structure of the Data Dictionary:
The data dictionary is organized into base tables and user-accessible views. These tables and views are stored in the SYSTEM tablespace and owned by the Oracle user SYS. As the SYSTEM tablespace is always online, the data dictionary is always available.
As the data in the base tables is in an internal format, users cannot and should not directly access information from these tables. Oracle provides views to fetch this information; these views decode the internal data and present it in a useful form. The views change with every new Oracle release.
Oracle also contains a set of dynamic performance views. These views are continuously updated and contain the latest information about the database. They are built on in-memory structures and are available even when the database is not open. Information from dynamic performance views is frequently accessed during database recovery. For example, in the mount stage the DBA_TABLESPACES view is not available because the database is not open, but you can query the V$TABLESPACE dynamic view to get information about tablespaces.
Dynamic performance views start with V$ and GV$, while other data dictionary views start with a USER, ALL or DBA prefix.
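A small sketch of the difference (assuming a single-instance database):

```sql
-- With the database mounted but not open, dictionary views are
-- unavailable, but V$ views built on the controlfile already work:
SELECT name FROM v$tablespace;

-- Only after ALTER DATABASE OPEN do dictionary views become usable:
SELECT tablespace_name FROM dba_tablespaces;
```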
Oracle Dictionary Cache:
The dictionary cache is an area in the shared pool which caches the data dictionary. Oracle requires the data dictionary frequently, so most parts of it are cached in the dictionary cache, and Oracle uses this cached information during query parsing.
The dictionary cache is also called the row cache because the data inside it is maintained as rows instead of buffers.
45. What is the differences between static and dynamic listener registration?
1) The relevant one is that – if you want to remotely startup the database, you need to have a statically registered connect string, otherwise – since the database isn’t up – it wouldn’t be registered.
2) Use dynamic registration for all application connects – all of them. Use static only if and when you need to remotely start the database over the network.
RMAN Concepts:
1. RMAN keeps a record of all target databases that are registered with the catalog database.
2. When RMAN is invoked to back up or recover a database, it verifies the target database against the information stored in the RMAN catalog.
3. If it matches, RMAN establishes two server sessions with the target database.
4. A channel is a remote procedure call to the target database, using a PL/SQL interface, that performs the backup and recovery operations.
5. RMAN compiles the command and executes it on the target database, reporting a summary of actions. The files written during a backup are called backup pieces. To back up to a tape drive/tape library we need a media manager.
6. Execution of every RMAN backup command produces a backup set, which is a logical grouping of one or more physical files called backup pieces.
7. An RMAN backup is a content backup, not a mirror image: only used blocks are backed up, so RMAN backups are smaller and much faster compared to other backups.
Advantages:
1. Centralized backup and recovery across the enterprise's databases.
2. Incremental backups: only those blocks which have changed since the last backup are copied.
3. Corrupt block identification.
4. Automated scheduled backups.
5. Crosscheck: after a backup completes, the records in the repository can be checked against the backup files that actually exist.
6. Can generate logs on backup and recovery.
7. Can report the summary information.
8. Can store preconfigured scripts which can be run at any time.
9. A tag can be assigned to a backup file or backup set and used at restore time instead of the filename.
10. Tablespace point in time recovery is possible .
11. Can create a duplicate database of the production database for developers/QA.
12. Multiple destinations can be mentioned.
13. Cleanup procedures can be implemented after a successful RMAN backup, e.g. getting rid of obsolete archived log files.
Pre –requirements:
1. Every Target must be registered in the catalog database
2. In the catalog database, a separate tablespace must be created to hold the catalog schema.
3. A user should be created in the catalog database with connect, resource, recovery_catalog_owner privileges.
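The three pre-requirements above might be set up roughly as follows (the user, password, tablespace, and file names are all placeholders):

```sql
-- In the catalog database:
CREATE TABLESPACE rcat_ts
  DATAFILE '/u01/oradata/catdb/rcat01.dbf' SIZE 200M;

CREATE USER rman_cat IDENTIFIED BY rman_cat
  DEFAULT TABLESPACE rcat_ts QUOTA UNLIMITED ON rcat_ts;

GRANT connect, resource, recovery_catalog_owner TO rman_cat;

-- Then, from the RMAN client:
--   rman catalog rman_cat/rman_cat@catdb target /
--   RMAN> CREATE CATALOG;
--   RMAN> REGISTER DATABASE;
```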
Block Change Tracking
As data blocks change, the background process change tracking writer (CTWR) tracks all the changes made to the data blocks in a private area of memory. When we commit these changes, the block change tracking information is copied to a shared area in the large pool called the CTWR buffer. The CTWR process then writes the information from the CTWR RAM buffer to the change tracking file. Without BCT enabled, or in cases when the change tracking information cannot be used, RMAN has to read every block in the datafile during an incremental backup. Each block contains the system change number (SCN) of its last modification, and the block is copied only if its SCN is higher than or equal to the base backup SCN. To be precise, the base SCN is a checkpoint SCN that was made right before the backup. Since usually only a handful of blocks change between incremental backups, RMAN does a lot of useless work reading blocks not required for the backup.
Block change tracking provides a way to identify the blocks required for backup without scanning the whole datafile. After that RMAN need only read blocks that are really required for this incremental backup.
How to Enable/Disable Block change tracking?
Enable
Syntax: SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING; (syntax when OMF is enabled)
Or
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'os_file_name';
The USING FILE 'os_file_name' syntax allows you to define the location of the change tracking file on the OS; you can omit it when OMF (Oracle Managed Files) is enabled.
By default the change tracking file is created as an Oracle Managed File (OMF) in the location pointed to by the DB_CREATE_FILE_DEST parameter.
Disable
SQL>alter database disable block change tracking;
How to monitor the status of block change tracking?
SQL> SELECT filename, status, bytes FROM v$block_change_tracking;
How can we view the size of the CTWR buffer?
SQL> SELECT * FROM v$sgastat WHERE name LIKE 'CTWR%';
Incremental Backups using RMAN:
Incremental Backups: RMAN incremental backups back up only data file blocks that have changed since the last backup. One can make incremental backups at the database level, tablespace level or datafile level.
The main theme of incremental backup is to back up only those data blocks that have been changed since previous backup.
The reasons for going incremental backup as a part of our backup strategy are:
1. To reduce the amount of time needed for taking backups.
2. To save network bandwidth when backing up over network.
3. For better backup performance.
4. To reduce backup size for Non-Archive log databases, instead of going with full database backup one can go with incremental backups.
Incremental Backup Algorithm:
Data is stored in data blocks. Each data block in a data file contains a system change number (SCN), which is the SCN at which the most recent changes were made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it with the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.
Level 0 and Level 1 Backups:
Level 0: Level 0 is just like a full backup which is the base of the subsequent incremental backups, copies all blocks containing data, backing the data file up into a backup set.
The main difference between the full backup and incremental level 0 backups is , full backup is never included in the incremental backup strategy.
Level 1: With reference to the previous level 0 backup or level 1 backup it backs up the modified data blocks.
Types of Level 1 incremental backups:
Differential Backup: Backs up all blocks changed after the most recent incremental backup at level 1 or 0
Cumulative backup: Backups up all blocks changed after the most recent incremental backup at level 0
By default incremental backups are differential.
RMAN> Backup Incremental level 1 Database; (example of differential backup)
RMAN> Backup Incremental level 1 cumulative database; (example of cumulative backup: blocks changed since the most recent level 0)
Incremental Backup Strategy:
Choose a backup scheme according to an acceptable MTTR (mean time to recover). For example, you can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 is taken weekly, and a differential level 1 is taken daily.
How often should we take full or incremental backups?
When deciding how often to take full or incremental backups, a good rule of thumb is to take a new level 0 backup whenever 50% or more of the data has changed. If the rate of change to your database is predictable, then you can observe the size of your incremental backups to determine when a new level 0 is appropriate.
The following query will help you to determine the rate of changed blocks.
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
Making Incremental Backups : Backup Incremental
After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a level 0 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 0 DATABASE;
This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or level 0 backup:
BACKUP INCREMENTAL LEVEL 1
TABLESPACE SYSTEM
DATAFILE 'ora_home/oradata/trgt/tools01.dbf';
This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since the most recent level 0 backup.
BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE
TABLESPACE users;
ASM:
Automatic Storage Management (ASM) is oracle’s logical volume manager, it uses OMF (Oracle Managed Files) to name and locate the database files. It can use raw disks, filesystems or files which can be made to look like disks as long as the device is raw. ASM uses its own database instance to manage the disks, it has its own processes and pfile or spfile, it uses ASM disk groups to manage disks as one logical unit.
Cluster Synchronization Services Requirements for ASM
The Cluster Synchronization Services (CSS) daemon provides cluster services for ASM, communication between the ASM and database instances, and other essential services. When DBCA creates a database, the CSS daemon is usually started and configured to start upon restart. If DBCA did not create the database, then you must ensure that the CSS daemon is running before you start the ASM instance.
ASM registers its name and disks with the RDBMS via the cluster synchronization service (CSS). This is why the oracle cluster services must be running, even if the node and instance is not clustered. The ASM must be in mount mode in order for a RDBMS to use it and you only require the instance type in the parameter file.
ASM must be in mount mode in order for a RDBMS to use it and you only require the instance type in the parameter file.
The INSTANCE_TYPE initialization parameter is the only required parameter in the ASM instance parameter file. The ASM* parameters use suitable defaults for most environments. You cannot use parameters with names that are prefixed with ASM* in database instance parameter files.
ASM Parameters:
ASM_DISKGROUPS
The ASM_DISKGROUPS initialization parameter specifies a list of the names of disk groups that an ASM instance mounts at startup. Oracle ignores the value that you set for ASM_DISKGROUPS when you specify the NOMOUNT option at startup or when you issue the ALTER DISKGROUP ALL MOUNT statement. The default value of the ASM_DISKGROUPS parameter is a NULL string. If the parameter value is NULL or is not specified, then ASM does not mount any disk groups.
SQL> ALTER SYSTEM SET ASM_DISKGROUPS = ‘CONTROLFILE, DATAFILE, LOGFILE, STANDBY’;
ASM_DISKSTRING
The ASM_DISKSTRING initialization parameter specifies a comma-delimited list of strings that limits the set of disks that an ASM instance discovers. The discovery strings can include wildcard characters. Only disks that match one of the strings are discovered. The same disk cannot be discovered twice.
The default value of the ASM_DISKSTRING parameter is a NULL string. A NULL value causes ASM to search a default path for all disks in the system to which the ASM instance has read and write access. The default search path is platform-specific. Refer to your operating system specific installation guide for more information about the default search path.
ASM cannot use a disk unless all of the ASM instances in the cluster can discover the disk through one of their own discovery strings. The names do not need to be the same on every node, but all disks must be discoverable by all of the nodes in the cluster. This may require dynamically changing the initialization parameter to enable adding new storage.
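For example (a sketch: the discovery path is platform-specific and '/dev/rdsk/*' is only a placeholder):

```sql
ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/*';
```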
ASM_POWER_LIMIT
The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing. The default value is 1 and the range of allowable values is 0 to 11 inclusive. A value of 0 disables rebalancing. Higher numeric values enable the rebalancing operation to complete more quickly, but might result in higher I/O overhead.
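The default can also be overridden for an individual operation (the disk group name is a placeholder):

```sql
-- Rebalance this disk group with power 5 instead of ASM_POWER_LIMIT
ALTER DISKGROUP data REBALANCE POWER 5;
```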
ASM_PREFERRED_READ_FAILURE_GROUPS
The ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter value is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance. This parameter is generally used only for clustered ASM instances and its value can be different on different nodes.
The ASM_PREFERRED_READ_FAILURE_GROUPS parameter setting is instance specific. This parameter is only valid for clustered ASM instances and the default value is NULL.
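A hypothetical setting for the first node of a two-site cluster (the disk group, failure group, and SID names are all assumptions):

```sql
-- Instance +ASM1 prefers to read from its local failure group SITEA
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITEA' SID = '+ASM1';
```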
DB_CACHE_SIZE
You do not need to set a value for the DB_CACHE_SIZE initialization parameter if you use automatic memory management.
The setting for the DB_CACHE_SIZE parameter determines the size of the buffer cache. This buffer cache is used to store metadata blocks. The default value for this parameter is suitable for most environments.
DIAGNOSTIC_DEST
The DIAGNOSTIC_DEST initialization parameter specifies the directory where diagnostics for an instance are located. The value for an ASM instance is of the form:
diagnostic_dest/diag/asm/db_name/instance_name
For an ASM instance, db_name defaults to +asm.
INSTANCE_TYPE
The INSTANCE_TYPE initialization parameter must be set to ASM for an ASM instance. This is a required parameter and cannot be modified. The following is an example of the INSTANCE_TYPE parameter in the initialization file:
INSTANCE_TYPE = ASM
LARGE_POOL_SIZE
You do not need to set a value for the LARGE_POOL_SIZE initialization parameter if you use automatic memory management.
The setting for the LARGE_POOL_SIZE parameter is used for large allocations. The default value for this parameter is suitable for most environments.
PROCESSES
You do not need to set a value for the PROCESSES initialization parameter if you use automatic memory management.
The PROCESSES initialization parameter affects ASM, but generally you do not need to modify the setting. The default value provided is usually suitable.
REMOTE_LOGIN_PASSWORDFILE
The REMOTE_LOGIN_PASSWORDFILE initialization parameter specifies whether the ASM instance checks for a password file. This parameter operates the same for ASM and database instances.
SHARED_POOL_SIZE
You do not need to set a value for the SHARED_POOL_SIZE initialization parameter if you use automatic memory management.
The setting for the SHARED_POOL_SIZE parameter determines the amount of memory required to manage the instance. The setting for this parameter is also used to determine the amount of space that is allocated for extent storage. The default value for this parameter is suitable for most environments.
The main parameters in the instance parameter file will be:
- instance_type – there are two types, RDBMS or ASM (it must be ASM for an ASM instance)
- instance_name – the name of the ASM instance
- asm_power_limit – the default power of disk rebalancing; the default is 1 and the range is 0 to 11 (11 being the fastest)
- asm_diskstring – the locations where Oracle will look during disk discovery
- asm_diskgroups – the disk groups that are mounted automatically when the ASM instance starts
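Putting these together, a minimal ASM parameter file might look like the following sketch (all values are illustrative assumptions):

```
# init+ASM.ora -- minimal sketch, values are assumptions
INSTANCE_TYPE             = ASM
ASM_POWER_LIMIT           = 1
ASM_DISKSTRING            = '/dev/oracleasm/disks/*'
ASM_DISKGROUPS            = DATA, FRA
REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE
```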
Disk Group Attributes
AU_SIZE
The contents of ASM files are stored in a disk group as a set, or collection, of data extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU). To accommodate increasingly larger files, ASM uses variable size extents.
The ASM coarse striping size is always equal to the disk group AU size, but the fine striping size always remains 128 KB in any configuration. The AU size is determined at disk group creation time with the allocation unit size (AU_SIZE) disk group attribute. The allowed values are 1, 2, 4, 8, 16, 32, and 64 MB.
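Because AU_SIZE can only be set at creation time, it is specified in the CREATE DISKGROUP statement, roughly like this (disk group name and device paths are assumptions):

```sql
-- Create a disk group with a 4 MB allocation unit (au_size is fixed at creation)
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/disk1', '/dev/rdsk/disk2'
  ATTRIBUTE 'au_size' = '4M';
```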
COMPATIBLE.ASM
The value for the disk group COMPATIBLE.ASM attribute determines the minimum software version for an ASM instance that uses the disk group. This setting also determines the format of the data structures for the ASM metadata on the disk. The format of the file contents is determined by the database instance. For ASM in Oracle Database 11g, 10.1 is the default setting for the COMPATIBLE.ASM attribute.
COMPATIBLE.RDBMS
The value for the disk group COMPATIBLE.RDBMS attribute determines the minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group. For example, if the database COMPATIBLE initialization parameter is set to 11.1.0, then COMPATIBLE.RDBMS can be set to any value between 10.1 and 11.1 inclusively. For ASM in Oracle Database 11g, 10.1 is the default setting for the COMPATIBLE.RDBMS attribute.
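A sketch of advancing both attributes on an existing disk group (the disk group name is an assumption; note that advancing compatibility is irreversible):

```sql
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'   = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';
```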
DISK_REPAIR_TIME
Restoring the redundancy of an ASM disk group after a transient disk path failure can be time consuming. This is especially true if the recovery process requires rebuilding an entire ASM failure group. ASM fast mirror resync significantly reduces the time to resynchronize a failed disk in such situations. When you replace the failed disk, ASM can quickly resynchronize the ASM disk extents.
The default DISK_REPAIR_TIME attribute value is an estimate that should be adequate for most environments. However, make sure that the attribute value is set to the amount of time that you think is necessary in your environment to fix any transient disk error and that you are willing to tolerate reduced data redundancy.
ALTER DISKGROUP dg01 SET ATTRIBUTE 'disk_repair_time' = '4.5h';
ALTER DISKGROUP dg01 SET ATTRIBUTE 'disk_repair_time' = '270m';
ASM Background Processes
The following background processes are an integral part of Automatic Storage Management:
- ARBn performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these processes running at a time, named ARB0, ARB1, and so on.
- ASMB runs in a database instance that is using an ASM disk group. ASMB communicates with the ASM instance, managing storage and providing statistics. ASMB can also run in an ASM instance: it runs there when the ASMCMD cp command runs, or when the database instance first starts if the SPFILE is stored in ASM.
- GMON maintains disk membership in ASM disk groups.
- MARK marks ASM allocation units as stale following a missed write to an offline disk. This essentially tracks which extents require resync for offline disks.
- RBAL runs in both database and ASM instances. In the database instance, it does a global open of ASM disks. In an ASM instance, it also coordinates rebalance activity for disk groups.
Why ASM?
- Adding or removing storage to/from ASM is very easy and does not require careful planning (as is the case with volume managers and file systems). After adding storage, ASM will automatically “rebalance” storage so all disks will be utilized equally. This again increases performance.
- Provides automatic load balancing over all the available disks, thus reducing hot spots in the file system.
- Prevents fragmentation of disks, so you don’t need to manually relocate data to tune I/O performance
- Uses redundancy features available in intelligent storage arrays
- ASM provides striping and mirroring (fine and coarse grain – see below)
- ASM works on all major operating systems so it is platform independent.
- Using disk group makes configuration easier, as files are placed into disk groups
- The storage system can store all types of database files.
- ASM is free!
- Improves performance.
There are three forms of mirroring:
- External redundancy – doesn’t have failure groups and thus is effectively a no-mirroring strategy
- Normal redundancy – provides two-way mirroring of all extents in a disk group, which requires at least two failure groups
- High redundancy – provides three-way mirroring of all extents in a disk group, which requires at least three failure groups
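A hypothetical normal-redundancy disk group with two failure groups, one per disk controller (all names and device paths are illustrative):

```sql
CREATE DISKGROUP dg_mirr NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/rdsk/c1d1', '/dev/rdsk/c1d2'
  FAILGROUP controller2 DISK '/dev/rdsk/c2d1', '/dev/rdsk/c2d2';
```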
- Not Managed by ASM – Oracle binaries, alert log, trace files, init.ora or password file
- Managed by ASM – Datafiles, SPFILES, redo log files, archived log files, RMAN backup set / image copies, flash recovery area.
RMAN backup
RMAN is the main supported way to back up files stored in ASM, because O/S file utilities cannot read ASM disks directly.
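A minimal RMAN sketch (the datafile name shown is illustrative):

```
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DATAFILE '+DATA/orcl/datafile/users.259.700000001';
```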
ASM Striping
ASM stripes files across all the disks within the disk group, thus increasing performance. Space is allocated in units called allocation units, and the coarse stripe size equals the AU size. ASM offers two types of striping, and the type used depends on the database file type:
- Coarse striping – used for datafiles and archive logs (1 MB stripes)
- Fine striping – used for online redo logs, controlfiles, and flashback files (128 KB stripes)
ASM file management has a number of benefits over traditional third-party LVMs:
- performance
- redundancy
- ease of management
- security
Performance Issues:
1. What are wait events? Describe the function of any five wait events.
a. db file scattered read – occurs during multiblock I/O, the classic full table scan scenario.
b. log file sync – sessions wait for LGWR to flush the log buffer at commit. It is aggravated by small redo log files and very frequent commits, so for bulk work commit in batches (for example, after every 10,000 rows updated).
c. db file sequential read – single-block reads, typically while an index scan is doing heavy physical I/O. Check V$SYSSTAT for physical I/O, then tune the SQL to do less I/O (more selective WHERE clauses), make sure all objects are analyzed, and redistribute I/O across disks.
d. buffer busy waits – sessions contend for the same buffer in the buffer cache. Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) and analyze the contention from SYS.V$BH.
e. latch free – latches are short-duration memory locks protecting SGA structures; waits indicate contention, for example on the shared pool or library cache latches.
f. log buffer space – sessions wait for space in the redo log buffer. Increase the LOG_BUFFER parameter or move log files to faster disks.
- 2. What are the contents of TKPROF output? Give a short description.
The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information.
TKPROF reports each statement executed with the resources it has consumed, the number of times it was called, and the number of rows it processed. This information lets you easily locate the statements that are using the greatest resources. With experience or with baselines available, you can assess whether the resources used are reasonable given the work done.
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
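As a sketch, a trace can be produced in the problem session like this, and then formatted with TKPROF (the trace file name and sort options below are illustrative):

```sql
-- Enable SQL trace in the problem session, run the statement, then disable it
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... run the problem statement here ...
ALTER SESSION SET SQL_TRACE = FALSE;
```

The resulting trace file from the diagnostic trace directory is then formatted with, for example: tkprof orcl_ora_12345.trc report.txt sys=no sort=prsela,exeela,fchela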
Things to look out for in the trace include:
•When tracing lots of statements at once, such as batch processes, you can quickly discard those statements which have acceptable CPU times. It’s often better to focus on the statements that are taking most of the CPU time.
•Inefficient statements are mostly associated with a high number of block visits. The query column indicates block visits for read consistency, including all query and subquery processing. The current column indicates visits not related to read consistency, including segment headers and blocks that are going to be updated.
•The number of blocks read from disk is shown in the disk column. Since disk reads are slower than memory reads you would expect this value to be significantly lower than the sum of the query and current columns. If it is not you may have an issue with your buffer cache.
•Locking problems and inefficient PL/SQL loops may lead to high cpu/elapsed values even when block visits are low.
•Multiple parse calls for a single statement imply a library cache issue.
- 3. What steps will you take when a query is too slow?
We will verify whether it was running fine before, and from when it started running slow.
1. Check whether any database-related changes have happened (such as a new PSU applied, an upgrade, or invalid objects). The same queries with the same data can run very slowly in a newer version after an upgrade from a lower version; in that case you may have hit a database CBO bug. Contact Oracle Support for more details.
2. Are the statistics fine? Check for any massive data loads done during the past night, whether there were any code changes, when the tables were last analyzed, whether the tables in the query have proper indexes, and whether the tables, indexes, and other objects have recent statistics. Avoid database functions and full table scans as much as possible, create partitions if necessary for faster access, use MERGE statements for huge updates and deletes, and so on. Check whether database statistics were gathered after the data load; a simple way to gather statistics is DBMS_STATS.GATHER_TABLE_STATS.
3. I/O issues: sometimes queries are very slow due to excessive I/O.
Look for mechanisms to avoid disk I/O and make the physical reads and writes more efficient.
First find the statements with the largest amounts of physical and logical reads; a simple way to get them is:
For Physical Reads:
Select disk_reads, sql_text from v$sqlarea where disk_reads > 10000 order by disk_reads desc;
For Logical Reads:
Select Buffer_gets, sql_text from v$sqlarea where buffer_gets > 200000 order by Buffer_gets desc;
4. Read hit ratio: for example, 98.99%. A hit ratio below 90-95% is a sign of poor indexing or excessive physical I/O.
Check the Shared Pool Size:
It is important to look at the shared_pool_size for proper sizing. With a greater amount of procedures, packages and triggers being utilized with Oracle, the shared_pool_size makes up a much greater portion of the Oracle SGA. This is the memory allocated for the library and data dictionary cache. If the shared_pool_size is set too low then you will not get the full advantage of your DB_BLOCK_BUFFERS.
How to find the data dictionary cache miss ratio?
Select sum(gets) "Gets", sum(getmisses) "Misses", (1-(sum(getmisses)/(sum(gets)+ sum(getmisses)))) * 100 "HitRate"
From gv$rowcache;
Gets Misses HitRate
11000 558 96.5
This is a good ratio and does not require any further action.
How to find library Cache Hit Ratio?
select sum(pins) Executions, sum(pinhits) “Execution Hits”, ((sum(pinhits)/sum(pins)) * 100)phitrat, sum(reloads)Misses, ((Sum(pins)/(sum(pins) + sum(reloads))) * 100) hitrat from gv$librarycache;
5. Check whether there is any object lock on that particular table or whether the session is being blocked.
6. Check the execution plan. Is it the same as before? Is it using an index?
7. Tune the SQL query using SQL Tuning Advisor and review Oracle's recommendations.
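One way to check the current execution plan is DBMS_XPLAN (the sql_id shown is a placeholder):

```sql
-- Plan of the last statement executed in this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- Plan for a specific cursor by sql_id (placeholder value)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('9babjv8yq8ru3', NULL));
```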
- What happens during one SQL execution?
1. Syntax and semantic parsing are done.
2. The execution plan is created by the query optimizer, which reads the optimizer statistics (exposed in dba_tab_statistics).
3. The statement is executed.
Simple Oracle transaction
1. User requests a connection to oracle
2. A new dedicated server process is started for the user
3. User executes a statement to insert data in to a table
4. Oracle checks the user’s privileges; it first checks the library cache (cache hit) for the information and, if not found, retrieves it from disk.
5. Oracle checks to see if the SQL statement has been parsed before (library cache). If it has, this is called a soft parse; otherwise the code has to be compiled – a hard parse.
6. Oracle creates a private SQL area in the user’s session’s PGA
7. Oracle checks to see if the data is in the buffer cache, otherwise perform a read from the data file
8. Oracle will then apply row-level locks where needed to prevent others changing the row (select statements are still allowed on the row)
9. Oracle then writes the change vectors to the redo log buffer
10. Oracle then modifies the row in the data buffer cache
11. The user commits the transaction making it permanent; the row-level locks are released
12. The log writer process immediately writes out the changed data in the redo log buffers to the online redo log files, in other words the change is now recoverable.
13. Oracle informs the user process that the transaction was completed successfully
14. It may be sometime before the data buffer cache writes out the change to the data files.
Note: if the user’s transaction was an update, the before-update image of the row would have been written to the undo buffer cache; this would be used if the user rolls back the change or if another user runs a select on that data before the new update is committed.
- How does the block transition happen during rollback between the undo tablespace and the database buffer cache?
The undo tablespace stores the before image. Oracle uses rollback (undo) segment information to undo any uncommitted transactions, and rollback segment blocks are cached in the database buffers like any other blocks.
The undo data generated during a transaction is kept in memory for some time. If there is no space left in memory, this undo data is flushed to undo segments in the undo tablespace, where it resides for at least the period defined by the UNDO_RETENTION init.ora parameter. This undo data is what is used when rolling back a transaction. Note that redo log files are never read when rolling back a transaction: redo logs and archive logs are read only during recovery of the database. At all other times data is written to the redo log files but never read from them. So, to roll back a transaction, the undo data saved in the undo tablespace is required.
- Can we change the listener port number 1521? If yes, how?
Yes, we can change it in the listener.ora file. First check whether the new port is free, for example:
netstat -lnap | grep 3872
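A sketch of the relevant listener.ora entry (the host name and new port are illustrative); after editing, reload the listener with lsnrctl reload or stop and start it, and update the database LOCAL_LISTENER parameter if needed so instances register on the new port:

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1522))
    )
  )
```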
- What happens during statistics gathering?
All the table and index information gets updated in dba_tab_statistics.
• Flush database monitoring info from the SGA to the dba_tab_modifications view.
• Gather statistics for tables whose cumulative modifications, compared to dba_tables.num_rows, exceed the modification threshold percentage.
• Gather statistics for tables that have never been analyzed (last_analyzed is NULL).
• Gather statistics for tables that do not have monitoring enabled, and afterwards enable monitoring for them so that there is data in dba_tab_modifications the next time statistics are gathered.
This feature monitors DML operations – insert/update/delete (and truncate) – on monitored tables.
Table monitoring can be enabled on a table by executing the following statement:
alter table TABLE_NAME monitoring;
The advantage of this feature comes when gathering statistics. Based on the dba_tab_modifications view, we can gather statistics only for tables that have more than X% changes (compared to num_rows in user_tables) since the last statistics gathering. There is no reason to re-analyze tables that do not have enough changes.
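As a sketch, enabling monitoring and gathering statistics for one table might look like this (the schema and table names are assumptions):

```sql
ALTER TABLE scott.emp MONITORING;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- also gather statistics on the table's indexes
END;
/
```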
Where are the statistics stored?
Statistics are stored into the Oracle Data Dictionary, in tables owned by SYS. Views are created on these tables to retrieve data more easily.
These views are prefixed with DBA_ or ALL_ or USER_. For ease of reading, we will use DBA_% views, but ALL_% views or USER_% views could be used as well.
- CPU utilization is high on a server. What would be your action to reduce the utilization?
http://orachat.com/cpu-consumption-oracle/
1. Verify the observation of high CPU consumption on the affected server and collect top and vmstat data for a period of time (from 30 minutes up to a few hours).
2. Find which processes are using the most CPU (using the top command).
This command is useful because it shows at a glance the overall CPU statistics and the processes consuming the most CPU (ranked in descending order of CPU usage).
3. If the top CPU consumers are Oracle processes, then find out which database sessions correspond to these processes (using the ps command).
The ps command shows the accumulated CPU time for each process as well as the full name for the process associated with the process ID (PID).
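Once the top OS PID is known, it can be mapped to a database session roughly like this (&os_pid is a substitution placeholder for the PID from top/ps):

```sql
SELECT s.sid, s.serial#, s.username, s.sql_id, s.status
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  p.spid = '&os_pid';
```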
- 9. What are the db file sequential read and db file scattered read events?
db file sequential read waits: A sequential read wait occurs within an Oracle database when a single block is read. A single read is most commonly an index probe by ROWID into an individual table, or the access of an index block. Sequential reads are single-block reads, as opposed to multiblock (scattered) reads.
db file scattered read waits: Scattered read waits occur when multiblock I/O is invoked. When the Oracle database performs a full table scan or sort operation, multiblock reads are automatically invoked.
Solutions to Physical Read Waits :
When we have identified the objects that experience the physical read waits, we can use Statspack to extract the SQL associated with the waits and take the following actions to correct the problem. These corrective actions are presented in the order in which they are most likely to be effective, and some may not apply to your environment.
Tune the SQL: This is the single most important factor in reducing disk waits. If an SQL statement can be tuned to reduce disk I/O (for example, by using an index to remove an unnecessary large-table full-table scan), then the amount of disk I/O and associated waits are dramatically reduced. Other SQL tuning might include:
Change table join order: For sequential read waits, the SQL may be tuned to change the order that the tables are joined (often using the ORDERED hint)
Change indexes: You can tune the SQL by adding function-based indexes or using an INDEX hint to make the SQL less I/O-intensive by using a more selective index.
Change table join methods: Often, nested loop joins have fewer I/O waits than hash joins, especially for sequential reads. You can change table join methods with SQL hints (USE_NL, for example). If you are not yet using Oracle9i with pga_aggregate_target, you can change the propensity for hash join by adjusting the hash_area_size parameter.
Re-schedule contentious SQL: After you have identified the regular trends of repeating disk waits, you can often reschedule the execution of the SQL at another time, thereby relieving the physical waits.
Re-analyze schema using dbms_stats – In some cases, stale or non-representative statistics generated by the dbms_utility.analyze_schema package can cause suboptimal SQL execution plans, resulting in unnecessary disk waits. The solution is to use the dbms_stats package to analyze your schema. Also, note that if column data values are skewed adding histograms may also be necessary.
Distribute disk I/O across more spindles: Sometimes disk channel contention is responsible for physical read waits, which will show up in your disk monitor tool (iostat, EMC Symmetrics Manager, and so on). If you experience disk waits as a result of hardware contention, you can stripe the offending objects across multiple disk spindles by reorganizing the object and using the MINEXTENTS and NEXT parameters to stripe the object across multiple data files on multiple disks or use volume manager or I/O subsystem provided striping mechanisms.
Use the KEEP pool: For reducing scattered reads, many experts recommend implementing the KEEP pool. In the Oracle Magazine article "Advanced Tuning with Statspack", the author notes that small-table full-table scans should be placed in the KEEP pool to reduce scattered read waits.
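A sketch of sizing the KEEP pool and assigning a small, frequently scanned table to it (the size and object names are assumptions):

```sql
-- Allocate a KEEP buffer pool, then pin a small lookup table's blocks in it
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;
ALTER TABLE scott.lookup_codes STORAGE (BUFFER_POOL KEEP);
```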
Increase the data buffer cache size: Obviously, the more data blocks we keep in RAM, the smaller the probability of read waits.