APPS - how to apply patch HOT (without enabling maintenance mode)
Wednesday, May 18, 2011

Use the following option when calling adpatch: options=hotpatch

EXAMPLE:

adpatch defaultsfile=/u02/app/applmgr/11.5/admin/SIDNAME/def.txt \
        logfile=all_5162862.log \
        patchtop=/copy/APPS_PATCHES/2006/002/5162862 \
        driver=u5162862.drv \
        workers=4 \
        interactive=yes \
        options=novalidate,hotpatch

In the above example we:
- apply patch# 5162862 using the u5162862.drv driver
- use 4 workers in interactive mode
- skip validating pre-reqs (novalidate)
- skip enabling maintenance mode (hotpatch)

NOTE: you can safely drop defaultsfile from the call if you don't have one created.
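The adpatch call is easier to keep consistent across patches if you script it. A minimal sketch (the wrapper, its variable names, and the dry-run echo are my own; the patch number, paths, and options are the ones from the example above):

```shell
#!/bin/sh
# Sketch: assemble the adpatch command from variables so the hotpatch
# option is easy to toggle. Values below are from the example above;
# adjust for your own system. The command is only echoed (dry run) --
# remove the echo and run it yourself when you are ready.

patch_no=5162862
patch_top=/copy/APPS_PATCHES/2006/002/$patch_no
workers=4
opts="novalidate,hotpatch"   # hotpatch = do not require maintenance mode

cmd="adpatch logfile=all_${patch_no}.log patchtop=$patch_top driver=u${patch_no}.drv workers=$workers interactive=yes options=$opts"

echo "$cmd"
```

Changing `opts` back to just `novalidate` (or dropping it) gives you the normal maintenance-mode behavior again.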
clone ORACLE_HOME 10gR2 to another host
Here's a step-by-step process to clone a 10gR2 ORACLE_HOME to another identical server. Since the host names are different, it's advisable to follow this procedure any time you copy an ORACLE_HOME to another server, even if the directory structure is exactly the same.

##
## First detach ORACLE_HOME from the central inventory
##

atlas.10GR2-> pwd
/u01/app/oracle/oraInventory/ContentsXML
atlas.10GR2-> grep "HOME NAME" *
inventory.xml:<HOME NAME="OraDb10g_home1" LOC="/u01/app/oracle/product/10.2.0/db_1" .../>
atlas.10GR2->

## detach OraDb10g_home1 home (get directory from above grep results) ##

atlas.10GR2-> $ORACLE_HOME/oui/bin/runInstaller -detachhome ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /var/opt/oracle/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'DetachHome' was successful.

## verify using grep ##
## NOTE now it says REMOVED="T" ##

atlas.10GR2-> grep "HOME NAME" *
inventory.xml:<HOME NAME="OraDb10g_home1" LOC="/u01/app/oracle/product/10.2.0/db_1" ... REMOVED="T"/>
atlas.10GR2->

##
## re-register ORACLE_HOME
##
## MAKE SURE ALL PROCS ARE DOWN - it does a relink
##

atlas.10GR2-> ps -ef | grep oracle
  oracle  4824  4822  0   Oct 08  pts/2   0:00 -ksh
  oracle  4822  4819  0   Oct 08  ?       0:00 /usr/lib/ssh/sshd
  oracle  5352  4864  0 16:57:33  pts/3   0:00 ps -ef
  oracle  4862  4859  0   Oct 08  ?       0:01 /usr/lib/ssh/sshd
  oracle  4864  4862  0   Oct 08  pts/3   0:00 -ksh
  oracle  5353  4864  0 16:57:33  pts/3   0:00 -ksh

## run clone script ##

cd $ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME="/u01/app/oracle/product/10.2.0/db_1" ORACLE_HOME_NAME="OraDb10g_home1"

Here's a sample output of the above command:

atlas.10GR2-> perl clone.pl ORACLE_HOME="/u01/app/oracle/product/10.2.0/db_1" ORACLE_HOME_NAME="OraDb10g_home1"
./runInstaller -silent -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1" "ORACLE_HOME_NAME=OraDb10g_home1" -noConfig -nowait
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2007-10-09_04-59-54PM. Please wait ...
Oracle Universal Installer, Version 10.2.0.3.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.

You can find a log of this install session at:
/u01/app/oracle/oraInventory/logs/cloneActions2007-10-09_04-59-54PM.log
.................................................................................................... 100% Done.

Installation in progress (Tue Oct 09 17:00:23 PDT 2007)
................................................................................. 81% Done.
Install successful

Linking in progress (Tue Oct 09 17:00:45 PDT 2007)
Link successful

Setup in progress (Tue Oct 09 17:02:33 PDT 2007)
Setup successful

End of install phases. (Tue Oct 09 17:02:43 PDT 2007)
WARNING: The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root script to run
/u01/app/oracle/product/10.2.0/db_1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts

The cloning of OraDb10g_home1 was successful.
Please check '/u01/app/oracle/oraInventory/logs/cloneActions2007-10-09_04-59-54PM.log' for more details.
atlas.10GR2->

## after above clone completes run ROOT.SH as ROOT ##

atlas.10GR2-> su
Password:
# /u01/app/oracle/product/10.2.0/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it?
(y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#

##
## Verify inventory can be read by OPATCH
##

atlas.10GR2-> /u01/app/oracle/product/10.2.0/db_1/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.3.2

Oracle interim Patch Installer version 10.2.0.3.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/10.2.0/db_1
Central Inventory : /u01/app/oracle/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 10.2.0.3.2
OUI version       : 10.2.0.3.0
OUI location      : /u01/app/oracle/product/10.2.0/db_1/oui
Log file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch2007-10-09_17-09-19PM.log

Lsinventory Output file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2007-10-09_17-09-19PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (2):

Oracle Database 10g                                          10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 2                    10.2.0.3.0
There are 2 products installed in this Oracle Home.
Interim patches (22) :

Patch 6121268 : applied on Wed Sep 26 20:02:34 PDT 2007
   Created on 11 Jun 2007, 05:26:11 hrs PST8PDT; bugs fixed: 6121268
Patch 6121267 : applied on Wed Sep 26 20:02:27 PDT 2007
   Created on 12 Jun 2007, 02:58:15 hrs PST8PDT; bugs fixed: 6121267
Patch 6121266 : applied on Wed Sep 26 20:02:21 PDT 2007
   Created on 12 Jun 2007, 02:57:08 hrs PST8PDT; bugs fixed: 6121266
Patch 6121264 : applied on Wed Sep 26 20:02:15 PDT 2007
   Created on 12 Jun 2007, 02:55:35 hrs PST8PDT; bugs fixed: 6121264
Patch 6121263 : applied on Wed Sep 26 20:02:08 PDT 2007
   Created on 12 Jun 2007, 02:54:23 hrs PST8PDT; bugs fixed: 6121263
Patch 6121261 : applied on Wed Sep 26 20:02:02 PDT 2007
   Created on 11 Jun 2007, 01:57:45 hrs PST8PDT; bugs fixed: 6121261
Patch 6121260 : applied on Wed Sep 26 20:01:56 PDT 2007
   Created on 11 Jun 2007, 00:43:23 hrs PST8PDT; bugs fixed: 6121260
Patch 6121258 : applied on Wed Sep 26 20:01:45 PDT 2007
   Created on 12 Jun 2007, 08:36:08 hrs PST8PDT; bugs fixed: 6121258
Patch 6121257 : applied on Wed Sep 26 20:01:39 PDT 2007
   Created on 12 Jun 2007, 01:54:53 hrs PST8PDT; bugs fixed: 6121257
Patch 6121250 : applied on Wed Sep 26 20:01:32 PDT 2007
   Created on 11 Jun 2007, 21:47:03 hrs PST8PDT; bugs fixed: 6121250
Patch 6121249 : applied on Wed Sep 26 20:01:17 PDT 2007
   Created on 11 Jun 2007, 21:46:24 hrs PST8PDT; bugs fixed: 6121249
Patch 6121248 : applied on Wed Sep 26 20:01:11 PDT 2007
   Created on 11 Jun 2007, 08:52:17 hrs PST8PDT; bugs fixed: 6121248
Patch 6121247 : applied on Wed Sep 26 20:01:00 PDT 2007
   Created on 10 Jun 2007, 23:58:00 hrs PST8PDT; bugs fixed: 6121247
Patch 6121246 : applied on Wed Sep 26 20:00:55 PDT 2007
   Created on 12 Jun 2007, 08:53:32 hrs PST8PDT; bugs fixed: 6121246
Patch 6121245 : applied on Wed Sep 26 19:59:57 PDT 2007
   Created on 10 Jun 2007, 23:05:51 hrs PST8PDT; bugs fixed: 6121245
Patch 6121244 : applied on Wed Sep 26 19:59:42 PDT 2007
   Created on 10 Jun 2007, 23:01:41 hrs PST8PDT; bugs fixed: 6121244
Patch 6121243 : applied on Wed Sep 26 19:59:27 PDT 2007
   Created on 10 Jun 2007, 22:16:48 hrs PST8PDT; bugs fixed: 6121243
Patch 6121242 : applied on Wed Sep 26 19:59:21 PDT 2007
   Created on 11 Jun 2007, 21:43:02 hrs PST8PDT; bugs fixed: 6121242
Patch 6121183 : applied on Wed Sep 26 19:59:13 PDT 2007
   Created on 12 Jun 2007, 01:52:07 hrs PST8PDT; bugs fixed: 6121183
Patch 6079591 : applied on Wed Sep 26 19:59:07 PDT 2007
   Created on 14 Jun 2007, 03:24:12 hrs PST8PDT; bugs fixed: 6079591
Patch 5556081 : applied on Wed Sep 26 19:27:35 PDT 2007
   Created on 9 Nov 2006, 22:20:50 hrs PST8PDT; bugs fixed: 5556081
Patch 5557962 : applied on Wed Sep 26 19:27:24 PDT 2007
   Created on 9 Nov 2006, 23:23:06 hrs PST8PDT; bugs fixed: 4269423, 5557962, 5528974

--------------------------------------------------------------------------------

OPatch succeeded.
atlas.10GR2->
source: http://kb.dbatoolz.com/ex/pwpkg.dp?p_key=11&p_what=detail&p_sr_id=2694&p_sc_id=19&p_debug=&p_search=&p_suser=&p_sdate=#top
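For reference, the whole sequence above condenses to four commands. A dry-run sketch of my own that only echoes them, using the home name and path from this walkthrough (change both for your environment):

```shell
#!/bin/sh
# Condensed sketch of the ORACLE_HOME clone sequence above.
# Commands are echoed rather than executed (dry run); review the
# output, then run each step by hand (step 3 as root).

OH=/u01/app/oracle/product/10.2.0/db_1
OH_NAME=OraDb10g_home1

echo "# 1) detach the home from the central inventory"
echo "$OH/oui/bin/runInstaller -detachhome ORACLE_HOME=$OH"
echo "# 2) re-register via clone.pl (it relinks -- all Oracle procs must be down)"
echo "cd $OH/clone/bin && perl clone.pl ORACLE_HOME=\"$OH\" ORACLE_HOME_NAME=\"$OH_NAME\""
echo "# 3) run root.sh as root"
echo "$OH/root.sh"
echo "# 4) verify the inventory is readable"
echo "$OH/OPatch/opatch lsinventory"
```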
Oracle Database 10g: The Top 20 Features for DBAs
Oracle Database 10g: The Top 20 Features for DBAs - (Complete PDF file)
Part 1 - SQL and PL/SQL Features
- Transparent Data Encryption
- XML Query
- Enhanced COMMIT
- Error-Logging Clause
- WRAP Package
- Conditional Compilation
- Unlimited DBMS Output

Part 2 - Manageability Features
- ASM Command Line Tool
- Drop Empty Datafiles
- Direct SGA Access for Hung/Slow Systems
- Redefine a Partition Online
- Block Integrity Checking in Memory, Not Just on Disk
- Online Limit Changes
- Faster Startup
- Manage Multiple Objects in Oracle Enterprise Manager
- Automatic Segment Advisor
- Event-Based Scheduling

Part 3 - Performance Features
- Hung But Not Paralyzed: Memory-Attached SGA Query
- Interruptible SQL Access Advisor
- Check for Tracing Enabled
- Activity Session History
- Optimizer Statistics Management
- Transporting AWR Data
- Compare Periods Report

Part 4 - Data Warehousing and Integration Features
- Partition Change-Tracking Without MV Logs
- Query Rewrite with Multiple MVs
- Transportable Tablespace From Backup
- Quick Partition Split for Partitioned Index-Organized Tables
- LONG to LOB Conversion via Online Redef
- Online Reorg of a Single Partition
- Partition-by-Partition Table Drop

Part 5 - Backup and Availability Features
- Oracle Secure Backup
- Dynamic RMAN Views for Past and Current Jobs
- Dynamic Channel Allocation for Oracle RAC Clusters
- Tempfiles Recovery via RMAN
- Flashback Database/Query Through RESETLOGS
- Flashback Database Restore Points
- Flash Recovery Area View
Create Oracle 10gR2 standby database using Real Time Apply
##
## Configuration Details
##

PRIMARY site
-------------
hostname=zephir
ORACLE_SID=10GR2
DB_UNIQUE_NAME=leftcube

$HOME/.profile
----------------
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=10GR2
export ORACLE_HOSTNAME=zephir.domain.com
umask 022
##export DISPLAY="zephir:0.0"
export PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin:/etc:/usr/openwin/bin
PS1="`/usr/ucb/hostname`.$ORACLE_SID-> "
set -o vi
set history=50
###################################################################
# Oracle Aliases For Trace Files and Alert Files
###################################################################
alias -x pfile="cd /u01/app/oracle/admin/10GR2/pfile"
alias -x bdump="cd /u01/app/oracle/admin/10GR2/bdump"
alias -x cdump="cd /u01/app/oracle/admin/10GR2/cdump"
alias -x udump="cd /u01/app/oracle/admin/10GR2/udump"
###################################################################

STANDBY site
-------------
hostname=atlas
ORACLE_SID=10GR2
DB_UNIQUE_NAME=rightcube

$HOME/.profile
----------------
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=10GR2
export ORACLE_HOSTNAME=atlas.domain.com
umask 022
##export DISPLAY="atlas:0.0"
export PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin:/etc:/usr/openwin/bin
PS1="`/usr/ucb/hostname`.$ORACLE_SID-> "
set -o vi
set history=50
###################################################################
# Oracle Aliases For Trace Files and Alert Files
###################################################################
alias -x pfile="cd /u01/app/oracle/admin/10GR2/pfile"
alias -x bdump="cd /u01/app/oracle/admin/10GR2/bdump"
alias -x cdump="cd /u01/app/oracle/admin/10GR2/cdump"
alias -x udump="cd /u01/app/oracle/admin/10GR2/udump"
###################################################################

##
## CONFIGURE PRIMARY DATABASE
##
## Create a Password File (or verify you have one) ##

ls -l $ORACLE_HOME/dbs/orapw$ORACLE_SID
-rw-r-----   1 oracle   dba         1536 Oct  1 15:00 /u01/app/oracle/product/10.2.0/db_1/dbs/orapw10GR2

## Enable Forced Logging ##

alter database force logging;

## Configure Standby Redo Logs ##
##
## NOTE:
## A standby redo log is required for the maximum protection and maximum
## availability modes, and the LGWR ASYNC transport mode is recommended for
## all databases. Data Guard can recover and apply more redo data from a
## standby redo log than from archived redo log files alone.
##
## The size of the current standby redo log files must exactly match the
## size of the current primary database online redo log files.
##
## Minimally, the configuration should have one more standby redo log file
## group than the number of online redo log file groups on the primary
## database.
##
## The recommended number of standby redo log file groups is:
##
##   (maximum number of logfiles for each thread + 1) * maximum number of threads
##
## For example, if the primary database has 2 log files for each thread and
## 2 threads, then 6 standby redo log file groups are needed on the standby
## database.
##
## Although the standby redo log is only used when the database is running
## in the standby role, Oracle recommends that you create a standby redo
## log on the primary database so that the primary database can switch over
## quickly to the standby role without the need for additional DBA
## intervention.
SQL> col member format a35
SQL> set lines 132
SQL> set trims on
SQL> r
  1* select group#,type,member from v$logfile

    GROUP# TYPE    MEMBER
---------- ------- -----------------------------------
         1 ONLINE  /u01/oradata/10GR2/redo01.log
         1 ONLINE  /u02/oradata/10GR2/redo01.log
         1 ONLINE  /u03/oradata/10GR2/redo01.log
         2 ONLINE  /u01/oradata/10GR2/redo02.log
         2 ONLINE  /u02/oradata/10GR2/redo02.log
         2 ONLINE  /u03/oradata/10GR2/redo02.log
         3 ONLINE  /u01/oradata/10GR2/redo03.log
         3 ONLINE  /u02/oradata/10GR2/redo03.log
         3 ONLINE  /u03/oradata/10GR2/redo03.log

9 rows selected.

SQL> select group#,THREAD#,BYTES,status from v$log;

    GROUP#    THREAD#      BYTES STATUS
---------- ---------- ---------- ----------------
         1          1   52428800 INACTIVE
         2          1   52428800 INACTIVE
         3          1   52428800 CURRENT

## Based on the above information, create 3+1 = 4 new standby groups,
## each with 3 members. You could do 2 members or just 1, it's up to you;
## the main thing is that there need to be 4 standby groups. Multiple
## members simply add redundancy to the setup, they do not increase
## performance.
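The sizing rule quoted earlier, (maximum number of logfiles for each thread + 1) * maximum number of threads, is easy to sanity-check in shell. A sketch of my own using the values from the v$log output above (3 online groups, 1 thread):

```shell
#!/bin/sh
# Sketch: recommended number of standby redo log groups per the formula
#   (logfiles per thread + 1) * number of threads
# Values below come from the v$log query above; plug in your own.

logs_per_thread=3
threads=1

standby_groups=$(( (logs_per_thread + 1) * threads ))
echo "create $standby_groups standby redo log groups"
```

With 2 logs per thread and 2 threads (the example from the note above) the same arithmetic yields 6.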
##
## Note that I name each group's members the same; this underscores that
## these files are multiplexed and are identical.
##
alter database add standby logfile group 4
('/u01/oradata/10GR2/stby04.log',
 '/u02/oradata/10GR2/stby04.log',
 '/u03/oradata/10GR2/stby04.log') size 52428800;

alter database add standby logfile group 5
('/u01/oradata/10GR2/stby05.log',
 '/u02/oradata/10GR2/stby05.log',
 '/u03/oradata/10GR2/stby05.log') size 52428800;

alter database add standby logfile group 6
('/u01/oradata/10GR2/stby06.log',
 '/u02/oradata/10GR2/stby06.log',
 '/u03/oradata/10GR2/stby06.log') size 52428800;

alter database add standby logfile group 7
('/u01/oradata/10GR2/stby07.log',
 '/u02/oradata/10GR2/stby07.log',
 '/u03/oradata/10GR2/stby07.log') size 52428800;

## Verify the standby redo log file groups were created ##

SQL> select group#,type,member from v$logfile;

    GROUP# TYPE    MEMBER
---------- ------- -----------------------------------
         1 ONLINE  /u01/oradata/10GR2/redo01.log
         1 ONLINE  /u02/oradata/10GR2/redo01.log
         1 ONLINE  /u03/oradata/10GR2/redo01.log
         2 ONLINE  /u01/oradata/10GR2/redo02.log
         2 ONLINE  /u02/oradata/10GR2/redo02.log
         2 ONLINE  /u03/oradata/10GR2/redo02.log
         3 ONLINE  /u01/oradata/10GR2/redo03.log
         3 ONLINE  /u02/oradata/10GR2/redo03.log
         3 ONLINE  /u03/oradata/10GR2/redo03.log
         4 STANDBY /u01/oradata/10GR2/stby04.log
         4 STANDBY /u02/oradata/10GR2/stby04.log
         4 STANDBY /u03/oradata/10GR2/stby04.log
         5 STANDBY /u01/oradata/10GR2/stby05.log
         5 STANDBY /u02/oradata/10GR2/stby05.log
         5 STANDBY /u03/oradata/10GR2/stby05.log
         6 STANDBY /u01/oradata/10GR2/stby06.log
         6 STANDBY /u02/oradata/10GR2/stby06.log
         6 STANDBY /u03/oradata/10GR2/stby06.log
         7 STANDBY /u01/oradata/10GR2/stby07.log
         7 STANDBY /u02/oradata/10GR2/stby07.log
         7 STANDBY /u03/oradata/10GR2/stby07.log

21 rows selected.
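Rather than typing the four ADD STANDBY LOGFILE statements by hand, you can generate them with a small loop. A sketch of my own (group numbers, mount points, and the 52428800-byte size match the primary's online redo logs shown above):

```shell
#!/bin/sh
# Sketch: generate the "alter database add standby logfile" statements
# for groups 4-7 across /u01, /u02, /u03, instead of writing them out.
# Redirect the output into a .sql file and run it from sqlplus.

sql=$(
for grp in 4 5 6 7; do
  printf "alter database add standby logfile group %s\n" "$grp"
  printf "('/u01/oradata/10GR2/stby0%s.log',\n" "$grp"
  printf " '/u02/oradata/10GR2/stby0%s.log',\n" "$grp"
  printf " '/u03/oradata/10GR2/stby0%s.log') size 52428800;\n" "$grp"
done
)
echo "$sql"
```

The same loop is reusable later when the identical groups have to be created on the standby.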
SQL> select group#,thread#,sequence#,archived,status from v$standby_log;

    GROUP#    THREAD#  SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
         4          0          0 YES UNASSIGNED
         5          0          0 YES UNASSIGNED
         6          0          0 YES UNASSIGNED
         7          0          0 YES UNASSIGNED

SQL>

## setup archive log ##
##
## NOTE:
## this is not the right setup for Data Guard, but it's a typical
## setup on our production systems, so I am simulating it here; it
## will be changed in the next step.
##
zephir.10GR2-> mkdir /u02/oradata/10GR2/arch
zephir.10GR2-> sqlplus /nolog

SQL> connect / as sysdba
Connected.
SQL> alter database close;

Database altered.

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/oracle/product/10.2.0
                                                 /db_1/dbs/spfile10GR2.ora

SQL> alter system set log_archive_dest_1='LOCATION=/u02/oradata/10GR2/arch' scope=both;

System altered.

SQL> alter database archivelog;

Database altered.

SQL> shutdown immediate;
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  327155712 bytes
Fixed Size                  2029880 bytes
Variable Size             171968200 bytes
Database Buffers          150994944 bytes
Redo Buffers                2162688 bytes
Database mounted.
Database opened.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/oradata/10GR2/arch
Oldest online log sequence     28
Next log sequence to archive   30
Current log sequence           30

SQL> !ls -l /u02/oradata/10GR2/arch
total 0

SQL> alter system switch logfile;

System altered.

SQL> r
  1* alter system switch logfile

System altered.

SQL> r
  1* alter system switch logfile

System altered.
SQL> !ls -l /u02/oradata/10GR2/arch
total 64826
-rw-r-----   1 oracle   dba      33161216 Oct  5 14:08 1_30_634834860.dbf
-rw-r-----   1 oracle   dba          1024 Oct  5 14:08 1_31_634834860.dbf
-rw-r-----   1 oracle   dba          3584 Oct  5 14:08 1_32_634834860.dbf

## Set Primary Database Initialization Parameters ##
##
## In this case there are two boxes, both in the same cubicle. Since
## DB_UNIQUE_NAME is normally tied to a physical location, which removes
## any confusion when database roles are switched, I am setting
## DB_UNIQUE_NAME to:
##
##   leftcube AND rightcube
##
## At this moment leftcube's role is PRIMARY and rightcube's role is STANDBY.
##
## DB_NAME on both will be 10GR2.
##
## TWO tns-aliases will be set up for each of these names.
##
## Note that the example specifies the LGWR process and asynchronous (ASYNC)
## network transmission to transmit redo data on the LOG_ARCHIVE_DEST_2
## initialization parameter. These are the recommended settings and require
## standby redo log files.
##
alter system set db_unique_name=leftcube scope=spfile;
alter system set log_archive_config='DG_CONFIG=(leftcube,rightcube)' scope=spfile;
alter system set log_archive_dest_1=
  'LOCATION=/u02/oradata/10GR2/arch
   VALID_FOR=(all_logfiles,all_roles)
   DB_UNIQUE_NAME=leftcube' scope=spfile;
alter system set log_archive_dest_2=
  'SERVICE=rightcube LGWR ASYNC
   VALID_FOR=(online_logfiles,primary_role)
   DB_UNIQUE_NAME=rightcube' scope=spfile;
alter system set log_archive_dest_state_1=ENABLE scope=spfile;
alter system set log_archive_dest_state_2=ENABLE scope=spfile;

## NOTE:
## Specifying the initialization parameters shown above sets up the
## primary database to resolve gaps, convert new datafile and log file path
## names from a new primary database, and archive the incoming redo data when
## this database is in the standby role. With the initialization parameters
## for both the primary and standby roles set as described, none of the
## parameters need to change after a role transition.
##
## I am not using DB_FILE_NAME_CONVERT because I always try to keep
## the directory structure exactly the same.
##
alter system set fal_server=rightcube scope=spfile;
alter system set fal_client=leftcube scope=spfile;
alter system set standby_file_management=AUTO scope=spfile;

## this one is just a DUMMY setup to avoid ORA-19527
## (set a dummy log_file_name_convert)
##
alter system set log_file_name_convert='junk','junk' scope=spfile;

shutdown immediate;
startup;

## AFTER you enable these settings, and until you have set up the
## actual standby database, you will be receiving the following
## errors in the alert log:
##
##   ORA-12154: TNS:could not resolve the connect identifier specified
##   LGWR: Error 12154 creating archivelog file 'rightcube'
##   LNS: Failed to archive log 2 thread 1 sequence 35 (12154)
##
## This is due to LGWR not being able to send REDO over to rightcube.
## IT'S OK, since log_archive_dest_2 has a flag of ASYNC, which
## implicitly makes this destination OPTIONAL.

##
## take a hotbackup of primary database
##
## I used a custom backup script called hotbk.sh

zephir.10GR2-> pwd
/u01/app/oracle/admin/10GR2/backup
zephir.10GR2-> grep "Header" hotbk.sh
# $Header$ hotbk.sh 05-OCT-2007 2.1
zephir.10GR2-> grep bkdir= hotbk.sh
bkdir=/copy/db_backup/10GR2/dbfs
zephir.10GR2-> mkdir -p /copy/db_backup/10GR2/dbfs
zephir.10GR2-> ./hotbk.sh
zephir.10GR2-> Fri Oct 5 16:48:03 PDT 2007: getting datafile list ...
Fri Oct 5 16:48:04 PDT 2007: building backup script ...
mv: cannot access /copy/db_backup/10GR2/dbfs/recover.sh
Fri Oct 5 16:48:04 PDT 2007: running backup script (logfile=/tmp/13363.hotbk.sh.log) ...
zephir.10GR2-> ls -l /copy/db_backup/10GR2/dbfs/
total 1831154
-rw-r-----   1 oracle   dba      14270464 Oct  5 16:48 2007_10_05.backup.ctl
-rw-r--r--   1 oracle   dba           212 Oct  5 16:48 recover.sh
-rw-r-----   1 oracle   dba     230694912 Oct  5 16:48 sysaux01.dbf
-rw-r-----   1 oracle   dba     471867392 Oct  5 16:48 system01.dbf
-rw-r-----   1 oracle   dba     214966272 Oct  5 16:48 undotbs01.dbf
-rw-r-----   1 oracle   dba       5251072 Oct  5 16:48 users01.dbf
zephir.10GR2-> cat /copy/db_backup/10GR2/dbfs/recover.sh
cp -p ./sysaux01.dbf /u03/oradata/10GR2/sysaux01.dbf
cp -p ./system01.dbf /u02/oradata/10GR2/system01.dbf
cp -p ./undotbs01.dbf /u03/oradata/10GR2/undotbs01.dbf
cp -p ./users01.dbf /u02/oradata/10GR2/users01.dbf
zephir.10GR2->

## Create a Control File for the Standby Database ##

alter database create standby controlfile as '/copy/db_backup/10GR2/dbfs/stby_control.ctl';
alter system archive log current;

## Prepare Initialization Parameter File for the Standby Database ##

## run this on PRIMARY
create pfile='/copy/db_backup/10GR2/dbfs/init10GR2.ora' from spfile;

## Set initialization parameters on the physical standby database ##

cp -p /copy/db_backup/10GR2/dbfs/init10GR2.ora \
      /copy/db_backup/10GR2/dbfs/init10GR2_stby.ora

vi /copy/db_backup/10GR2/dbfs/init10GR2_stby.ora

--------- MODIFY AS FOLLOWS --------
*.db_unique_name='RIGHTCUBE'
*.fal_client='RIGHTCUBE'
*.fal_server='LEFTCUBE'
*.log_archive_dest_1='LOCATION=/u02/oradata/10GR2/arch
  VALID_FOR=(all_logfiles,all_roles)
  DB_UNIQUE_NAME=rightcube'
*.log_archive_dest_2='SERVICE=leftcube LGWR ASYNC
  VALID_FOR=(online_logfiles,primary_role)
  DB_UNIQUE_NAME=leftcube'

zephir.10GR2-> diff /copy/db_backup/10GR2/dbfs/init10GR2.ora \
> /copy/db_backup/10GR2/dbfs/init10GR2_stby.ora
15c15
< *.db_unique_name='LEFTCUBE'
---
> *.db_unique_name='RIGHTCUBE'
17,18c17,18
< *.fal_client='LEFTCUBE'
< *.fal_server='RIGHTCUBE'
---
> *.fal_client='RIGHTCUBE'
> *.fal_server='LEFTCUBE'
23,25d22
< DB_UNIQUE_NAME=leftcube'
< *.log_archive_dest_2='SERVICE=rightcube LGWR ASYNC
< VALID_FOR=(online_logfiles,primary_role)
26a24,26
> *.log_archive_dest_2='SERVICE=leftcube LGWR ASYNC
> VALID_FOR=(online_logfiles,primary_role)
> DB_UNIQUE_NAME=leftcube'

## Copy Files from the Primary System to the Standby System.
## I used rdist; you can also use FTP or any other method.

## on standby verify primary host is added to .rhosts
atlas.10GR2-> grep zephir $HOME/.rhosts
zephir.domain.com oracle

## rdist all files over to standby host
zephir.10GR2-> rsh -l oracle -n atlas "mkdir -p /copy/db_backup/10GR2/dbfs"
zephir.10GR2-> rdist -c /copy/db_backup/10GR2/dbfs/* oracle@atlas:/copy/db_backup/10GR2/dbfs/
updating host atlas
installing: /copy/db_backup/10GR2/dbfs/2007_10_05.backup.ctl
installing: /copy/db_backup/10GR2/dbfs/init10GR2.ora
installing: /copy/db_backup/10GR2/dbfs/init10GR2_stby.ora
installing: /copy/db_backup/10GR2/dbfs/recover.sh
installing: /copy/db_backup/10GR2/dbfs/stby_control.ctl
installing: /copy/db_backup/10GR2/dbfs/sysaux01.dbf
installing: /copy/db_backup/10GR2/dbfs/system01.dbf
installing: /copy/db_backup/10GR2/dbfs/undotbs01.dbf
installing: /copy/db_backup/10GR2/dbfs/users01.dbf
zephir.10GR2->

## rdist archive logs
zephir.10GR2-> rsh -l oracle -n atlas "mkdir -p /u02/oradata/10GR2/arch"
zephir.10GR2-> rdist -c /u02/oradata/10GR2/arch/* oracle@atlas:/u02/oradata/10GR2/arch/
updating host atlas
installing: /u02/oradata/10GR2/arch/1_30_634834860.dbf
installing: /u02/oradata/10GR2/arch/1_31_634834860.dbf
installing: /u02/oradata/10GR2/arch/1_32_634834860.dbf
installing: /u02/oradata/10GR2/arch/1_33_634834860.dbf
installing: /u02/oradata/10GR2/arch/1_34_634834860.dbf
installing: /u02/oradata/10GR2/arch/1_35_634834860.dbf
installing: /u02/oradata/10GR2/arch/1_36_634834860.dbf

## copy password file
zephir.10GR2-> cd /u01/app/oracle/product/10.2.0/db_1/dbs/
zephir.10GR2-> rdist -c orapw10GR2 oracle@atlas:/u01/app/oracle/product/10.2.0/db_1/dbs/orapw10GR2
updating host atlas
installing: orapw10GR2

## Configure listeners for the primary and
standby databases. ##

## primary (leftcube)
zephir.10GR2-> cat listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = 10GR2.DOMAIN.COM)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = 10GR2)
    )
  )

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = zephir.domain.com)(PORT = 1521))
  )

zephir.10GR2-> cat tnsnames.ora
rightcube =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = atlas.domain.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = 10GR2)
    )
  )

10GR2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = zephir.domain.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = 10GR2.domain.com)
    )
  )

## standby (rightcube)
atlas.10GR2-> cat listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = 10GR2.DOMAIN.COM)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = 10GR2)
    )
  )

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = atlas.domain.com)(PORT = 1521))
  )

atlas.10GR2-> cat tnsnames.ora
leftcube =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = zephir.domain.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = 10GR2)
    )
  )

10GR2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = atlas.domain.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = 10GR2.domain.com)
    )
  )

## startup listener services on both standby and primary
zephir.10GR2-> lsnrctl start
atlas.10GR2-> lsnrctl start

## test TNS aliases
zephir.10GR2-> tnsping rightcube

TNS Ping Utility for Solaris: Version 10.2.0.3.0 - Production on 05-OCT-2007 17:44:43
Copyright (c) 1997, 2006, Oracle.  All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = atlas.domain.com)(PORT = 1521))) (CONNECT_DATA = (SID = 10GR2)))
OK (0 msec)

atlas.10GR2-> tnsping leftcube

TNS Ping Utility for Solaris: Version 10.2.0.3.0 - Production on 05-OCT-2007 17:45:09
Copyright (c) 1997, 2006, Oracle.
All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = zephir.domain.com)(PORT = 1521))) (CONNECT_DATA = (SID = 10GR2)))
OK (0 msec)

##
## Create PHYSICAL standby
##

## Create server parameter file on the standby database. ##

atlas.10GR2-> ls -l /copy/db_backup/10GR2/dbfs/
total 1859050
-rw-r-----   1 oracle   dba      14270464 Oct  5 16:48 2007_10_05.backup.ctl
-rw-r--r--   1 oracle   dba          1391 Oct  5 16:56 init10GR2.ora
-rw-r--r--   1 oracle   dba          1391 Oct  5 17:06 init10GR2_stby.ora
-rw-r--r--   1 oracle   dba           212 Oct  5 16:48 recover.sh
-rw-r-----   1 oracle   dba      14270464 Oct  5 16:55 stby_control.ctl
-rw-r-----   1 oracle   dba     230694912 Oct  5 16:48 sysaux01.dbf
-rw-r-----   1 oracle   dba     471867392 Oct  5 16:48 system01.dbf
-rw-r-----   1 oracle   dba     214966272 Oct  5 16:48 undotbs01.dbf
-rw-r-----   1 oracle   dba       5251072 Oct  5 16:48 users01.dbf
atlas.10GR2-> sqlplus /nolog

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Oct 5 17:46:49 2007
Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

SQL> connect / as sysdba
Connected to an idle instance.
SQL> !ls -l /u01/app/oracle/product/10.2.0/db_1/dbs
total 48
-rw-r-----   1 oracle   dba          8385 Sep 11  1998 init.ora
-rw-r--r--   1 oracle   dba         12920 May  3  2001 initdw.ora
-rw-r-----   1 oracle   dba          1536 Oct  1 15:00 orapw10GR2

SQL> create spfile from pfile='/copy/db_backup/10GR2/dbfs/init10GR2_stby.ora';

File created.
SQL> !ls -l /u01/app/oracle/product/10.2.0/db_1/dbs
total 56
-rw-r-----   1 oracle   dba          8385 Sep 11  1998 init.ora
-rw-r--r--   1 oracle   dba         12920 May  3  2001 initdw.ora
-rw-r-----   1 oracle   dba          1536 Oct  1 15:00 orapw10GR2
-rw-r-----   1 oracle   dba          3584 Oct  5 17:48 spfile10GR2.ora

## copy standby controlfile to its proper location ##

SQL> !grep control /u01/app/oracle/product/10.2.0/db_1/dbs/spfile10GR2.ora
*.control_files='/u01/oradata/10GR2/control01.ctl','/u02/oradata/10GR2/control02.ctl','/u03/oradata/10GR2/control03.ctl'

SQL> !cp -p /copy/db_backup/10GR2/dbfs/stby_control.ctl /u01/oradata/10GR2/control01.ctl
SQL> !cp -p /copy/db_backup/10GR2/dbfs/stby_control.ctl /u02/oradata/10GR2/control02.ctl
SQL> !cp -p /copy/db_backup/10GR2/dbfs/stby_control.ctl /u03/oradata/10GR2/control03.ctl

## copy all datafiles to their locations ##

SQL> !cat /copy/db_backup/10GR2/dbfs/recover.sh
cp -p ./sysaux01.dbf /u03/oradata/10GR2/sysaux01.dbf
cp -p ./system01.dbf /u02/oradata/10GR2/system01.dbf
cp -p ./undotbs01.dbf /u03/oradata/10GR2/undotbs01.dbf
cp -p ./users01.dbf /u02/oradata/10GR2/users01.dbf

SQL> !cd /copy/db_backup/10GR2/dbfs/; sh ./recover.sh

SQL> !find /*/oradata/10GR2 -type f -ls
95290  13944 -rw-r-----  1 oracle  dba   14270464 Oct  5 18:01 /u01/oradata/10GR2/control01.ctl
 5829  32408 -rw-r-----  1 oracle  dba   33161216 Oct  5 14:08 /u02/oradata/10GR2/arch/1_30_634834860.dbf
 5830      1 -rw-r-----  1 oracle  dba       1024 Oct  5 14:08 /u02/oradata/10GR2/arch/1_31_634834860.dbf
 5831      4 -rw-r-----  1 oracle  dba       3584 Oct  5 14:08 /u02/oradata/10GR2/arch/1_32_634834860.dbf
 5832    192 -rw-r-----  1 oracle  dba     187904 Oct  5 14:51 /u02/oradata/10GR2/arch/1_33_634834860.dbf
 5833    144 -rw-r-----  1 oracle  dba     134656 Oct  5 15:01 /u02/oradata/10GR2/arch/1_34_634834860.dbf
 5834   1648 -rw-r-----  1 oracle  dba    1678336 Oct  5 16:48 /u02/oradata/10GR2/arch/1_35_634834860.dbf
 5835     12 -rw-r-----  1 oracle  dba      11776 Oct  5 16:55 /u02/oradata/10GR2/arch/1_36_634834860.dbf
 5836  13944 -rw-r-----  1 oracle  dba   14270464 Oct  5 18:01
/u02/oradata/10GR2/control02.ctl
 5837 461048 -rw-r-----  1 oracle  dba  471867392 Oct  5 16:48 /u02/oradata/10GR2/system01.dbf
 5838   5136 -rw-r-----  1 oracle  dba    5251072 Oct  5 16:48 /u02/oradata/10GR2/users01.dbf
 5905  13944 -rw-r-----  1 oracle  dba   14270464 Oct  5 18:01 /u03/oradata/10GR2/control03.ctl
 5906 225408 -rw-r-----  1 oracle  dba  230694912 Oct  5 16:48 /u03/oradata/10GR2/sysaux01.dbf
 5907 210040 -rw-r-----  1 oracle  dba  214966272 Oct  5 16:48 /u03/oradata/10GR2/undotbs01.dbf

## Start the physical standby database. ##

SQL> startup mount;
ORACLE instance started.

Total System Global Area  327155712 bytes
Fixed Size                  2029880 bytes
Variable Size             192939720 bytes
Database Buffers          130023424 bytes
Redo Buffers                2162688 bytes
Database mounted.
SQL>

## now create standby redo files (pre-req to enable "Real Time Apply") ##
##
## NOTE:
## 1) this should match exactly what was done on the primary database (see
##    "Configure Standby Redo Logs" above)
## 2) this is optional; in fact, the first time around I forgot to create
##    these logs on the standby and the recovery process was still working.
##    It just reports the following error:
##      RFS[1]: Unable to open standby log 5: 313
##    and uses ARCHIVE logs instead.
##
alter database add standby logfile group 4
('/u01/oradata/10GR2/stby04.log',
 '/u02/oradata/10GR2/stby04.log',
 '/u03/oradata/10GR2/stby04.log') size 52428800;

alter database add standby logfile group 5
('/u01/oradata/10GR2/stby05.log',
 '/u02/oradata/10GR2/stby05.log',
 '/u03/oradata/10GR2/stby05.log') size 52428800;

alter database add standby logfile group 6
('/u01/oradata/10GR2/stby06.log',
 '/u02/oradata/10GR2/stby06.log',
 '/u03/oradata/10GR2/stby06.log') size 52428800;

alter database add standby logfile group 7
('/u01/oradata/10GR2/stby07.log',
 '/u02/oradata/10GR2/stby07.log',
 '/u03/oradata/10GR2/stby07.log') size 52428800;

## Start Redo Apply ##
##
## NOTE:
## The statement includes the DISCONNECT FROM SESSION option so that Redo Apply
## runs in a background session.
See Section 6.3, "Applying Redo Data to Physical ## Standby Databases" for more information. ## ## OPTION A is recommended ## ## OPTION A) [enables "Real Time Apply"] ## SQL> alter database recover managed standby database using current logfile disconnect from session; Database altered. ## OPTION B) ["Real Time Apply" is disabled] SQL> alter database recover managed standby database disconnect from session; Database altered. ## on primary verify ARCHIVE seqs ## SQL> alter session set nls_date_format='YYYY-MON-DD HH24:MI:SS'; Session altered. SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#; 2 SEQUENCE# FIRST_TIME NEXT_TIME ---------- -------------------- -------------------- 30 2007-OCT-03 19:35:08 2007-OCT-05 14:08:29 31 2007-OCT-05 14:08:29 2007-OCT-05 14:08:31 32 2007-OCT-05 14:08:31 2007-OCT-05 14:08:37 33 2007-OCT-05 14:08:37 2007-OCT-05 14:51:48 34 2007-OCT-05 14:51:48 2007-OCT-05 15:01:54 35 2007-OCT-05 15:01:54 2007-OCT-05 16:48:24 35 2007-OCT-05 15:01:54 2007-OCT-05 16:48:24 36 2007-OCT-05 16:48:24 2007-OCT-05 16:55:46 36 2007-OCT-05 16:48:24 2007-OCT-05 16:55:46 37 2007-OCT-05 16:55:46 2007-OCT-05 18:09:07 37 2007-OCT-05 16:55:46 2007-OCT-05 18:09:07 SEQUENCE# FIRST_TIME NEXT_TIME ---------- -------------------- -------------------- 38 2007-OCT-05 18:09:07 2007-OCT-05 18:11:42 38 2007-OCT-05 18:09:07 2007-OCT-05 18:11:42 13 rows selected. SQL> ## compare them to standby ## SQL> alter session set nls_date_format='YYYY-MON-DD HH24:MI:SS'; Session altered. 
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;

 SEQUENCE# FIRST_TIME           NEXT_TIME
---------- -------------------- --------------------
        35 2007-OCT-05 15:01:54 2007-OCT-05 16:48:24
        36 2007-OCT-05 16:48:24 2007-OCT-05 16:55:46
        37 2007-OCT-05 16:55:46 2007-OCT-05 18:09:07
        38 2007-OCT-05 18:09:07 2007-OCT-05 18:11:42
SQL>

## Verify new archived redo log files were applied ##
SQL> select sequence#,applied from v$archived_log order by sequence#;

 SEQUENCE# APP
---------- ---
        35 YES
        36 YES
        37 YES
        38 YES
SQL>

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/oradata/10GR2/arch
Oldest online log sequence     34
Next log sequence to archive   0
Current log sequence           39

SQL> !ls -lta /u02/oradata/10GR2/arch
total 71926
-rw-r-----   1 oracle   dba      52429312 Oct  5 18:17 1_39_634834860.dbf
-rw-r-----   1 oracle   dba       1379328 Oct  5 18:11 1_37_634834860.dbf
drwxr-xr-x   2 oracle   dba           512 Oct  5 18:11 .
-rw-r-----   1 oracle   dba         11776 Oct  5 18:11 1_36_634834860.dbf
-rw-r-----   1 oracle   dba       1678336 Oct  5 18:11 1_35_634834860.dbf
-rw-r-----   1 oracle   dba        113152 Oct  5 18:11 1_38_634834860.dbf
drwxr-xr-x   3 oracle   dba           512 Oct  5 18:03 ..
-rw-r-----   1 oracle   dba        134656 Oct  5 15:01 1_34_634834860.dbf
-rw-r-----   1 oracle   dba        187904 Oct  5 14:51 1_33_634834860.dbf
-rw-r-----   1 oracle   dba          3584 Oct  5 14:08 1_32_634834860.dbf
-rw-r-----   1 oracle   dba      33161216 Oct  5 14:08 1_30_634834860.dbf
-rw-r-----   1 oracle   dba          1024 Oct  5 14:08 1_31_634834860.dbf
SQL>

## Here's how Real Time Apply works ##

## Test 1) - switch a log on PRIMARY ##
here's what you'll see in the primary alert log:
LNS1 started with pid=18, OS id=13637
Fri Oct  5 18:48:38 2007
Thread 1 advanced to log sequence 41
  Current log# 2 seq# 41 mem# 0: /u01/oradata/10GR2/redo02.log
  Current log# 2 seq# 41 mem# 1: /u02/oradata/10GR2/redo02.log
  Current log# 2 seq# 41 mem# 2: /u03/oradata/10GR2/redo02.log
Fri Oct  5 18:48:39 2007
ARC0: Standby redo logfile selected for thread 1 sequence 40 for destination LOG_ARCHIVE_DEST_2
Fri Oct  5 18:48:39 2007
LNS: Standby redo logfile selected for thread 1 sequence 41 for destination LOG_ARCHIVE_DEST_2

here's what's reported in the STANDBY alert log:
Fri Oct  5 18:48:38 2007
RFS[1]: Successfully opened standby log 4: '/u01/oradata/10GR2/stby04.log'
Fri Oct  5 18:48:39 2007
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 3909
RFS[2]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[2]: Successfully opened standby log 4: '/u01/oradata/10GR2/stby04.log'
Fri Oct  5 18:48:40 2007
Media Recovery Log /u02/oradata/10GR2/arch/1_40_634834860.dbf
Media Recovery Waiting for thread 1 sequence 41 (in transit)
Fri Oct  5 18:48:40 2007
Recovery of Online Redo Log: Thread 1 Group 4 Seq 41 Reading mem 0
  Mem# 0: /u01/oradata/10GR2/stby04.log
  Mem# 1: /u02/oradata/10GR2/stby04.log
  Mem# 2: /u03/oradata/10GR2/stby04.log

## Test 2) - WITHOUT SWITCHING A LOG ##
## On STANDBY run the following SQL:
SQL> select process, status,sequence#,block#,blocks, delay_mins from v$managed_standby;

PROCESS   STATUS        SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CLOSING              40          1         16          0
ARCH      CLOSING              41          1       1611          0
MRP0      APPLYING_LOG         42         42     102400          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                 42         45          1          0
SQL>

## On PRIMARY create a test table
SQL> create table test(t number);
Table created.
SQL> insert into test values (1);
1 row created.
SQL> insert into test values (2);
1 row created.
SQL> insert into test values (3);
1 row created.
SQL> commit;
Commit complete.
SQL>

*** DO NOT SWITCH A LOG ***

## On STANDBY run the same SQL (note BLOCK# increasing):
SQL> r
  1* select process, status,sequence#,block#,blocks, delay_mins from v$managed_standby

PROCESS   STATUS        SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CLOSING              40          1         16          0
ARCH      CLOSING              41          1       1611          0
MRP0      APPLYING_LOG         42         76     102400          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                 42         78          1          0
SQL>

## Cancel out of real time apply and open it in read only mode
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database open read only;
Database altered.
SQL> select * from test;
         T
----------
         1
         2
         3

## Place the standby back in managed recovery mode
## This will take the standby directly from read only mode
## and place it in managed recovery mode
##
SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.

source: http://kb.dbatoolz.com/ex/pwpkg.dp?p_key=11&p_what=detail&p_sr_id=2693&p_sc_id=19&p_debug=&p_search=&p_suser=&p_sdate=#top
Calculate the size of the RMAN backup
I was told by the backup team that Oracle is consuming a lot of tape resources, so they asked me to find out the backup size for each database.
It is difficult to get the size of RMAN backups directly, but the following SQL gives the approximate size (in TB) of the RMAN backup sets on tape. This is valid for 9i; it may also work in 10g, but I have not checked or tested that.
select sum(blocks*block_size)/1024/1024/1024/1024 from rc_backup_datafile ;
select sum(blocks*block_size)/1024/1024/1024/1024 from RC_BACKUP_REDOLOG ;
The above SQL statements have to be run in SQL*Plus against the RMAN catalog database.
Hope this helps in determining the cost of tapes.
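As a sanity check on the arithmetic those catalog queries perform, here is the same bytes-to-TB conversion done in the shell. The blocks and block_size figures below are made-up examples, not values from any real catalog:

```shell
# Made-up totals standing in for sum(blocks) and block_size from
# rc_backup_datafile -- substitute the figures your catalog returns.
blocks=268435456
block_size=8192
# Same arithmetic as the catalog query: bytes -> KB -> MB -> GB -> TB
awk -v b="$blocks" -v s="$block_size" \
    'BEGIN { printf "%.2f TB\n", (b * s) / 1024 / 1024 / 1024 / 1024 }'
# prints: 2.00 TB
```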
use of EXEC FND_CONC_CLONE.SETUP_CLEAN
Cloning of Oracle Application
This is one of the most common tasks given to an Apps DBA, and from my past experience, DBAs spend more than 70% of their time on cloning.
Cloning Oracle Apps 11i is done through a tool from Oracle known as RAPID CLONE. It is very easy to use, but it does not cover everything....
When you clone the environment, references to the source environment remain in the FND_NODES table in the database. After the clone, if you visit the OAM page you will see both source and target nodes, and some concurrent managers will still reference the source nodes.
If you want to get rid of the source nodes, do the following. Follow the Rapid Clone document (Metalink Note 230672.1):
- Prepare the Source System ( Pre Clone Steps )
- Copy the Source System to the Target System
- Configure the Target System
o Configure the target system database server
o After the database is created, connect as APPS and execute the following:
EXEC FND_CONC_CLONE.SETUP_CLEAN;
Commit;
o The above command cleans the FND_NODES table and does more than that, i.e. it also
clears the concurrent manager tables etc.
o Run the autoconfig.sh again on the database tier
Note:
Be careful not to run this on production: it cleans out all printer definitions and previously executed request logs and outputs. It is okay to run only on a freshly created environment. Thereafter, follow the doc Metalink Note 230672.1.
--------------------------------
APPS - how to cleanup FND_NODES table to clear corrupted setup

## before cleanup ##
13:42:56 APPS@XTPR:host01> r
  1  select node_name, node_mode, support_cp,
  2         support_web, support_admin, support_forms
  3* from FND_NODES

NODE_NAME                      N S S S S
------------------------------ - - - - -
HOST01                         O Y N Y N
MIDTIER01                      O N Y N Y
APPSERVER21                      N Y N Y   <-- bogus host
APPSERVER22                      N Y N Y   <-- bogus host
APPSERVER23                      N Y N Y   <-- bogus host
AUTHENTICATION                 O N N N N
6 rows selected.

## Cleanup ##
13:42:58 APPS@XTPR:host01> EXEC FND_CONC_CLONE.SETUP_CLEAN;
PL/SQL procedure successfully completed.
13:44:12 APPS@XTPR:host01> commit;
Commit complete.

## After cleanup ##
13:44:47 APPS@XTPR:host01> r
  1  select node_name, node_mode, support_cp,
  2         support_web, support_admin, support_forms
  3* from FND_NODES
no rows selected

## Run AutoConfig ##
## BE tier
$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/XTPR_host01.xml appspass=
## MT tier
$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/XTPR_midtier01.xml appspass=

## After AutoConfig runs ##
13:58:15 APPS@XTPR:host01> r
  1  select node_name, node_mode, support_cp,
  2         support_web, support_admin, support_forms
  3* from FND_NODES

NODE_NAME                      N S S S S
------------------------------ - - - - -
MIDTIER01                      O N Y N Y
AUTHENTICATION                 O N N N N
HOST01                         O Y N Y N
3 rows selected.
Source :--http://karmadba.blogspot.com/2007/04/cloning-of-oracle-application.html
http://kb.dbatoolz.com/tp/1179.apps_-_how_to_cleanup_fnd_nodes_table_to_clear_corrupted_setup.html
Saturday, May 14, 2011
Using RMAN Incremental Backups to Refresh a Standby Database
You can create an incremental backup of the target database containing changes to the database since the creation of the duplicate or the previous synchronization. You can apply the incremental backup to the standby database.
Note: This technique cannot be used to update a duplicate database.
RMAN enables you to synchronize a standby database with a primary database by creating an incremental backup at the source database that contains all changed blocks since the duplicate was created or last refreshed. You then apply the incremental backup to the standby database, which updates it with all changes.
This capability facilitates the temporary conversion of a physical standby database into a reporting database, as described in Oracle Data Guard Concepts and Administration. In particular, it makes it possible to reverse the effects of converting the standby into a reporting database. After the standby database has been used for reporting or testing, Flashback Database can reverse any changes resulting from that work, returning the database to its contents when it was still a standby. An incremental backup created with BACKUP INCREMENTAL ... FROM SCN can then be used to refresh the standby with changes made at the primary since the conversion, and managed recovery can resume. The effect is to return the reporting database to its role as standby.
For more details on this scenario, see Oracle Data Guard Concepts and Administration.
Using BACKUP INCREMENTAL... FROM SCN
The incremental backup is created at the source database by means of the BACKUP INCREMENTAL FROM SCN=n form of the BACKUP command. For example:
BACKUP DEVICE TYPE SBT INCREMENTAL FROM SCN 750923 DATABASE;
BACKUP INCREMENTAL FROM SCN 750923 DATABASE;
BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE FORMAT '/tmp/incr_standby_%U';
RMAN uses the selected SCN as the basis for this incremental backup. For all files being backed up, RMAN includes in the incremental backup all data blocks that were changed at SCNs greater than or equal to the FROM SCN.
Refreshing a Standby Database With INCREMENTAL FROM SCN Backups: Example
This example shows the steps required to update a standby database using incremental backups. The assumption is that you have already activated the standby, performed your tests or other operations at the standby, and then used Flashback Database to undo the effects of those changes. The task here is to refresh the standby with the latest changes to the primary, so that it can resume its role as a standby database.
Step 1: Create the Incremental Backup
Create the needed incremental backup at the source database, using BACKUP with the INCREMENTAL FROM SCN clause.
Assume that the incremental backup to be used in updating the duplicate database is to be created on disk, with the filenames for backup pieces determined by the format /tmp/incr_for_standby/bkup_%U.
RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE FORMAT '/tmp/incr_for_standby/bkup_%U';
Step 2: Make the Incremental Backup Accessible at the Standby Database
Make the backup pieces containing the incremental backup available in some directory accessible on the system containing the standby database. For this example, assume that the destination directory is called /standbydisk1/incrback/, and ensure that it contains nothing besides the incremental backups from Step 1.
Step 3: Catalog the Incremental Backup Files at the Standby Database
Use the RMAN CATALOG command to register the backup sets in the RMAN repository at the duplicate. With an RMAN client connected to the standby database and the recovery catalog (if you use one at the standby), mount the standby and run the following command:
RMAN> CATALOG START WITH '/standbydisk1/incrback/';
The backups are now available for use in recovery of the standby.
Step 4: Apply the Incremental Backup to the Standby Database
Use the RMAN RECOVER command with the NOREDO option to apply the incremental backup to the standby database. All changed blocks captured in the incremental backup are updated at the standby database, bringing it up to date with the primary database. With an RMAN client connected to the standby database, run the following command:
RMAN> RECOVER DATABASE NOREDO;
You can now resume managed recovery at the standby. Any redo logs required at the standby with changes since those contained in the incremental are automatically requested from the primary and applied.
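The four steps above can be collected into one dry-run script. This sketch only prints the RMAN commands (with the example SCN and paths from the steps above) so you can review them before feeding them to rman yourself:

```shell
#!/bin/sh
# Dry-run sketch: prints the RMAN commands for the standby refresh.
# SCN and directories are the example values from the steps above.
SCN=750983
BKUP_DIR=/standbydisk1/incrback

cat <<EOF
-- on the PRIMARY:
BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN ${SCN} DATABASE
  FORMAT '/tmp/incr_for_standby/bkup_%U';
-- on the STANDBY (mounted):
CATALOG START WITH '${BKUP_DIR}/';
RECOVER DATABASE NOREDO;
EOF
```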
Friday, May 13, 2011
Cloning and Refreshing an Oracle Database
In this post I would like to explain cloning and refreshing of a database, in my own simple terms, by answering the following questions.
Terms used in this post:
Source System - the system to be cloned - Production
Target System - the newly created (or cloned) system – Non Production
Production Database – PROD
Test Database – TEST
Development Database - DEV
What is a Database Clone?
* A database clone is an activity/procedure performed by every DBA on a regular basis, or when there is a requirement or request from the different departments, i.e. test/development teams.
* Cloning is nothing but creating a copy of the production system in a test or development environment, i.e. having an exact image of the production database in the test area.
* Cloning is a procedure for preparing and creating test or development servers with a copy of the Oracle production database, for testing upgrades or migrating an existing system to new hardware.
* A cloning process includes a copy of Oracle Home (Directories and Binaries) backup and Database (Database related files) backup to prepare the instance on another server.
* Though it is possible to clone a database on the same server, Oracle does not recommend cloning a database on the same server where the production database is running.
What is a Database Refresh?
* A database refresh is also referred to as a database clone; however, for a refresh we do not clone the Oracle Home, only the database.
* Refreshing a database means applying the changes or updates of the production database to a database that has already been cloned. For example, say you cloned a database a month back and are now asked to refresh it: you take a backup of the production database and prepare the cloned instance again on the test server. This is nothing but refreshing.
* Refreshing of a particular table, group of tables, schema, or tablespace will be done using traditional export/import, transportable Tablespaces, or data pump methods.
* When an Oracle patch is applied on Production System, or in doubt, you have to prepare and clone the database again with the copy of Oracle Home (Directories and Binaries) Backup and Database (Database related files) Backup to prepare the instance.
* The difference between Cloning and Refreshing is that cloning process includes Oracle Home and database Clone; where as Refreshing process only includes database clone.
* The words clone and refresh are often used interchangeably for the sake of convenience.
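The schema-level refresh mentioned above can be sketched as a Data Pump round trip (10g+). The schema name scott, the directory object dp_dir, and the file names are invented for illustration; by default the script only echoes the commands (clear RUN to really execute them):

```shell
#!/bin/sh
# Hedged sketch of a schema refresh with Data Pump.
# RUN=echo makes this a dry run; set RUN= (empty) to actually execute.
RUN=${RUN:-echo}

# on PROD: export the schema (expdp prompts for the password)
$RUN expdp system DIRECTORY=dp_dir DUMPFILE=scott_prod.dmp \
     LOGFILE=scott_exp.log SCHEMAS=scott

# on TEST: import, replacing tables that already exist
$RUN impdp system DIRECTORY=dp_dir DUMPFILE=scott_prod.dmp \
     LOGFILE=scott_imp.log SCHEMAS=scott TABLE_EXISTS_ACTION=REPLACE
```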
When and why we Clone a Database?
* Generally, the production (PROD) database is cloned for various reasons and needs, i.e. something to be tested or developed that will later be moved to production.
* It is normal and quite common that whenever a change or update is to be performed and its impact on production (PROD) is unknown, it must first be applied and tested on a *NON*-production database (TEST or DEV). After the users confirm the change is successful, it is moved to production.
* A cloned test instance (TEST) for the testing team/environment is used exclusively for testing changes, or issues that would be severe on production. When there is an issue in the database, Oracle Support provides a fix, and this fix needs to be applied on the test/development databases first.
* A Cloned development instance (DEV) for development team/environment is used for developing the new changes and then deploying the same on Production.
* A Cloned patch instance is used for patching to know the impact and the time required to apply the same on Production.
How to clone an Oracle Database and different ways of cloning.
There are many methods available for cloning a database, each with its own pros, cons, and significance. The following are the main methods.
Using Cold (Offline) Backup:
This is an easy and simple method of cloning a database. It requires your production database (PROD) to be shut down gracefully; you then back up the database files, i.e. data files, control files, and redo log files, using operating system commands such as cp or copy. This is not possible where your PROD database runs 24/7 and must be continuously available to users.
For the syntax and the series of steps to perform a clone using a cold backup, refer to the following URLs:
References:
http://www.samoratech.com/TopicOfInterest/swCloneDB.htm
http://www.pgts.com.au/pgtsj/pgtsj0211b.html
http://www.jlcomp.demon.co.uk/faq/clone_db.html
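The OS-level copy step of a cold backup can be sketched as below. The source and destination paths are examples, and the database must already be shut down cleanly (SHUTDOWN IMMEDIATE) before this runs:

```shell
#!/bin/sh
# Cold-backup copy sketch (run only after SHUTDOWN IMMEDIATE).
# Example paths -- point SRC/DST at your own datafile and backup areas.
SRC=${SRC:-/u02/oradata/PROD}
DST=${DST:-/backup/PROD_cold}

mkdir -p "$DST"
# cp -p preserves timestamps/modes, matching the cp -p usage elsewhere here
for f in "$SRC"/*.dbf "$SRC"/*.ctl "$SRC"/*.log; do
    if [ -f "$f" ]; then
        cp -p "$f" "$DST"/
    fi
done
```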
Using Hot (Online) Backup:
In this method, the database is backed up online, i.e. without shutting it down.
For this, your production database must be in archive log mode. For the syntax and the series of steps to perform a clone using a hot backup, refer to the following URLs:
Reference:
http://www.quest-pipelines.com/newsletter/cloning.htm
http://www.oralnx.com/index.php/2007/03/22/cloning-an-oracle-database/
http://www.shutdownabort.com/quickguides/clone_hot.php
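The per-tablespace sequence behind a hot backup can be sketched like this. USERS is an example tablespace name, and the script only prints the statements for you to run in SQL*Plus; the datafile copy in the middle is done with OS commands while the tablespace is in backup mode:

```shell
#!/bin/sh
# Prints the hot-backup statement sequence for one tablespace.
# TS is an example name; the database must be in archive log mode.
TS=${TS:-USERS}
cat <<EOF
ALTER TABLESPACE ${TS} BEGIN BACKUP;
-- copy this tablespace's datafiles with OS commands while in backup mode --
ALTER TABLESPACE ${TS} END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
EOF
```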
Using RMAN Commands:
Cloning can also be performed using RMAN backups and RMAN commands, and it is an easy method as well. The RMAN DUPLICATE command is used to perform the clone. Until Oracle 9i, the source and target systems had to run the same OS, i.e. it was not possible to clone across platforms (as a workaround, export/import could be used to clone a database across platforms). Starting with Oracle 10g, RMAN's capabilities have improved immensely: cross-platform cloning/duplication of a database can be done using the RMAN CONVERT commands.
For the syntax and the series of steps to perform a clone using RMAN commands, refer to the following URLs:
References:
Creating and Updating Duplicate Databases with RMAN
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1008564
Cross-Platform Transportable Database: RMAN CONVERT DATABASE
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/dbxptrn002.htm#CHDCFFDI
Creating a Duplicate Database on a Local or Remote Host
http://sabdarsyed.blogspot.com/2007/06/clone-of-database-on-same-host-ie.html
http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb005.htm
Pre & Post Cloning Steps/Changes:
* Do *NOT* give the cloned database the same name as the production database.
* It is *NOT* mandatory for the initialization parameter values of the cloned instance to match the production instance.
* It is *NOT* mandatory to run the cloned instance in archive log mode, because it unnecessarily generates archive log files that consume disk space. If the cloned instance crashes and needs to be recovered, it can simply be cloned again from production.
* After the clone, change the system users passwords i.e. SYS & SYSTEM, and for any critical users passwords.
* Disable the jobs which are not required to be run in the cloned instance.
* Change any application tables in the cloned database that still refer to the production database, i.e. server IP, port details, printer details, etc.
Other Useful Links:
OTN Forums on Cloning:
http://forums.oracle.com/forums/search.jspa?threadID=&q=clone+a+database&objID=f61&dateRange=all&userID=&numResults=30&rankBy=10001
Ask Tom Forums:
DB cloning -- what is it and why
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:575623107841
Creating test environment from production box
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:526422273445
Metalink Notes:
Note:245262.1 - Subject: Create a Duplicate Database with NOCATALOG on Same Node
Note:458450.1 - Subject: Steps to Manually Clone a Database
Note:388431.1 - Subject: Creating a Duplicate Database on a New Host.
Conclusion: these are only my views and outlines about cloning and need *NOT* be the same as others' or Oracle's. You can still follow the cloning procedure documents you already have in place. I strongly suggest making changes first in test/development before doing them on production, and also using the Oracle documentation and Metalink support for any kind of help.
Hope that this post helps you in understanding the Cloning and Refreshing activity of a database.
------------------------------------------------------------------------
Source:-http://sabdarsyed.blogspot.com/2008/08/cloning-and-refreshing-oracle-database.html
ORA-30036: unable to extend segment by 4 in undo tablespace 'UNDOTBS1'
One of our databases crashed recently because the disks were taken offline by mistake – fortunately, it was a development one!
When I brought it back online and people started to use it, one of the developers got the following error message:
ERROR at line 1:
ORA-30036: unable to extend segment by 4 in undo tablespace 'UNDOTBS1'
I had a quick look and it appeared that there were queries which must have been running at the time of the crash which required recovery. It was not possible to use the UNDO space because it was being used for recovery.
To check if the UNDO is being used you can use the following:
Select (used_ublk*8192)/1024/1024 MB
From v$transaction;
The 8192 is the block size of the database, so change it if your database has a different block size.
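The same used_ublk-to-MB arithmetic can be checked in the shell; the figures below are invented examples, not real v$transaction output:

```shell
# used_ublk would come from v$transaction; block_size would come from
# "select value from v$parameter where name = 'db_block_size'".
used_ublk=5120
block_size=8192
awk -v u="$used_ublk" -v b="$block_size" \
    'BEGIN { printf "%.1f MB\n", (u * b) / 1024 / 1024 }'
# prints: 40.0 MB
```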
In Oracle 10g you can look in the v$session_longops view to see how long a transaction has to roll back.
I also noticed that there were lots of log switches in the alert log and relatively high disk queues on the SAN disk which held the data for the database in question.
In the udump folder there are SMON trace files which report:
*** 2009-04-23 16:52:04.966
Parallel Transaction recovery caught exception 30319
Parallel Transaction recovery caught error 30319
Also in udump there are P001, P002, ..., P007 trace files which report:
*** SERVICE NAME:(SYS$BACKGROUND) 2009-04-23 16:56:05.726
*** SESSION ID:(1088.37) 2009-04-23 16:56:05.726
Parallel Transaction recovery server caught exception 10388
Next, it's a nice idea to work out which objects are being recovered.
This query shows the parallel query processes and slaves which are running:
select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(s.program,length(s.program)-4,4) ) ) "Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "Slave Set",
to_char(s.sid) "SID",
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
px.req_degree "Requested DOP",
px.degree "Actual DOP"
from
gv$px_session px,
gv$session s
where
px.sid=s.sid (+)
and
px.serial#=s.serial#
order by 5 , 1 desc ;
Take the SID from the above query - there may be multiple ones - and put it into the query below:
select *
from gv$active_session_history
where session_id in (1098);
If you look at the CURRENT_OBJ# column, it will show you the object_id for the object which is being recovered.
It was still not possible to run the insert statement that the developer wanted to run so I investigated further.
If you also look in the gv$active_session_history view, you will see the column EVENT. This was showing as "wait for stopper event to be increased".
I found Metalink note: 464246.1 "Database Hang Due to parallel Transaction Recovery". If you can't see that, here is a summary:
=> Sometimes Parallel Rollback of Large Transaction may become very slow
=> Usually occurs when killing a long running process or aborting the database (this is what happened in my case)
=> In fast-start parallel rollback, the background process SMON acts as a coordinator and rolls back a set of transactions in parallel using multiple server processes.
=> When the amount of uncommitted data is above a certain threshold, and the database has to roll back, parallel recovery kicks in and uses SMON as the recovery co-ordinator.
=> Parallel slaves can contend with each other when trying to roll back in parallel, so the process may be faster in serial. To do so, use:
ALTER SYSTEM fast_start_parallel_rollback = false;
If you execute this command while it is trying to recover, it seems that further logons are not permitted. I received the following error when attempting to log on:
ORA-03113: end-of-file on communication channel
However, after some time it seemed to right itself. If you are interested in monitoring your UNDO usage, or monitoring your TEMP space usage, I would recommend you read my other articles on the subject.
How do I find out what is in my Oracle buffer Cache?
First of all, you can try running this query:
SELECT SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR, ESTD_PHYSICAL_READS
FROM V$DB_CACHE_ADVICE
WHERE NAME = 'KEEP'
AND BLOCK_SIZE = (SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_block_size')
AND ADVICE_STATUS = 'ON';
This may not return any rows, depending upon what other initialisation parameters you have or have not set. You may find that there are rows in the V$DB_CACHE_ADVICE view, but with the ADVICE_STATUS column set to OFF. This means that the parameter DB_CACHE_ADVICE is set to either OFF or READY.
You can check by running this query:
SELECT *
FROM v$parameter
WHERE name LIKE 'db_cache%';
This parameter can be set to ON | OFF | READY:
* You can go from READY to ON, at the cost of CPU and memory overhead.
* Going from OFF to ON may cause an error if done dynamically while the instance is up.
* Start the database with READY to allocate the memory but not incur the CPU overhead.
* Switch to ON for a set period of time (e.g. one day), then switch it off and check the objects in the pools.
Below is an example of what you might like to know about what’s in your buffer cache.
Find the number of buffers in the instance:
SELECT value "total buffers"
FROM v$parameter
WHERE name = 'db_block_buffers';
Find out the object ID of the object you are interested in (table or index):
SELECT data_object_id, object_type
FROM user_objects
WHERE object_name = '<SEGMENT_NAME>';
Find the number of buffers in the buffer cache for segment_name:
SELECT count(*) buffers
FROM x$bh
WHERE obj = <data_object_id>;
Calculate the ratio of buffers to total buffers, to obtain the percentage of the cache currently used by segment_name:
%cache used by segment_name = buffers (from the x$bh query) / total buffers (from the first query)
Note: This technique works only for a single segment; for a partitioned object, the query must be run for each partition.
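The final ratio can be sanity-checked in the shell with made-up counts (1200 buffers held by the segment out of 60000 total buffers):

```shell
# buffers = count(*) from x$bh for the segment;
# total_buffers = the db_block_buffers value. Both figures are examples.
buffers=1200
total_buffers=60000
awk -v b="$buffers" -v t="$total_buffers" \
    'BEGIN { printf "%.1f%%\n", 100 * b / t }'
# prints: 2.0%
```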