Tuesday, November 1, 2011

Notifications Not Being Received After Autoconfig Is Run

Solution
To implement the solution, execute the following steps:

1. Log into the Oracle Applications Manager.
2. Click on Site Map.
3. Click on Notification Mailer under Workflow.
4. Click on the Edit button for the active mailer.
5. Click Next until you reach Step 3, which lists the Inbound EMail Account and Outbound EMail Account.
6. Confirm that the IMAP and SMTP server names are correct and change them as needed (a quick connectivity check is sketched after this list).
7. Click Next and then click Finish.
8. Stop and re-start the workflow mailer services within OAM and test.
9. If the issue is resolved, please migrate the solution as appropriate to other environments.
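
As a quick sanity check for step 6, you can confirm that the concurrent tier can actually reach the mail servers before restarting the mailer. A minimal sketch, assuming hypothetical server names smtp.example.com and imap.example.com and the default ports:

# Outbound (SMTP) check - a healthy server answers with a "220" banner
telnet smtp.example.com 25

# Inbound (IMAP) check - a healthy server answers with an "* OK" greeting
telnet imap.example.com 143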

Permanent solutions:


Autoconfig Overwriting The SMTP Server Name For The Java Mailer


[applmgr@test]$ cat UAT_test.xml | grep s_smtphost

<hostname oa_var="s_smtphost">test</hostname>

[applmgr@test]$ cat UAT_test.xml | grep s_smtpdomainname

<domain oa_var="s_smtpdomainname">doyen.com</domain>

1. Update the following parameters in the context file on the Concurrent Manager application tier node (under oa_smtp_server in OAM) to the correct outbound server name assigned to the Workflow Mailer.

Example

SMTP Server Host (s_smtphost)
<hostname oa_var="s_smtphost">mailtest</hostname>

Email Server Domain (s_smtpdomainname)
<domain oa_var="s_smtpdomainname">doyen.com</domain>

2. The next time AutoConfig runs on the node, it will retain the correct setting.
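
A quick way to verify the change, sketched below; $CONTEXT_FILE should point at the context file of the concurrent manager node, and the script location varies by release:

# Confirm the outbound server and domain now stored in the context file
grep -E "s_smtphost|s_smtpdomainname" $CONTEXT_FILE

# Re-run AutoConfig on that node so the new values are applied
# (adautocfg.sh lives under $COMMON_TOP/admin/scripts/<CONTEXT_NAME> on 11i,
#  or under $ADMIN_SCRIPTS_HOME on R12)
./adautocfg.sh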

--------------------------------------------------

Thursday, October 27, 2011

AFPASSWD Utility

AFPASSWD is an enhanced version of FNDCPASS, and includes the following features:
• AFPASSWD only prompts for passwords required for the current operation,
allowing separation of duties between applications administrators and database
administrators. This also improves interoperability with Oracle Database Vault. In
contrast, the FNDCPASS utility currently requires specification of the APPS and the
SYSTEM usernames and corresponding passwords, preventing separation of duties
between applications administrators and database administrators.
• When changing a password with AFPASSWD, the user is prompted to enter the
new password twice to confirm.
• AFPASSWD can be run from the database tier as well as the application tier. In
contrast, FNDCPASS can only be run from the application tier.
FNDCPASS will continue to be shipped with Oracle E-Business Suite, and customers
can migrate to the AFPASSWD utility at their discretion.
Important: The FNDCPASS utility must still be used to migrate the
password hashing scheme, as described in My Oracle Support
Document 457166.1, FNDCPASS Utility New Feature: Enhance Security
With Non-Reversible Hash Password.
AFPASSWD Usage
The AFPASSWD command is used with the relevant command line options to perform
the desired action.
AFPASSWD [-c [<APPSUSER>]@<TWO_TASK>] [-f <FNDUSER>]
AFPASSWD [-c [<APPSUSER>]@<TWO_TASK>] [-o <DBUSER>]
AFPASSWD [-c [<APPSUSER>]@<TWO_TASK>] [-a]
AFPASSWD [-c [<APPSUSER>]@<TWO_TASK>] [-l <ORACLE_USER> [<TRUE>] | [<FALSE>]]
AFPASSWD [-c [<APPSUSER>]@<TWO_TASK>] [-L [<TRUE>] | [<FALSE>]]
AFPASSWD [-c [<APPSUSER>]@<TWO_TASK>] [-s <APPLSYS>]
These options have the following functions:
• -c {APPSUSER}[@{TWO_TASK}] - Specifies the connection string to use, the
Applications user, and/or the value of TWO_TASK. This option can be used in
combination with others. If it is not specified, default values from the environment
will be used.
Note: The password will be prompted for, and is not to be
provided in the connection string.
• -f {FNDUSER} - Changes the password for an Applications user. A username that
contains spaces must be enclosed in double quotation marks; for example, "JOHN
SMITH".
• -o {DBUSER} - Changes the password for an Oracle E-Business Suite database user.
Note: This only applies to users listed in the
FND_ORACLE_USERID table, not database users in general.
• -a - Changes all Oracle (ALLORACLE) passwords (except the passwords of APPS,
APPLSYS, APPLSYSPUB) to the same password, in the same way as the
ALLORACLE mode does in FNDCPASS.
• -l - Locks individual {ORACLE_USER} users (except required schemas). {TRUE} =
LOCK, {FALSE} = UNLOCK.
• -L - Locks all Oracle (ALLORACLE) users (except required schemas). {TRUE} =
LOCK, {FALSE} = UNLOCK.
• -s {APPLSYS} - Changes the password for the APPLSYS user and the APPS user.
This requires the execution of AutoConfig on all tiers.
• -h - Displays help.
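
For illustration, a couple of hedged examples of the syntax above (PROD is a hypothetical TWO_TASK; AFPASSWD prompts for the APPS password and then for the new password twice):

# Change the password of the application user SYSADMIN
AFPASSWD -c apps@PROD -f SYSADMIN

# Change the APPLSYS/APPS password; run AutoConfig on all tiers afterwards
AFPASSWD -c apps@PROD -s APPLSYS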

--------------
Source:-http://download.oracle.com/docs/cd/B53825_04/current/acrobat/121sacg.pdf
Chapter 11
--------------

Saturday, October 22, 2011

Distributed AD

1)Distributed AD offers improved scalability, performance, and resource utilization by allowing workers of the same AD session to be started on additional middle tier systems.

2)AD has always utilized a Parallel Jobs System, where multiple AD workers start and are assigned jobs. Information for the Jobs System is stored in the database, and workers receive their assignments by monitoring certain tables in the database.

3) Distributed AD allows workers to be started on remote machines, where they can utilize the resources of those machines when completing their assigned jobs.


Prerequisites
1) Shared APPL_TOP
2) AD.H


Working
On one of your shared APPL_TOP nodes, start your AutoPatch or AD Administration session with the following command line options:

localworkers=<local workers> workers=<total workers>
For example, to run an AutoPatch session with 3 workers on the local node and 5 workers on a remote node:

adpatch localworkers=3 workers=8
On one or more of the additional shared APPL_TOP nodes, start an AD Controller session with the following command line option:

adctrl distributed=y

After providing basic information, AD Controller will prompt for the worker number(s) to be started. For example, enter "4 5 6 7 8" or "4-8" to start workers 4 through 8. If AD Controller is started prior to AutoPatch or AD Administration starting the Jobs System, AD Controller will ask if you want to wait. Choosing yes will cause AD Controller to wait until the Jobs system is started, at which point it will start the appropriate worker processes. If an AutoPatch session has already been started, AD Controller will wait automatically.



Example of a two-node session with 30 workers (20 on the local node, 10 on the remote node):

Node 1) adpatch localworkers=20 workers=30

Node 2) adctrl distributed=y and enter the worker range 21-30
-----------------
Source:-http://appsoracle.blogspot.com/2011/07/distributed-ad.html
-----------------

Oracle DBA: DBA_JOBS facts

Query to see the currently running jobs via dba_jobs_running

select /*+ rule */ * from dba_jobs_running;

or

select /*+ ordered */ * from dba_jobs_running;

Common reasons why jobs don't execute automatically as scheduled
1) select instance_name,logins from v$instance;
If the logins=RESTRICTED, then:
alter system disable restricted session;

2) Check the JOB_QUEUE_PROCESSES
show parameter JOB_QUEUE_PROCESSES
It should be greater than 0

3) Is the job BROKEN?
select job, broken from dba_jobs where job=<job_number>;
If broken, then check the alert log and trace files to diagnose the issue (a combined fix sketch for points 1-3 follows this list).

4) _SYSTEM_TRIG_ENABLED=FALSE
Check if _system_trig_enabled=false:
col parameter format a25
col value format a15
select a.ksppinm parameter, b.ksppstvl value from x$ksppi a, x$ksppcv b
where a.indx=b.indx and ksppinm='_system_trig_enabled';
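
A combined sqlplus sketch of the fixes for points 1-3, assuming job number 42 and a job_queue_processes value of 10 purely for illustration:

sqlplus -s "/ as sysdba" <<'EOF'
-- 1) Lift restricted session so job queue sessions can log in
alter system disable restricted session;

-- 2) Make sure job queue processes are available (10 is an arbitrary example value)
alter system set job_queue_processes = 10;

-- 3) Un-break the job (42 is a placeholder job number) and commit
exec dbms_job.broken(42, FALSE);
commit;
EOF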
-----------------
Source:-http://appsoracle.blogspot.com/2011/04/oracle-dba-dbajobs-facts.html
-----------------

Oracle apps: Output post Processor(OPP)

The Output Post Processor (OPP) is an enhancement to Concurrent Processing and is designed to support XML Publisher as a post-processing action for concurrent requests. If a request is submitted with an XML Publisher template specified as a layout for the concurrent request output, then after the concurrent manager finishes running the concurrent program, it will contact the OPP to apply the XML Publisher template and create the final output.

The Output Post Processor makes use of the Oracle Streams Advanced Queuing (AQ) database feature. Every OPP service instance monitors the FND_CP_GSM_OPP_AQ queue for new messages, and this queue has been created with no value specified for primary_instance. This implies that the queue monitor scheduling and propagation is done in any available instance. In other words, ANY OPP service instance may pick up an incoming message, independent of the node on which the concurrent request ran.


Maximum Memory Usage Per Process:

The maximum amount of memory, or maximum Java heap size, a single OPP process can use is by default set to 512MB. This value is seeded by the loader data file $FND_TOP/patch/115/import/US/afoppsrv.ldt, which specifies that the DEVELOPER_PARAMETERS is "J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m". At the time of writing (Sep 2007), there's no user interface available which allows this value to be altered (Bug 4247067). The alternative is to alter the value using SQL*Plus.

Determine the current maximum Java heap size:

SELECT service_id, service_handle, developer_parameters
FROM fnd_cp_services
WHERE service_id = (SELECT manager_type
FROM fnd_concurrent_queues
WHERE concurrent_queue_name = 'FNDCPOPP');

SERVICE_ID SERVICE_HANDLE DEVELOPER_PARAMETERS
---------- -------------- --------------------------------------------------------
1091 FNDOPP J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

Increase the maximum Java heap size for the OPP to 1024MB (1GB):

UPDATE fnd_cp_services
SET developer_parameters =
'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx1024m'
WHERE service_id = (SELECT manager_type
FROM fnd_concurrent_queues
WHERE concurrent_queue_name = 'FNDCPOPP');

The OPP queue can be recreated using the $FND_TOP/patch/115/sql/afopp002.sql script as the 'APPLSYS' user. On running the script you will be prompted for a username and password.

There are two new profile options that can be used to control the timeouts:

Profile Option : Concurrent:OPP Response Timeout
Internal Name : CONC_PP_RESPONSE_TIMEOUT
Description : Specifies the amount of time a manager waits for OPP to respond to its request for post processing.

Profile Option : Concurrent:OPP Process Timeout
Internal Name : CONC_PP_PROCESS_TIMEOUT
Description : Specifies the amount of time the manager waits for the OPP to actually process the request.
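
To see what these two profiles are currently set to, a small sketch (APPS_CONN is a placeholder connect string such as apps/<password>; fnd_profile.value returns the value visible at site level when run this way):

sqlplus -s "$APPS_CONN" <<'EOF'
select fnd_profile.value('CONC_PP_RESPONSE_TIMEOUT') response_timeout,
       fnd_profile.value('CONC_PP_PROCESS_TIMEOUT')  process_timeout
from dual;
EOF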
----------------------
Source:-http://appsoracle.blogspot.com/2011/05/oracle-apps-output-post-processoropp.html
----------------------

Oracle apps:FNDFS working

1) The user selects ‘Request Output’, ‘Request Log’, or ‘Manager Log’

2) The file name and nodename are selected from the database.
Reports:
SELECT outfile_name, outfile_node_name FROM fnd_concurrent_requests
WHERE request_id = :id;
Logfiles:
SELECT logfile_name, logfile_node_name FROM fnd_concurrent_requests
WHERE request_id = :id;
Manager logs:
SELECT logfile_name, node_name FROM fnd_concurrent_processes
WHERE concurrent_process_id = :process_id;


4. The client takes the nodename that was returned and adds FNDFS_ to the
beginning of it. For example, if xprod_ser1 was returned as the node name, the
client would construct the string: FNDFS_xprod_ser1

5. The client takes this string and attempts to use it as a SQL*Net connect
descriptor. SQL*Net will attempt to resolve this descriptor into a host and SID
using a local tnsnames.ora file or Oracle Names.
6. If successfully resolved, a connection is made to the given host. The listener on
this host receives the connection request, and resolves the SID using its
listener.ora file. If it finds a PROGRAM parameter listed for this SID, it will
launch this program. (which should be $FND_TOP/bin/FNDFS)

7. The FNDFS executable runs. The client sends RPC commands to it to return the
requested file.

Common error
An error occurred while attempting to establish an application file server
connection with the node <node_name>. There may be a network connection
problem, or the listener on node <node_name> may not be running.

-This can indicate a multitude of problems, and unfortunately, it does not display any more helpful messages.

-This most commonly indicates a problem with the local tnsnames.ora file or the
listener.ora file on the server. Check that the customer added an FNDFS entry to the
tnsnames.ora file.

-Check that the hostname and port are correct. Make sure that the entry is named
FNDFS_hostname. Also, if the customer has edited the file himself, he may have
inadvertently corrupted the file. SQL*Net is very picky about the syntax of this file.

-An extra space or carriage return could cause RRA to fail. The only supported
method of editing this file is to use Network Manager. Have the customer backup the
old file, then create a new one with Network Manager if you suspect that this file
may be bad.

-Once you are sure the tnsnames.ora file is correct, you should be able to use TNSPing to ping the listener. (Be sure to ping the FNDFS alias) It should return an OK result.

-Errors here may indicate that the listener is not properly set up.

-Make sure that you are using the exact name of the server (check this with: uname -n). If you have the wrong name in the tnsnames file entry, the tnsping will work, but
RRA will not. For example, suppose the server’s real name is XPROD_SER1, and you
create a tnsnames entry called: FNDFS_DBSERV because you have a DNS alias for
this server. You can ping the server normally, because DNS will resolve this name for
you. You can run TNSPing with FNDFS_DBSERV and it will resolve this connect
string and it will ping the server and return an OK result. This would lead you to
believe that everything is OK on the client side. However, RRA still does not work.
This is because RRA is using the real server name, and it is trying to resolve the
connect string FNDFS_XPROD_SER1, and this entry does not exist. A client-side trace
would discover this error.
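
A quick client-side sanity check based on the points above (the FNDFS_<hostname> naming and paths follow the standard convention, but verify against your own configuration):

# The server's real name, exactly as RRA will use it
uname -n

# Confirm an FNDFS entry exists for that exact name in the tnsnames.ora the tools tier uses
grep -i "FNDFS_$(uname -n)" $TNS_ADMIN/tnsnames.ora

# The alias should resolve and the applications listener should answer with OK
tnsping FNDFS_$(uname -n)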
----------
Source:-http://appsoracle.blogspot.com/2011/05/oracle-appsfndfs-working.html
----------

Staged APPL_TOP in R12

A staged Applications system represents an exact copy of your Production system, including all APPL_TOPs as well as a copy of the Production database. Patches are applied to this staged system while your Production system remains up. When all patches have been successfully applied to the test system, the reduced downtime for the Production system can begin. The staged APPL_TOP is used both to run the database update against the Production database and to synchronize the Production APPL_TOP.

Pre steps
1.Compare Topologies

A staged Applications system must duplicate the topology of your Production system. For example, each physical APPL_TOP of your Production system must exist in your staged system.
2.Verify Snapshot

Prior to copying the Production Applications system, ensure that the snapshot of the system is up-to-date. While the current snapshot should automatically be managed by AutoPatch, verification can be done by running the Maintain Current Snapshot task in AD Administration. This should be done for each APPL_TOP in your Applications system. Having the snapshot of your Production Applications system current will ensure proper patch prerequisite checking when patches are applied.
3.Create the Staged System

Create a clone of your Production database and of each APPL_TOP of your Production Applications system. Production and Staged should have the same APPL_TOP names, as this will ensure the patching history for your staged APPL_TOP will be correct in the Production system. Historical information is stored in the context of an APPL_TOP, and when patch history data is imported into Production it needs to have the same APPL_TOP names. The database of your staged APPL_TOP should have a different ORACLE_SID to avoid accidental connections to Production. Passwords, ports and any process or service related parameters may be changed as well to further reduce risks.

You must have different Applications system names for staged and Production; AutoPatch will correct the historical information. Your staged APPL_TOP name should be the same as your Production APPL_TOP name for the database driver to update the patch history information correctly.
Apply Patches to the Staged System
The staged system is patched the same way as any Oracle Applications system using AutoPatch to apply the patch drivers.

Update the Production System

1.Update the Production Database
Once patching the staged environment is complete, you are ready to update your Production system. Ensure you are able to connect to your Production database from your staged systems. You may need to create a tnsnames file in your staged system with entries for Production. You can use the s_ifile AutoConfig variable for this purpose. Refer to Appendix C of OracleMetaLink Note 387859.1, Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12.

Once your environment is set correctly, and all services on the Production system have been disabled, run AutoPatch for the database portion of the patch you wish to apply, by specifying options=nocopyportion,nogenerateportion on the AutoPatch command line. Ensure the database name prompted by AutoPatch is correct.

If you applied multiple patches to the staged system, you will need to run the database update for each patch you applied to stage, in the same order. To reduce downtime further in such a case, you should consider merging patches prior to staging.
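
For illustration, a hedged sketch of such a database-only run (the patch number, paths and worker count are placeholders):

# Apply only the database portion of patch 1234567 against Production
adpatch patchtop=/stage/patches/1234567 \
        driver=u1234567.drv \
        logfile=u1234567_dbportion.log \
        workers=8 \
        options=nocopyportion,nogenerateportion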
2.Update the Production APPL_TOP
The Production APPL_TOP needs to be synchronized with the staged APPL_TOP. To minimize downtime, you can complete this while the Production database is being updated. There are many ways to accomplish this task, ranging from a simple copy command to utilities such as rdist. Some storage providers offer hardware solutions as well. If your topology includes multiple APPL_TOPs, each APPL_TOP needs to be copied over to the Production system. If you share a single APPL_TOP, you only need to synchronize one system. The $COMMON_TOP directory, which on some systems may reside outside the APPL_TOP, also needs to be updated for each APPL_TOP in the Applications system.

Certain configuration files, log directories and environment scripts are specific to an APPL_TOP. These files and directories must be excluded when copying. (if using the rdist utility, you can use a distfile to exclude them)

Post steps
1) Synchronizing Patch Histories
The staged applications system strategy fragments your patch history. At this point in the process, the copy and generate portions of the patch history for patches applied using a staged applications system are stored in your staged database, and the database portion of the patch history for these patches is stored in both your staged database and in your production database. It is important that the patch history of your production system be complete. To accomplish this, you must now load the copy and generate portions of all patches applied using a staged applications system into your production database.
Use the adphmigr.pl utility located in the bin directory to export the patch history for the copy and generate portions of patches applied using a staged applications system from your staged database, then use AutoPatch to import the extracted patch history data into your production database. For each patch applied using a staged applications system, you must export patch history for each APPL_TOP in the staged applications system and import it for the corresponding APPL_TOP in the production applications system. Both exporting patch history data from the staged database and importing patch history data into the production database can be done while users are on the production system. To ensure correct results, you should finish consolidating patch history for the production system before applying additional patches to it or using patch-related Oracle Applications Manager features on it.

a) Export Patch History
Use the adphmigr.pl utility. adphmigr.pl is located in the bin directory under AD_TOP. Enter adphmigr.pl -help to see all valid options for adphmigr.pl. We recommend that you export patch history for each APPL_TOP separately, as you will need to import it for each APPL_TOP separately.
Ensure you specify nodatabaseportion=Y on the adphmigr.pl command line. This ensures that the patch history data for the database portion of patches applied against the staged applications system is not exported. This data should not be imported into the production database, because the database portion of each patch has already been applied directly to the production database.
Export example:
$ perl $AD_TOP/bin/adphmigr.pl userid=apps/apps \
startdate='2007/10/10 00:00:00' enddate='2007/10/14 00:00:00' \
appsystemname=stage appltopname=tafnw1 nodatabaseportion=Y
This command will generate two data files for each run of AutoPatch on the staged APPL_TOP, one for java updates and one for all other patch actions. Check adphmigr.log to ensure the data files represent the patch runs you wish to export, and that the start and end times specified did not include any unwanted AutoPatch runs.
b) Import Patch History
You should have extracted a separate set of data files for each APPL_TOP in your staged applications system. For each APPL_TOP in your production applications system, copy the data files extracted for the corresponding staged APPL_TOP to the $APPL_TOP/admin/ directory. AutoPatch will automatically upload these data files the next time it runs in this APPL_TOP. To load the data files immediately, start AutoPatch in interactive mode, answer the prompts until prompted for the name of the patch driver file, then exit AutoPatch by entering "abort" at the patch driver file prompt.

----------------
Source:-http://appsoracle.blogspot.com/2011/07/staged-appltop-in-r12.html
----------------

Wednesday, October 19, 2011

Huge Events*.log files in $APPLCSF/$APPLLOG?

The other day I received an automated email alert that a partition was running low on available space. It's the partition that contains $APPLCSF/$APPLLOG (i.e. $COMMON_TOP/admin/log/<CONTEXT_NAME>). This directory stores concurrent manager logs, concurrent request logs, etc.

I noticed one file, Events01.log, was 7GB in size. I should add that this environment is pretty static, so there aren't a lot of changes and it doesn't get restarted often.

The issue is described in Note:601375.1, which says the culprit is the Fulfillment Server having a high level of debugging enabled. The fix is to change the parameter s_jto_debug_string = OFF in your context file. (Don’t edit this manually, use OAM.) However, to enable this change you’ll need to execute autoconfig.

If you're not able to run autoconfig at this time (I prefer to bundle these types of changes with patches so that users will do a quick sanity check of the environment), you can manually edit the file $COMMON_TOP/admin/scripts/<CONTEXT_NAME>/jtffmctl.sh and remove the references to:

-Dengine.LogLevel=9

-Ddebug=full

Once that change is made you need to stop apache (adapcctl.sh), the fulfillment server (jtffmctl.sh) and restart them. You can now remove that huge Events log file.

Note: If you remove an active file while a process is still pointing to it, the space will not be released. I’ve been asked by people many times why they removed a file but did not see the available space increase.
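
If you need the space back while a process still has the file open, truncating the file in place releases the blocks immediately; a minimal sketch, assuming the file sits under $APPLCSF/$APPLLOG:

# Optional: see which processes still hold the file open
lsof | grep Events01.log

# Truncate in place so the space is actually released
cat /dev/null > $APPLCSF/$APPLLOG/Events01.log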

----------------
Source:-http://newappsdba.blogspot.com/search/label/E-Business%20Suite
----------------

Stuck Concurrent Requests

Every now and then users call us with a concurrent request that is running longer than normal and/or blocking other batch jobs because of incompatibilities. Upon investigation we'll see that there is no database session for the request. Since there isn't a database session, users may be unable to cancel the request themselves. The cancel button will be grayed out. The solution is to clean the fnd_concurrent_requests table.

Background: Concurrent programs may be incompatible with other programs which means they cannot execute at the same time. If the stuck concurrent request has such rules defined, then programs it is incompatible with will not run until the problem is solved.

There are 2 ways to do this, update the table manually or run the Oracle provided cmclean.sql script. Depending on the method you choose, you'll need the request id. This can be provided by the user or you can look at the running requests via Oracle Applications Manager (OAM). To navigate there click on Site Map on the top left hand corner of the page. Under Concurrent requests click on Running.




Once you're in the Running requests screen you'll see which programs are currently being executed. With the help of your users, find the request id in question and make note of it. The recommended approach from Oracle will be:

1. Kill the database sessions for the requests. (In our case there weren't any.)
2. Shutdown the concurrent managers.
3. Run the cmclean.sql script Note: 134007.1
4. Start your concurrent managers.

The other method is to update the bad rows in the fnd_concurrent_requests table manually.

update fnd_concurrent_requests set STATUS_CODE='D', phase_code='C' where request_id=<request_id>;

STATUS_CODE of D means Cancelled and a phase_code of C is completed.

For a list of status, phase_codes and what they mean, refer to Note: 297909.1.

The benefit to updating the fnd_concurrent_requests table manually is that no downtime is required. If you are using cmclean.sql remember to shutdown the concurrent managers first!
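
Before and after the manual update it is worth confirming what the request row actually shows; a small sketch, with 1234567 as a placeholder request id and APPS_CONN as a placeholder connect string:

sqlplus -s "$APPS_CONN" <<'EOF'
-- Check the phase and status of the stuck request
select request_id, phase_code, status_code
from   fnd_concurrent_requests
where  request_id = 1234567;
EOF

If you do update the row manually, remember to commit the change.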

--------------------------
Source:--http://newappsdba.blogspot.com/search/label/EBS%20Concurrent%20Processing
--------------------------

Tuesday, October 18, 2011

Correcting invalid spfile parameters




Consider the following situation. An alteration is made to the spfile which results in the instance being unable to start. Because the instance will not start, the mistake can not be corrected:



SQL> show parameter sga_max_size
sga_max_size big integer 537989896

SQL> alter system set sga_max_size=99999999999 scope=spfile;
System altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORA-27102: out of memory

SQL> startup nomount
ORA-27102: out of memory

SQL> alter system set sga_max_size=537989896 scope=spfile;
alter system set sga_max_size=537989896 scope=spfile
*
ERROR at line 1:
ORA-01034: ORACLE not available
How annoying! The usual way to fix this problem (apart from being more careful in the first place) is to:
•create pfile from spfile
•edit the pfile
•startup nomount
•create spfile from pfile
•shutdown
•startup
•remove the pfile
There is another way however - and one that I prefer. It relies on the fact that a database can have a spfile and a pfile at the same time, and furthermore parameters specified in the pfile override those in the spfile! The spfile location must be specified in the pfile for this to work. Check out the following trace:

SQL> !
oracle@bloo$ vi $ORACLE_HOME/dbs/init${ORACLE_SID}.ora

spfile=/u02/oradata/scr9/spfilescr9.ora
sga_max_size=537989896
:wq
oracle@bloo$ exit

SQL> startup
ORACLE instance started.

Total System Global Area 554767132 bytes
Fixed Size 451356 bytes
Variable Size 402653184 bytes
Database Buffers 150994944 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.

SQL> alter system set sga_max_size=537989896 scope=spfile;
System altered.



SQL> !rm $ORACLE_HOME/dbs/init${ORACLE_SID}.ora
SQL>





------------------------------------------------------------------------------------------------



Saturday, June 11, 2011

Error while running ./adautocfg.sh after applying RUP6 patch on 11.5.10.2


[sriappl@linux5 sridb_linux5]$ ./adautocfg.sh

Enter the APPS user password :

Invalid range "a-Z" in transliteration operator at/u05/shashi/sridbora/iAS/Apache/perl/lib/5.00503/vars.pm line 17.

Compilation failed in require at

/u05/shashi/sridbora/iAS/Apache/perl/lib/5.00503/AutoLoader.pm line 3.

BEGIN failed--compilation aborted at

/u05/shashi/sridbora/iAS/Apache/perl/lib/5.00503/AutoLoader.pm line 3.

Compilation failed in require at /opt/ActivePerl-5.8/lib/POSIX.pm line 11.

BEGIN failed--compilation aborted at /opt/ActivePerl-5.8/lib/POSIX.pm line 11.

Compilation failed in require at /u05/shashi/sridbappl/ad/11.5.0/bin/adconfig.pl line 88.

BEGIN failed--compilation aborted at /u05/shashi/sridbappl/ad/11.5.0/bin/adconfig.pl line 88.

[sriappl@linux5 sridb_linux5]$


Solution:

Open $IAS_ORACLE_HOME/Apache/perl/lib/5.00503/vars.pm and change (line 17):

if ($sym =~ tr/A-Za-Z_0-9//c) {

change to

if ($sym =~ tr/A-Za-z_0-9//c) {

(The change is the single character Z->z)
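
If you prefer to script the one-character fix (after taking a backup copy), a sed sketch along these lines should do it on Linux:

cd $IAS_ORACLE_HOME/Apache/perl/lib/5.00503
cp vars.pm vars.pm.bak
# Replace the invalid range A-Za-Z with A-Za-z on line 17
sed -i 's/A-Za-Z_0-9/A-Za-z_0-9/' vars.pm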

Tuesday, June 7, 2011

Schedule tasks on Linux using crontab



If you've got a website that's heavy on your web server, you might want to run some processes like generating thumbnails or enriching data in the background. This way it won't interfere with the user interface. Linux has a great program for this called cron. It allows tasks to be automatically run in the background at regular intervals. You could also use it to automatically create backups, synchronize files, schedule updates, and much more. Welcome to the wonderful world of crontab.

Crontab

The crontab (cron derives from chronos, Greek for time; tab stands for table) command, found in Unix and Unix-like operating systems, is used to schedule commands to be executed periodically. To see what crontabs are currently running on your system, you can open a terminal and run:

sudo crontab -l

To edit the list of cronjobs you can run:

sudo crontab -e

This will open the default editor (could be vi or pico; if you want, you can change the default editor) to let us manipulate the crontab. If you save and exit the editor, all your cronjobs are saved into the crontab. Cronjobs are written in the following format:

* * * * * /bin/execute/this/script.sh

Scheduling explained

As you can see there are 5 stars. The stars represent different date parts in the following order:

  1. minute (from 0 to 59)
  2. hour (from 0 to 23)
  3. day of month (from 1 to 31)
  4. month (from 1 to 12)
  5. day of week (from 0 to 6) (0=Sunday)

Execute every minute

If you leave the star, or asterisk, it means every. Maybe that's a bit unclear. Let's use the previous example again:

* * * * * /bin/execute/this/script.sh

They are all still asterisks! So this means execute /bin/execute/this/script.sh:

  1. every minute
  2. of every hour
  3. of every day of the month
  4. of every month
  5. and every day in the week.

In short: This script is being executed every minute. Without exception.

Execute every Friday 1AM

So if we want to schedule the script to run at 1AM every Friday, we would need the following cronjob:

0 1 * * 5 /bin/execute/this/script.sh

Get it? The script is now being executed when the system clock hits:

  1. minute: 0
  2. of hour: 1
  3. of day of month: * (every day of month)
  4. of month: * (every month)
  5. and weekday: 5 (=Friday)

Execute on workdays 1AM

So if we want to schedule the script to run Monday through Friday at 1 AM, we would need the following cronjob:

0 1 * * 1-5 /bin/execute/this/script.sh

Get it? The script is now being executed when the system clock hits:

  1. minute: 0
  2. of hour: 1
  3. of day of month: * (every day of month)
  4. of month: * (every month)
  5. and weekday: 1-5 (=Monday til Friday)

Execute 10 minutes past every hour on the 1st of every month

Here's another one, just for practicing

10 * 1 * * /bin/execute/this/script.sh

Fair enough, it takes some getting used to, but it offers great flexibility.

Neat scheduling tricks

What if you'd want to run something every 10 minutes? Well you could do this:

0,10,20,30,40,50 * * * * /bin/execute/this/script.sh

But crontab allows you to do this as well:

*/10 * * * * /bin/execute/this/script.sh

Which will do exactly the same. Can you do the math? ;)

Special words

Instead of the five date/time fields, you can also put in one of the following keywords:

@reboot     Run once, at startup
@yearly     Run once a year      "0 0 1 1 *"
@annually   (same as @yearly)
@monthly    Run once a month     "0 0 1 * *"
@weekly     Run once a week      "0 0 * * 0"
@daily      Run once a day       "0 0 * * *"
@midnight   (same as @daily)
@hourly     Run once an hour     "0 * * * *"

The keyword replaces all five fields, so this would be valid:

@daily /bin/execute/this/script.sh

Storing the crontab output

By default cron saves the output of /bin/execute/this/script.sh in the user's mailbox (root in this case). But it's prettier if the output is saved in a separate logfile. Here's how:

*/10 * * * * /bin/execute/this/script.sh >> /var/log/script_output.log 2>&1

Explained

Linux can report on different levels. There's standard output (STDOUT) and standard error (STDERR). STDOUT is marked 1, STDERR is marked 2. Redirections are processed from left to right, so we first point STDOUT at a file. Where > would overwrite the file, >> appends to it, and in this case we'd like to append:

>> /var/log/script_output.log

Then the following tells Linux to send STDERR to wherever STDOUT now points, creating one datastream for messages & errors (it has to come after the file redirection, otherwise the errors would still end up in the mailbox):

2>&1

Mailing the crontab output

By default cron saves the output in the user's mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:

MAILTO="yourname@yourdomain.com"

Mailing the crontab output of just one cronjob

If you'd rather receive only one cronjob's output in your mail, make sure this package is installed:

aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/script.sh 2>&1 | mail -s "Cronjob output" yourname@yourdomain.com

Trashing the crontab output

Now that's easy:

*/10 * * * * /bin/execute/this/script.sh > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.

-------------------------------------------------------------------------------------------------------------------

http://kevin.vanzonneveld.net/techblog/article/schedule_tasks_on_linux_using_crontab/

Wednesday, May 18, 2011

APPS - how to apply patch HOT (without enabling maintenance mode)

Use the following option when calling adpatch:

    options=hotpatch

EXAMPLE:

    adpatch defaultsfile=/u02/app/applmgr/11.5/admin/SIDNAME/def.txt \
        logfile=all_5162862.log \
        patchtop=/copy/APPS_PATCHES/2006/002/5162862 \
        driver=u5162862.drv \
        workers=4 \
        interactive=yes \
        options=novalidate,hotpatch

In the above example we will:
   apply patch# 5162862
   using the u5162862.drv driver
   using 4 workers
   in interactive mode
   without validating pre-reqs
   without enabling maintenance mode (hotpatch)

NOTE: you can safely drop defaultsfile from the call if you don't have one created.

clone ORACLE_HOME 10gR2 to another host

Here's a step-by-step process to clone a 10gR2 ORACLE_HOME to another identical server. Since the host names are different, it's advisable to follow this procedure any time you copy an ORACLE_HOME to another server, even if the directory structure is exactly the same.

## First detach ORACLE_HOME from the central inventory ##

atlas.10GR2-> pwd
/u01/app/oracle/oraInventory/ContentsXML

atlas.10GR2-> grep "HOME NAME" *
inventory.xml:
inventory.xml:
atlas.10GR2->

## detach the OraDb10g_home1 home (get the home directory from the above grep results) ##

atlas.10GR2-> $ORACLE_HOME/oui/bin/runInstaller -detachhome ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /var/opt/oracle/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'DetachHome' was successful.
atlas.10GR2->

## verify using grep
## NOTE: now it says REMOVED="T"

atlas.10GR2-> grep "HOME NAME" *
inventory.xml:
inventory.xml:
atlas.10GR2->

## re-register ORACLE_HOME ##

## MAKE SURE ALL PROCS ARE DOWN - it does a relink ##
atlas.10GR2-> ps -ef | grep oracle
  oracle  4824  4822   0   Oct 08 pts/2       0:00 -ksh
  oracle  4822  4819   0   Oct 08 ?           0:00 /usr/lib/ssh/sshd
  oracle  5352  4864   0 16:57:33 pts/3       0:00 ps -ef
  oracle  4862  4859   0   Oct 08 ?           0:01 /usr/lib/ssh/sshd
  oracle  4864  4862   0   Oct 08 pts/3       0:00 -ksh
  oracle  5353  4864   0 16:57:33 pts/3       0:00 -ksh

## run the clone script ##
cd $ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME="/u01/app/oracle/product/10.2.0/db_1" ORACLE_HOME_NAME="OraDb10g_home1"

Here's a sample output of the above command:

atlas.10GR2-> perl clone.pl ORACLE_HOME="/u01/app/oracle/product/10.2.0/db_1" ORACLE_HOME_NAME="OraDb10g_home1"
./runInstaller -silent -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1" "ORACLE_HOME_NAME=OraDb10g_home1" -noConfig -nowait
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2007-10-09_04-59-54PM. Please wait ...
Oracle Universal Installer, Version 10.2.0.3.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.

You can find a log of this install session at:
/u01/app/oracle/oraInventory/logs/cloneActions2007-10-09_04-59-54PM.log
.................................................. 100% Done.

Installation in progress (Tue Oct 09 17:00:23 PDT 2007)
................................................................................. 81% Done.
Install successful

Linking in progress (Tue Oct 09 17:00:45 PDT 2007)
Link successful

Setup in progress (Tue Oct 09 17:02:33 PDT 2007)
Setup successful

End of install phases. (Tue Oct 09 17:02:43 PDT 2007)
WARNING:
The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root script to run
/u01/app/oracle/product/10.2.0/db_1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts

The cloning of OraDb10g_home1 was successful.
Please check '/u01/app/oracle/oraInventory/logs/cloneActions2007-10-09_04-59-54PM.log' for more details.
atlas.10GR2->

## after the above clone completes, run ROOT.SH as ROOT ##

atlas.10GR2-> su
Password:
#
# /u01/app/oracle/product/10.2.0/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
#

## Verify the inventory can be read by OPatch ##

atlas.10GR2-> /u01/app/oracle/product/10.2.0/db_1/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.3.2

Oracle interim Patch Installer version 10.2.0.3.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/10.2.0/db_1
Central Inventory : /u01/app/oracle/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 10.2.0.3.2
OUI version       : 10.2.0.3.0
OUI location      : /u01/app/oracle/product/10.2.0/db_1/oui
Log file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/opatch2007-10-09_17-09-19PM.log

Lsinventory Output file location : /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/opatch/lsinv/lsinventory2007-10-09_17-09-19PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (2):

Oracle Database 10g                                                  10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 2                            10.2.0.3.0
There are 2 products installed in this Oracle Home.
Interim patches (22):

Patch  6121268 : applied on Wed Sep 26 20:02:34 PDT 2007 (created 11 Jun 2007, 05:26:11 hrs PST8PDT; bugs fixed: 6121268)
Patch  6121267 : applied on Wed Sep 26 20:02:27 PDT 2007 (created 12 Jun 2007, 02:58:15 hrs PST8PDT; bugs fixed: 6121267)
Patch  6121266 : applied on Wed Sep 26 20:02:21 PDT 2007 (created 12 Jun 2007, 02:57:08 hrs PST8PDT; bugs fixed: 6121266)
Patch  6121264 : applied on Wed Sep 26 20:02:15 PDT 2007 (created 12 Jun 2007, 02:55:35 hrs PST8PDT; bugs fixed: 6121264)
Patch  6121263 : applied on Wed Sep 26 20:02:08 PDT 2007 (created 12 Jun 2007, 02:54:23 hrs PST8PDT; bugs fixed: 6121263)
Patch  6121261 : applied on Wed Sep 26 20:02:02 PDT 2007 (created 11 Jun 2007, 01:57:45 hrs PST8PDT; bugs fixed: 6121261)
Patch  6121260 : applied on Wed Sep 26 20:01:56 PDT 2007 (created 11 Jun 2007, 00:43:23 hrs PST8PDT; bugs fixed: 6121260)
Patch  6121258 : applied on Wed Sep 26 20:01:45 PDT 2007 (created 12 Jun 2007, 08:36:08 hrs PST8PDT; bugs fixed: 6121258)
Patch  6121257 : applied on Wed Sep 26 20:01:39 PDT 2007 (created 12 Jun 2007, 01:54:53 hrs PST8PDT; bugs fixed: 6121257)
Patch  6121250 : applied on Wed Sep 26 20:01:32 PDT 2007 (created 11 Jun 2007, 21:47:03 hrs PST8PDT; bugs fixed: 6121250)
Patch  6121249 : applied on Wed Sep 26 20:01:17 PDT 2007 (created 11 Jun 2007, 21:46:24 hrs PST8PDT; bugs fixed: 6121249)
Patch  6121248 : applied on Wed Sep 26 20:01:11 PDT 2007 (created 11 Jun 2007, 08:52:17 hrs PST8PDT; bugs fixed: 6121248)
Patch  6121247 : applied on Wed Sep 26 20:01:00 PDT 2007 (created 10 Jun 2007, 23:58:00 hrs PST8PDT; bugs fixed: 6121247)
Patch  6121246 : applied on Wed Sep 26 20:00:55 PDT 2007 (created 12 Jun 2007, 08:53:32 hrs PST8PDT; bugs fixed: 6121246)
Patch  6121245 : applied on Wed Sep 26 19:59:57 PDT 2007 (created 10 Jun 2007, 23:05:51 hrs PST8PDT; bugs fixed: 6121245)
Patch  6121244 : applied on Wed Sep 26 19:59:42 PDT 2007 (created 10 Jun 2007, 23:01:41 hrs PST8PDT; bugs fixed: 6121244)
Patch  6121243 : applied on Wed Sep 26 19:59:27 PDT 2007 (created 10 Jun 2007, 22:16:48 hrs PST8PDT; bugs fixed: 6121243)
Patch  6121242 : applied on Wed Sep 26 19:59:21 PDT 2007 (created 11 Jun 2007, 21:43:02 hrs PST8PDT; bugs fixed: 6121242)
Patch  6121183 : applied on Wed Sep 26 19:59:13 PDT 2007 (created 12 Jun 2007, 01:52:07 hrs PST8PDT; bugs fixed: 6121183)
Patch  6079591 : applied on Wed Sep 26 19:59:07 PDT 2007 (created 14 Jun 2007, 03:24:12 hrs PST8PDT; bugs fixed: 6079591)
Patch  5556081 : applied on Wed Sep 26 19:27:35 PDT 2007 (created 9 Nov 2006, 22:20:50 hrs PST8PDT; bugs fixed: 5556081)
Patch  5557962 : applied on Wed Sep 26 19:27:24 PDT 2007 (created 9 Nov 2006, 23:23:06 hrs PST8PDT; bugs fixed: 4269423, 5557962, 5528974)

--------------------------------------------------------------------------------

OPatch succeeded.
atlas.10GR2->
source:-http://kb.dbatoolz.com/ex/pwpkg.dp?p_key=11&p_what=detail&p_sr_id=2694&p_sc_id=19&p_debug=&p_search=&p_suser=&p_sdate=#top