Tuesday, December 24, 2013

In-place upgrade OBIEE 11.1.1.6.0 to 11.1.1.7.0

Requirement:

Upgrade OBIEE 11.1.1.6.0 to 11.1.1.7.0 in Linux


High level Steps:

The upgrade consists of three high-level steps:

1. OBIEE S/W upgrade
2. Upgrade repository schemas
3. Upgrade domain


Steps in detail:

1. Pre-Upgrade Steps

Stop all the BI components (AdminServer, Managed Server, opmn components and Nodemanager)

Take a backup of MW Home and Inventory
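The pre-upgrade step can be sketched as a small script. The domain name (bifoundation_domain), instance path, and backup locations below are assumed example values, not taken from this post:

```shell
#!/bin/sh
# Pre-upgrade sketch: stop BI components, then take cold backups.
# All paths are example values -- adjust for your installation.
MW_HOME=${MW_HOME:-/u01/app/oracle/middleware}
INVENTORY=${INVENTORY:-/u01/app/oraInventory}
BACKUP_DIR=${BACKUP_DIR:-/u01/backup}
DOMAIN=$MW_HOME/user_projects/domains/bifoundation_domain
STAMP=$(date +%Y%m%d)

# Stop OPMN components, managed server, AdminServer, then node manager.
# Guarded so the sketch is harmless on a host without this layout.
if [ -d "$DOMAIN" ]; then
  "$MW_HOME/instances/instance1/bin/opmnctl" stopall
  "$DOMAIN/bin/stopManagedWebLogic.sh" bi_server1
  "$DOMAIN/bin/stopWebLogic.sh"
fi

# Cold backup of the Middleware home and the central inventory.
if [ -d "$MW_HOME" ] && [ -d "$BACKUP_DIR" ]; then
  tar -czf "$BACKUP_DIR/mw_home_$STAMP.tgz" "$MW_HOME"
  tar -czf "$BACKUP_DIR/inventory_$STAMP.tgz" "$INVENTORY"
fi
```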

2. OBIEE S/W Upgrade

Download OBIEE 11.1.1.7.0 for the platform

Invoke runInstaller

Installation Type Screen: Choose Software Only Install

Installation Location Screen: The installer automatically selects the current MW and BI installation directories.

Click Install

3. Upgrade MDS and BIPLATFORM schemas using Patch Set Assistant (psa)

psa is available in $MW_HOME/oracle_common/bin

Invoke psa (./psa)

Select Component Screen: Choose only "Oracle Business Intelligence". Both the BIPLATFORM and MDS schemas will be selected by default

Prerequisites Screen: Select both prerequisite check boxes 1. backup completed and 2. DB version certified by FMW upgrade

MDS Schema Screen: Give DB connection information and DB user "SYS as sysdba" and press "Connect" button
Give MDS and BIPLATFORM username and password.

Examine Screen: The status of the examination process is shown on the next page. Both components should show the status "Succeeded".
Click the Upgrade button

4. Start nodemanager and AdminServer

Start the node manager and the Admin Server. This is mandatory because the configuration utility restarts both the Admin Server and the BI servers during the domain update.
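A minimal sketch of this step, assuming default script locations (the WL_HOME and domain paths are examples):

```shell
#!/bin/sh
# Sketch: start the node manager and the Admin Server in the background.
# The paths below are examples -- adjust for your installation.
WL_HOME=${WL_HOME:-/u01/app/oracle/middleware/wlserver_10.3}
DOMAIN=${DOMAIN:-/u01/app/oracle/middleware/user_projects/domains/bifoundation_domain}

if [ -d "$WL_HOME" ]; then
  nohup "$WL_HOME/server/bin/startNodeManager.sh" > nm.out 2>&1 &
fi
if [ -d "$DOMAIN" ]; then
  nohup "$DOMAIN/bin/startWebLogic.sh" > admin.out 2>&1 &
fi
```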


5. Domain Upgrade using config utility

Invoke configuration utility from $MW_HOME/Oracle_BI1/bin folder

./config.sh

Create, Scale out or Extend Screen: Choose Update BI Domain option
Provide the Adminserver hostname, port number and user details

Update BI Domain Details Screen: Verify that all the paths chosen by the installer are accurate.

Click Configure, and press the Finish button when done

The Configuration Assistant will restart the AdminServer and start the managed server and OPMN components.

Verify that all the URLs are reachable and all components are up.



Weblogic 10.3.5 to 10.3.6 in-place upgrade

Action:

Download and apply the upgrade patch

Steps

1. Download the upgrade patch from metalink. The patch number is 13529623

You can also search for the correct patch in metalink using the "Patches & Updates" tab

In the "Patch Search" box provide the following details

Oracle WebLogic Server as Product, 10.3.6 as Release, and Linux x86 as Platform (Linux 64-bit was my platform). Enter "Upgrade" in the Description field to reduce the number of search results.

2. Unzip the downloaded file.
The resulting file is wls1036_upgrade_generic.jar.

3. Backup the MW folder and Inventory directory
Shut down all the WLS services and back up the directories. It is always good to have a backup in place before a major upgrade.

4. Invoke the upgrade installer and follow the instructions in the installer
java -d64 -jar wls1036_upgrade_generic.jar

5. Choose the "Use an existing middleware home" option and pick the current MW Home location for upgrade. All the screens are self-explanatory.

6. Click Complete.
The files are now upgraded to the 10.3.6.0 maintenance level

7. Start up the WebLogic Admin Server
The startup log will show the new maintenance level of 10.3.6.0
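Steps 2 to 4 can be condensed into a short sketch. The patch zip file name here is an assumption based on the patch number, and the paths are examples:

```shell
#!/bin/sh
# Sketch of the WLS 10.3.5 -> 10.3.6 upgrade flow. Paths are examples,
# and the zip name is an assumed convention for patch 13529623.
MW_HOME=${MW_HOME:-/u01/app/oracle/middleware}
PATCH_ZIP=${PATCH_ZIP:-p13529623_1036_Generic.zip}
BACKUP_DIR=${BACKUP_DIR:-/u01/backup}

if [ -f "$PATCH_ZIP" ]; then
  unzip -o "$PATCH_ZIP"                        # yields wls1036_upgrade_generic.jar
fi
if [ -d "$MW_HOME" ] && [ -d "$BACKUP_DIR" ]; then
  tar -czf "$BACKUP_DIR/mw_home_pre1036.tgz" "$MW_HOME"   # cold backup first
fi
if [ -f wls1036_upgrade_generic.jar ]; then
  java -d64 -jar wls1036_upgrade_generic.jar   # launches the upgrade installer
fi
```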






Monday, December 16, 2013

Get user creation metadata and all roles/privileges granted to the user

Requirement:

Create a database user in another DB based on the user available in the source DB with the same object/DB level grants.

DB Version: 11.2.0.2.0


Steps

1. Find the User creation script from source database

Extract the DDL used for the user creation using dbms_metadata

SQL> select dbms_metadata.get_ddl('USER','MYUSER') from dual;

DBMS_METADATA.GET_DDL('USER','MYUSER')
---------------------------------------------------------

   CREATE USER "MYUSER" IDENTIFIED BY VALUES 'S:2526C5A32BD6A196B2A
2F2CC6F5E88D4A5932593709E4CC838BFDC036C27;C84965A9EC7311F4'
      DEFAULT TABLESPACE "USERS"  TEMPORARY TABLESPACE "TEMP"


2. Get system privileges granted to the user using dba_sys_privs

SQL> select PRIVILEGE from dba_sys_privs where GRANTEE='MYUSER';

PRIVILEGE
----------------------------------------
UNLIMITED TABLESPACE


3. Get the roles granted to the user using dba_role_privs

SQL> select GRANTED_ROLE from dba_role_privs where GRANTEE='MYUSER';

GRANTED_ROLE
------------------------------
CONNECT
DBA


4. Get all object grants of the user using dba_tab_privs

SQL> select OWNER,TABLE_NAME,PRIVILEGE from dba_tab_privs where GRANTEE='MYUSER';

no rows selected
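As an alternative sketch, DBMS_METADATA can also emit the grants themselves as executable DDL via GET_GRANTED_DDL, which avoids hand-assembling GRANT statements from the views above. SET LONG prevents truncated output; note that GET_GRANTED_DDL raises an error when the user has no grants of a given type:

```sql
SET LONG 100000 PAGESIZE 0 LINESIZE 200

-- User creation DDL
SELECT dbms_metadata.get_ddl('USER','MYUSER') FROM dual;

-- System privileges, role grants and object grants as executable DDL
SELECT dbms_metadata.get_granted_ddl('SYSTEM_GRANT','MYUSER') FROM dual;
SELECT dbms_metadata.get_granted_ddl('ROLE_GRANT','MYUSER') FROM dual;
SELECT dbms_metadata.get_granted_ddl('OBJECT_GRANT','MYUSER') FROM dual;
```

The spooled output can then be run as-is against the target database.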

Tuesday, November 19, 2013

TNS-01106: Listener using listener name has already been started

Scenario:

DB version: 11.2.0.3.0

The following error was found on trying to start a newly created listener in a shared DB server with multiple Oracle Homes.

TNS-01106: Listener using listener name  has already been started

Verification:

From the error message, the problem appears to be a duplicate listener name, but that was not the case. I tried giving the listener different names and still got the same error on startup.

Solution:

A closer analysis of the error showed that it can occur if another listener is already listening on one of the parameters in the ADDRESS entries specified. In my case, the port number I gave the new listener was already in use. Because the error normally expected for a port conflict is different (TNS-12542: TNS:address already in use), I wasted time focusing on the listener name.

I changed the TCP port number in the listener.ora file and successfully started the listener.
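When creating a listener on a shared server, it helps to confirm the intended port is free first. A small sketch (the port number 1522 is an example):

```shell
#!/bin/sh
# Check whether a TCP port is already bound before assigning it to a listener.
PORT=${PORT:-1522}   # example port, not from this post

if command -v ss >/dev/null 2>&1; then
  if ss -ltn 2>/dev/null | grep -q ":$PORT "; then
    echo "port $PORT is already in use -- pick another for the new listener"
  else
    echo "port $PORT appears free"
  fi
fi
```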


ORA-27046: file size is not a multiple of logical block size with SPFILE

Scenario:

Creation of pfile from the existing spfile. The following error stack was seen.

ERROR at line 1:
ORA-01565: error in identifying file '?/dbs/spfile@.ora'
ORA-27046: file size is not a multiple of logical block size
Additional information: 1

Reason:

The spfile is corrupted. The corruption was likely caused by manually editing the binary spfile. Because the spfile is a binary file, it cannot be repaired or modified by hand.

Solution:


1. Restore from backup (if we have a recent backup of the spfile, we can restore it).
2. Create a pfile from the alert log. The list of non-default parameters is printed to the alert log at startup time, and a pfile can be built from that.
3. Extract the parameters from the corrupted spfile using a utility like strings. We can even open the spfile with a tool like "vi" and copy the parameters out.
Eg: strings spfile$ORACLE_SID.ora > init$ORACLE_SID.ora

Then create spfile from the pfile.
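Once a readable pfile has been reconstructed (the path below is a hypothetical example), the spfile can be recreated from it:

```sql
-- Recreate the binary spfile from the reconstructed text pfile,
-- then restart the instance so it starts from the new spfile
CREATE SPFILE FROM PFILE='/tmp/initORCL.ora';
```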




Thursday, November 14, 2013

Enable archive log mode in RAC database

Case:

To enable archive log mode in an 11gR2 RAC database.


Steps:

Shut down all instances of the RAC database and then MOUNT a single node to switch it to ARCHIVELOG mode. It is recommended to have more than one archive destination.

It is better to keep archive logs on shared storage, because any node of the cluster can then be used to back them up. If we are using the FRA as the backup location, it makes sense to keep the files there.


ALTER SYSTEM SET log_archive_dest_1='location=/app/db_backup/archive/ORCL/' SCOPE=spfile; 
ALTER SYSTEM SET log_archive_format='arch_%t_%s_%r.arc' SCOPE=spfile;

Set cluster_database parameter to false since we have to mount the database in exclusive mode. 
ALTER SYSTEM SET cluster_database=FALSE SCOPE=spfile;

srvctl stop database -d ORCL

STARTUP MOUNT;

Since the datafiles and spfile are shared between all instances this operation only has to be done from a single node.

ALTER DATABASE ARCHIVELOG;
ALTER SYSTEM SET cluster_database=TRUE SCOPE=spfile;
SHUTDOWN IMMEDIATE;

srvctl start database -d ORCL
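After the final restart, the new log mode can be verified from any instance:

```sql
-- Should report ARCHIVELOG once the change has taken effect
SELECT log_mode FROM v$database;

-- Or, from SQL*Plus, show mode plus archive destination details
ARCHIVE LOG LIST
```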


Disabling:

Follow similar steps to disable archiving.

ALTER SYSTEM SET cluster_database=FALSE SCOPE=spfile;

srvctl stop database -d ORCL

STARTUP MOUNT;

alter database noarchivelog;

ALTER SYSTEM SET cluster_database=TRUE SCOPE=spfile;

SHUTDOWN IMMEDIATE;

srvctl start database -d ORCL

Tuesday, October 29, 2013

Command to find Oracle ADF/Jdeveloper version

Question:

How to find out the version of ADF or JDeveloper installed particularly after an upgrade of ADF/ADR

Weblogic version: 11g

Solution:


Navigate to MW_HOME/oracle_common/modules/oracle.adf.model_11.1.1  location

Use the command as follows from the shell/cmd

java -cp adfm.jar oracle.jbo.common.PrintVersion

We would get an output similar to the one below

BC4J Version is:  11.1.2.64.36

Another method:

1. Navigate to ADF Home folder (cd $MW_HOME/oracle_common) and search for adf-share-support.jar file.
  
   Eg: find . -name adf-share-support.jar 
       ./modules/oracle.adf.share_11.1.1/adf-share-support.jar

2. Move to the file's folder and copy the file to the user's Home folder.

   Eg: cp adf-share-support.jar $HOME/

3. Extract the contents of the jar file
   
   Eg: cd $HOME
        jar -xf adf-share-support.jar

   The above command extracts the jar file to a folder named META-INF


4. Move to META-INF and read the file MANIFEST.MF
   
   cat MANIFEST.MF

   Look for Oracle version and label in the file we opened
   Oracle-Version: 11.1.2.4.39.64.36.1

   Oracle-Label: JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1
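The extract-and-read steps can be collapsed into one command, since unzip can print a single jar member to stdout. A sketch, assuming unzip is available on the host:

```shell
#!/bin/sh
# Print the version lines from the jar's manifest without extracting it.
JAR=${JAR:-adf-share-support.jar}

if [ -f "$JAR" ] && command -v unzip >/dev/null 2>&1; then
  unzip -p "$JAR" META-INF/MANIFEST.MF | grep -E 'Oracle-(Version|Label)'
fi
```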


Friday, October 25, 2013

ORA-28007: the password cannot be reused

Scenario:

Trying to reset to the existing password throws following error.

ERROR at line 1:
ORA-28007: the password cannot be reused

Solution:

Check the user profile set for the user. Check for the existence of a password verify function in the profile.

Set the password_verify_function to NULL

Another reason could be that the PASSWORD_REUSE_MAX limit has been reached. If so, set it to a higher value or to UNLIMITED and reset the password as shown below.

SQL> alter profile default limit PASSWORD_REUSE_MAX unlimited;

SQL> alter user USERNAME identified by <PREV_PWD>;
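Before changing anything, it is worth checking which profile governs the user and what its password limits are. A sketch, where USERNAME and the DEFAULT profile are placeholders:

```sql
-- Which profile governs the user?
SELECT profile FROM dba_users WHERE username = 'USERNAME';

-- What are its password limits and verify function?
SELECT resource_name, limit
  FROM dba_profiles
 WHERE profile = 'DEFAULT'
   AND resource_name LIKE 'PASSWORD%';
```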




Wednesday, October 16, 2013

Step by step: Set up Transparent Data Encryption(TDE) in Oracle RAC database

Requirement:

To set up TDE for an eight node RAC database. DB version 11.2.0.2.0. Below given are the steps I used. 

Steps:


1. Create directory structure for wallet (Steps done as root user)

Oracle recommends placing the Oracle Wallet outside of the $ORACLE_BASE directory tree to avoid accidentally storing the wallet with the encrypted data on a backup tape. So we are going to store the wallet files under /etc as shown below. In my case database name is ORCL.

a. [root@node01 etc]# mkdir -pv ORACLE/WALLETS/ORCL

b. [root@node01 etc]# chown -R oracle:oinstall ORACLE

c. [root@node01 etc]# chmod -R 700 ORACLE


2. Configure sqlnet.ora file in first node (Step done as oracle user).

Edit sqlnet.ora and add following entry.


ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY=/etc/ORACLE/WALLETS/ORCL)))


3. Create the wallet

The following SQL command will initialize the wallet and add master encryption key. This creates a file by name ewallet.p12 under Wallet directory

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "<WalletPwd>";

System altered.


[oracle@node01 ORCL]$ pwd
/etc/ORACLE/WALLETS/ORCL
[oracle@node01 ORCL]$ ls -ltr
total 4
-rw-r--r-- 1 oracle dba 1573 Oct 10 22:34 ewallet.p12

The wallet should already be open at this point. If not, you can open it with the following SQL command.

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "<WalletPwd>";


4. Reduce permission on wallet file to 600

[oracle@node01 ORCL]$ chmod 600 ewallet.p12
[oracle@node01 ORCL]$ ls -ltr
total 4
-rw------- 1 oracle dba 1573 Oct 10 22:34 ewallet.p12



5. Enable Autologin. This will create cwallet.sso file in Wallet directory

orapki wallet create -wallet /etc/ORACLE/WALLETS/ORCL -auto_login


6. Make the wallet immutable, so that even root cannot delete it. Execute this as the root user.

Set the immutable bit with the chattr +i command as shown below.

[root@node01 ORCL]$ ls -ltr
total 8
-rw------- 1 oracle dba      1573 Oct 10 22:34 ewallet.p12
-rw------- 1 oracle oinstall 1651 Oct 10 22:39 cwallet.sso

[root@node01 ORCL]$ chattr +i ewallet.p12
[root@node01 ORCL]$ chattr +i cwallet.sso


7. Copy Wallet and sqlnet.ora to all the nodes of the RAC maintaining same directory structure and permissions.

Once TDE is enabled on the first instance, the wallet and the local sqlnet.ora file need to be copied to all other instances and manually opened for the master key to be loaded into each instance’s memory. Use a tool like "scp" to copy wallet files and sqlnet.ora to all nodes



8. Open wallet in all nodes

Wallet should be manually opened in all nodes so that master key gets loaded in to each of the instances' memory.  

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "<WalletPwd>";

System altered.

TDE is now configured and ready to use.
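The wallet state on each instance can be verified from SQL; on RAC, gv$encryption_wallet shows all instances at once:

```sql
-- WRL_PARAMETER shows the wallet directory; STATUS should be OPEN on every node
SELECT inst_id, wrl_parameter, status FROM gv$encryption_wallet;
```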

Note:


Wallets must be reopened after an instance restart and can be closed again to prevent access to encrypted columns.

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "<WalletPwd>";
ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

To reverse the immutability, use the -i option


chattr -i ewallet.p12
chattr -i cwallet.sso

Sunday, October 13, 2013

SOA Cluster Coherence Error: Detected another cluster senior, running on an incompatible protocol

Issue:

Platform: OEL 5
SOA Version: 11.1.1.5.0 for AIA

I had set up a SOA cluster with two managed servers running on the same host as part of a cluster.

On starting up the second managed server, it always failed with the following error stack.


<Oct 10, 2013 7:54:48 PM GMT> <Error> <Coherence> <BEA-000000> <2013-10-10 19:54:48.830/1032.427 Oracle Coherence GE 3.7.1.1 <Error> (thread=Cluster, member=2): Detected another cluster senior, running on an incompatible protocol at null manual intervention may be required>

Reason:

By default, coherence.clusterport is set to 9778, which seems to cause an issue with this version. Change it to any other valid/available port number and restart the service.

Solution:

Move to $DOMAIN_HOME/bin directory

Take a backup of setDomainEnv.sh file.

Open setDomainEnv.sh, search for clusterport, and locate the following line

EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dsoa.archives.dir=${SOA_ORACLE_HOME}/soa -Dsoa.oracle.home=${SOA_ORACLE_HOME} -Dsoa.instance.home=${DOMAIN_HOME} -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9777 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=${WL_HOME}/server/lib/DemoTrust.jks"

Change the port number in that line. By default it is 9778; change it to any other valid port number. In the example above I changed it to 9777.

Restart the SOA services.

Wednesday, October 9, 2013

AS10g Forms: The patch directory doesn't match the patch id

Scenario:

Applying a Forms Patch 17459191
Version: 10.1.2.3.0
Platform: Windows 32 Bit

During Patch Application the opatch errored out

Error:

The patch directory 17459191 doesn't match the patch id. 
ERROR: OPatch failed during pre-reqs check. 
OPatch returns with error code = 150 


Reason:

OPatch error code 150 is generic: it is thrown whenever the patching operation has not met all of its prerequisites.

In this case, two prerequisites were mentioned under the NOTES heading in the readme.txt

Solution:

Go through the readme.txt carefully and complete all the steps mentioned as prerequisites.

In my case, there were two prerequisites:

1. An older patch had to be applied first - Patch 9593176
2. The OUI had to be upgraded after applying patch 9593176.
If the OUI is not upgraded, another error appears: OPATCH_JAVA_ERROR=CHECKCONFLICT: OPATCH CANNOT PROCESS OVERLAY PATCHES BECAUSE OF NO OUI SUPPORT. PLEASE TAKE LATEST OUI 10.1 PATCHSET FROM "MY ORACLE SUPPORT" AND TRY AGAIN.
Follow Support Doc 1301320.1 for the proper upgrade.

Once both conditions were satisfied, I was able to apply the patch.

Tuesday, October 8, 2013

OBIEE startup error: Your Java heap size might be set too high.

Scenario:

Trying to startup OBIEE 11.1.1.5.0 in a Windows 2007 server which had low physical memory allocated.

Error:

The following error message was found in the startup prompt

Caused By: com.bea.xml.XmlException: class allocation, 106505772 loaded, 102M footprint in check_alloc (src/jvm/model/classload/classalloc.c:213) 21547 bytes requested.
Java heap 1024M reserved, 512M committed
Paged memory=1455520K/2147352575.
Your Java heap size might be set too high.

Reason:

The server's physical memory was exhausted as other applications were also running. 

Reducing the heap size values in the setOBIDomainEnv.cmd file did not help either.

Solution:

Increased the virtual memory of the machine, setting it manually to a higher range on the D: drive

Restarted the OBIEE service.

OBIEE 11.1.1.5.0: Could not create the Java virtual machine in Windows 2007

Scenario:

Starting up of OBIEE 11.1.1.5.0.
Server: Windows 2007 32 Bit

Received following error on startup

[ERROR][memory ] Could not commit 1048576KB heap memory at 0x10040000.
Could not create the Java virtual machine.

This means the system was unable to allocate 1024MB of memory for the JVM. 

Solution:

The Windows server had 4GB of physical memory, about 90% of which was used by other processes, so the system was unable to allocate the guaranteed memory for the JVM.

I reduced the heap memory settings in the environment file, bringing them down to 512M-1024M in setDomainEnv.cmd.

Restarted the services and this brought up weblogic.

Monday, October 7, 2013

Discgroup is not compatible for database usage while using 11g dbca with Custom Database template.

Scenario:

Creating an 8-node RAC database using DBCA on an Exadata machine.

DB Version: 11.2.0.3.0

Error:

Discgroup "+DATA_SLCC24" is not compatible for
database usage. Database compatible attribute of
diskgroup should be the same or lower than the database
compatible initialization parameter value.
Database compatible parameter is set to "11.2.0.0.0"
and the diskgroup database compatibility attribute is set
 to "11.2.0.2". 

Reason: 


The error denotes that the compatible parameter set in the database initialization parameter is lower than the compatible.rdbms set for the diskgroup in ASM. In this case the database compatible parameter was set to 11.2.0.0.0 and the diskgroup compatible.rdbms is set to 11.2.0.2.0.

We can verify the ASM compatible settings by issuing following query after connecting to the ASM instance.

select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

A database can use a diskgroup only if its compatible initialization parameter is equal to or greater than the diskgroup's compatible.rdbms attribute. So in this case we have to change the setting at the DB level.

DBCA uses built-in templates to create the DB. The templates are stored by default in the $ORACLE_HOME/assistants/dbca/templates folder.

We need to find out and edit the attribute in the right template and reinvoke dbca.

Solution:

Move to the $ORACLE_HOME/assistants/dbca/templates location and find the template you used. I had used the "Custom Database" option, whose template file is New_Database.dbt. Take a backup of that template for safety and open "New_Database.dbt" in the "vi" editor.

Search for the string compatible. Change its value to be equal to (or greater than) the corresponding ASM value. For example as shown below


   <initParam name="compatible" value="11.2.0.0.0"/>    => This was my original setting

I changed it to 11.2.0.2.0 and saved the file and quit


Exit from the dbca session if you have not already, and restart dbca. It should go through now.

Friday, September 27, 2013

ORA-04031 during IMPDP 11g

Scenario:

Received the following error while trying to import data using datapump.

ORA-39097: Data Pump job encountered unexpected error -4031
ORA-39065: unexpected master process exception in MAIN
ORA-04031: unable to allocate 2024 bytes of shared memory ("streams pool","unknown object","streams pool","kodpaih3 image")
Job "SYS"."SYS_IMPORT_FULL_01" stopped due to fatal error at 11:19:39

Version:

Oracle EE 11.2.0.2.0

Reason:

The ORA-4031 in the above error stack indicates a problem with the Streams pool. The database where the operation failed was set to use AMM, and the STREAMS_POOL_SIZE instance parameter was individually set to 0. Data Pump operations use the Oracle Streams pool, and its size should ideally grow automatically on demand. In this case that did not happen, resulting in the above error.

Solution:

Set STREAMS_POOL_SIZE > 0 to guarantee a minimum size for the streams pool when the DB is set to use AMM (or ASMM).

alter system set streams_pool_size=150m;

Restarted the IMPDP operation. This time the ORA-4031 errors did not appear.
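The current streams pool setting and allocation can be checked before and after the change:

```sql
-- Parameter value (0 means no guaranteed minimum under AMM/ASMM)
SHOW PARAMETER streams_pool_size

-- Actual current allocation inside the SGA
SELECT component, current_size/1024/1024 AS size_mb
  FROM v$sga_dynamic_components
 WHERE component = 'streams pool';
```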

 

Thursday, September 26, 2013

How to change the weblogic Admin server port number

Task:

To change the port number of weblogic Admin server. The port number was set to the default value of 7001.

Version:

Weblogic 12.1.2.0.0

Action:

The easiest method is to simply update the port number on the WebLogic console page and save the change.

Connect to the WebLogic console using the current port number

Eg: http://<hostname>:7001/console

Navigate to Servers => Click on the AdminServer(admin)
We are now on the Configuration tab of the AdminServer. Update the value in the Listen Port: field to the new value, for example 7010, and then click the Save button at the bottom of the page. That does the trick.

We can immediately verify the change by accessing the URL through the new port number. But the page would be accessible through both new and old port numbers until we restart the AdminServer. So better to restart the AdminServer so that the console URL is accessible only through the modified port number.

Alternative Approach:

Add the port number in the config.xml file.

Stop the weblogic services. Take a backup of config.xml file. If we are using the default port number of 7001, then its entry would not be there in the file. So we have to add the tags to reflect the new port number.

Add following two lines to the config.xml under <Server> <name>AdminServer</name> tag

<listen-port>7010</listen-port>
<listen-port-enabled>true</listen-port-enabled>

Restart the AdminServer. Now it would be running in new port


Wednesday, September 18, 2013

Command to find weblogic version

Command:

weblogic.version

Usage:

   1. Source the environment variables

       . $WL_HOME/server/bin/setWLSEnv.sh

   2. Issue weblogic.version command

       java weblogic.version

Output:

WebLogic Server 10.3.6.0.5 PSU Patch for BUG16619891 Tue Apr 23 13:24:29 IST 2013
WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050 

Use 'weblogic.version -verbose' to get subsystem information

Use 'weblogic.utils.Versions' to get version information for all modules


You can use "java weblogic.version -verbose" and "java weblogic.utils.Versions" to get more granular details.
In the above example the WebLogic version is 10.3.6.0.5, with the Jul 2013 PSU applied.

Change GLOBAL_NAME in oracle database

Scenario:


Wanted to change the global_name of a database, which was set to SID + domain name.

Version: 12.1.0.1.0 (Applicable to all versions)

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
ORCL.MYDOMAIN.COM

The official documentation says to use "ALTER DATABASE RENAME GLOBAL_NAME TO <NEW_NAME>;", which did not work in my case.

Solution:

Since GLOBAL_NAME is a table (the same one we query for the global name), we can update it to make the change.

SQL> desc global_name
 Name                                          Null?    Type
 ----------------------------------------- -------- ----------------------------
 GLOBAL_NAME                                        VARCHAR2(4000)

SQL> update global_name set global_name='ORCL';

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
ORCL

Tuesday, September 10, 2013

Linux-x86_64 Error: 12: Cannot allocate memory on RAC conversion using rconfig in NFS mount

Scenario:

A standalone database is being converted to RAC/ASM using the rconfig command, which internally uses RMAN for the conversion.
DB Version: 11.2.0.3.0

The DB conversion failed every time during the redo log addition phase.
The source database was on NFS shared storage, and the conversion target was an Exadata box.

Error:

ORA-00321: log 1 of thread 1, cannot update log file header
ORA-00312: online log 1 thread 1: '/redo_log_path/log3.dbf'
ORA-27072: File I/O error
Linux-x86_64 Error: 12: Cannot allocate memory


Verification:

Since the log files contained little beyond the error above, the issue was difficult to pin down. Because the DB instance creation succeeded on every run, the DB folder had to be physically removed from the ASM instance each time rconfig was rerun, as part of a trial-and-error approach.

Luckily we found a support document that closely resembled our issue.

Doc ID 1532488.1 describes failures of RMAN and EXPDP on NFS mounts when the Exadata 11.2.3.2.1 image is used.

Solution:

Applied patch 16432033 on all compute nodes.
Restarted the RAC conversion after restoring DB from a backup

The patch helped in fixing the error and RAC/ASM conversion was successful.

Note: The patch is not persistent across reboots. If the system is rebooted, the patch needs to be reapplied each time.




Thursday, August 29, 2013

ORA-25152: TEMPFILE cannot be dropped at this time during rconfig

Scenario:

Conversion of single instance database in shared NFS storage to RAC/ASM using "rconfig" command. Database version is 11.2.0.3.0

Received following error in the "Setting TEMP tablespace" step.

<ErrorDetails>
             oracle.sysman.assistants.rconfig.engine.StorageCreationException: o
racle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-25152: TEMPFILE cannot be dropped at this time
Operation Failed. Refer logs at /u01/app/oracle/cfgtoollogs/rconfig/rconfig_05_14_13_21_18_17.log for more details.
</ErrorDetails>

Verification:

The database had 4 temp files, of which 2 were successfully moved to ASM. The other two files had issues.

Normally this error is seen when trying to drop a temporary tablespace that is in use at the time the command executes.

In this case, during the RAC conversion, there was no practical way to identify and kill the process using the tempfile.

Solution:

a) Drop the tempfiles before trying rconfig. 
b) Then restart the RAC conversion using rconfig. 
c) After RAC conversion add the temp files which was dropped earlier.

Use the following command to drop temp files.

alter database tempfile '<PATH>/temp02.dbf' drop including datafiles;


I dropped both problematic tempfiles and then removed the files physically from ASM. I then reran the rconfig command and the conversion succeeded.

After RAC conversion add both tempfiles, which were dropped earlier.

alter tablespace temp2 add tempfile size 524288000; 

Errors During IMPDP: ORA-39070: Unable to open the log file

Scenario:

A user trying to import a dump file into a schema in Oracle 11.2.0.1.0 receives the following error

Command used:

impdp system/<PWD> DIRECTORY=DUMP_DIR DUMPFILE=TEST1.DMP REMAP_SCHEMA=USER2:USER1 REMAP_TABLESPACE=USERS:test table_exists_action=replace LOGFILE=Test1_IMPORT.log 

ORA-39002: invalid operation 
ORA-39070: Unable to open the log file. 
ORA-29283: invalid file operation 
ORA-06512: at "SYS.UTL_FILE", line 536 
ORA-29283: invalid file operation 

Verification:

This is essentially a permission issue that prevents the IMPDP command from completing its work.

We need to complete the following three steps to identify where the permission problem lies.


1. Directory name verification: Confirm from the dba_directories view that the dump directory is physically present in the file system and that the path name and case match exactly.
2. DB user permissions: The DB user performing the import is SYSTEM, a DBA user who should have the required permissions. All permissions were even granted again.
3. Oracle user permissions: Check the oracle user's (DB S/W owner) permissions on the dump directory. Try to create a file in the dump directory as the "oracle" user.

Reason:

The "oracle" user (DB S/W owner) did not have read/write permissions on the physical directory where dump file was placed (directory specified in IMPDP command)

Solution:

Grant read and write permission on the parent directory and its subdirectories to "oracle" user and restart the import.
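Step 3 of the verification can be scripted as below; the dump path is an example, and the script should be run as the "oracle" user:

```shell
#!/bin/sh
# Run as the "oracle" user: verify the dump directory is readable and writable.
DUMP_DIR=${DUMP_DIR:-/u01/dumps}   # example path, not from this post

if [ -d "$DUMP_DIR" ]; then
  ls -ld "$DUMP_DIR"
  if touch "$DUMP_DIR/.perm_test" 2>/dev/null; then
    rm -f "$DUMP_DIR/.perm_test"
    echo "oracle user can write to $DUMP_DIR"
  else
    echo "oracle user CANNOT write to $DUMP_DIR -- fix permissions"
  fi
fi
```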

Friday, August 16, 2013

ORA-39127: unexpected error from call to "EXFSYS"."DBMS_EXPFIL_DEPASEXP" during datapump export

Scenario:

A full database export, using both the conventional export utility and Data Pump, produced many errors and warnings in the log files.

Conventional Export log


EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 14:
PLS-00201: identifier 'EXFSYS.DBMS_EXPFIL_DEPASEXP' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
EXP-00083: The previous problem occurred when calling EXFSYS.DBMS_EXPFIL_DEPASEXP.schema_info_exp


Datapump Export log


ORA-39127: unexpected error from call to "EXFSYS"."DBMS_EXPFIL_DEPASEXP"."SYSTEM_INFO_EXP"
ORA-44002: invalid object name
ORA-06512: at "SYS.DBMS_ASSERT", line 316
ORA-06512: at "SYS.DBMS_METADATA", line 9263
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
ORA-39127: unexpected error from call to "EXFSYS"."DBMS_EXPFIL_DEPASEXP"."SYSTEM_INFO_EXP"
ORA-44002: invalid object name
ORA-06512: at "SYS.DBMS_ASSERT", line 316


Reason: 


The EXFSYS schema is missing from the database. To install the EXFSYS schema, execute the script "catexf.sql" as the SYS user.

Solution:

Connect as SYSDBA and execute


@?/rdbms/admin/catexf.sql


Restart the export now. The errors should no longer appear.
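Whether the EXFSYS schema is present can be confirmed before and after running the script:

```sql
-- Returns a row once catexf.sql has installed the EXFSYS schema
SELECT username, account_status FROM dba_users WHERE username = 'EXFSYS';
```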

Monday, August 12, 2013

Interim Patch is holding the lock from this patch

Scenario:

Applying a forms patch on 10g AS. On running "opatch apply" the patching has failed due to

Interim Patch <PatchNo> is holding the lock from this patch <PatchNo>,
probably due to previous unsuccessful operation

Reason:

OPatch creates a file named patch_locked under the %ORACLE_HOME%\.patch_storage directory while applying a patch. If that patch application was interrupted or canceled, or the patch was not properly rolled back, the patch_locked file may not be removed. Because of this, a subsequent opatch run finds this file and fails.

Solution:

Delete or rename the patch_locked file. 
Reapply the patch using opatch utility.

Tuesday, August 6, 2013

AIA deployment Exception: ORA-01882: timezone region not found


Scenario:

AIA Foundation Pack installation on top of SOA 11.1.1.5.0 in a Windows server is failing during its deployment phase.

Error:

JBO-29000: Unexpected exception caught: java.sql.SQLDataException, msg=ORA-01882: timezone region not found

     [wlst]   File "D:\Middleware_App\FMW\AIA_App\Infrastructure\Install\AID\lib\py\deployApplication.py", line 49, in ?
     [wlst]   File "<iostream>", line 376, in activate
     [wlst]   File "<iostream>", line 1848, in raiseWLSTException
     [wlst] WLSTException: Error occured while performing activate : Error while Activating changes. : [HTTP:101216]Servlet: "HttpSoapService" failed to preload on startup in Web application: "AIAValidationSystemServlet".
     [wlst] oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.sql.SQLDataException, msg=ORA-01882: timezone region not found
     [wlst] at oracle.jbo.server.OracleSQLBuilderImpl.setSessionTimeZone(OracleSQLBuilderImpl.java:5541)


Reason: 

The timezone picked up from the AIA host is not a zone name the AIA database recognizes, and no explicit timezone is set on the SOA managed server to override it.


Troubleshooting:

1. Verified that no Timezone is explicitly set for the SOA managed server

To verify => Navigate to WebLogic console -> Environment, Servers, soa_server1.

Check the Server Start tab, Arguments field.

In my case it was blank

2. Check the AIA database for current settings

I executed the following SQL from the AIA server against the AIA database:

SQL> select dbtimezone from dual;

DBTIME
------
+00:00

SQL> select sessiontimezone from dual;

SESSIONTIMEZONE
--------------------------------------------------------
-05:00

3. Check the Timezone set for the server.

It was set to "Central Time (US and Canada)".


Solution:

From the database, I queried for candidate timezone names containing "Central" (filtering on that string since the OS-level setting was Central Time):

select * from v$timezone_names where tzname like '%entral%';

From the list, I decided to use "Canada/Central" as the data center hosting the server was in Canada


Then in the SOA_SERVER1 Arguments field (WebLogic console -> Environment, Servers, soa_server1 -> Server Start), I included the following line and saved.

-Duser.timezone=Canada/Central

I then restarted the SOA_SERVER1 managed server so the new argument took effect.

Re-initiated the AIA installation after cleaning up the previous failed installation. This time it was successful.
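As a quick sanity check before setting -Duser.timezone, you can confirm that the zone name resolves on the host. This is a rough sketch; the /usr/share/zoneinfo path is the usual Linux location and may differ on other platforms:

```shell
# Verify "Canada/Central" is a zone this host can resolve; the JVM and the
# database both need to recognize whatever name is passed to -Duser.timezone.
zone="Canada/Central"
zone_abbrev=$(TZ="$zone" date +%Z)    # e.g. CST or CDT when tzdata is present
if [ -e "/usr/share/zoneinfo/$zone" ]; then
  echo "zone $zone resolves to $zone_abbrev"
else
  echo "zone $zone not found in /usr/share/zoneinfo on this host"
fi
```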


Wednesday, July 31, 2013

Unable to set the activation state to true for the application JDBC Data Source-rac8

Scenario:

One of the managed servers went into ADMIN state whenever it was started. From the log files it was clear that this was due to corruption or a wrong configuration in one of the eight data sources.

"<Jul 31, 2013 6:30:29 AM PDT> <Error> <Deployer> <BEA-149231> <Unable to set the activation state to true for the application 'ManServer1 JDBC Data Source-rac8'.
weblogic.application.ModuleException: at weblogic.jdbc.module.JDBCModule.activate(JDBCModule.java:388) at weblogic.application.internal.flow.ModuleListenerInvoker.activate(ModuleListenerInvoker.java:227) at weblogic.application.internal.flow.DeploymentCallbackFlow$2.next(DeploymentCallbackFlow.java:541) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at weblogic.application.internal.flow.DeploymentCallbackFlow.activate(DeploymentCallbackFlow.java:175)
        Truncated. see log file for complete stacktrace
Caused By: weblogic.common.ResourceException: Failed to bind remote object (ClusterableRemoteRef(7480293907351086147S:host02.mydomain:base_domain:ManServ1 null)/297    [weblogic.jdbc.common.internal.RemoteDataSource]) to replica aware stub at ManServ1-rac7(ClusterableRemoteRef(7480293907351086147S:host02.mydomain:base_domain:ManServ1 [7480293907351086147S:host02.mydomain:bas_domain:ManServ1/296])/296        [weblogic.jdbc.common.internal.RemoteDataSource])"


Reason:

A permission issue with the JDBC configuration file for one of the data sources. During startup, WebLogic was unable to read the corresponding descriptor under $DOMAIN_HOME/config/jdbc, and the data source ended up bound to a different JNDI name (visible in the WebLogic console under Services => Data Sources => check the failing data source and its JNDI name).


Solution:


Update the JNDI Name in the console and activate changes.

Then change the ownership of the corresponding .xml file in $DOMAIN_HOME/config/jdbc to the WLS S/W owner.

Stop the managed server that is in ADMIN state and restart it.

It should return to RUNNING state.
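A quick way to spot this class of problem is to check that every JDBC descriptor is readable by the account running the server. The helper below is a sketch of my own (the function name, the scratch domain directory, and the descriptor filename are illustrative, not WebLogic tooling):

```shell
# check_jdbc_configs: flag JDBC descriptors the current user cannot read.
check_jdbc_configs() {
  for f in "$1"/config/jdbc/*.xml; do
    [ -e "$f" ] || continue
    if [ -r "$f" ]; then
      echo "OK      $f"
    else
      echo "NOREAD  $f"
    fi
  done
}

# Demo against a scratch directory standing in for DOMAIN_HOME
demo_domain=$(mktemp -d)
mkdir -p "$demo_domain/config/jdbc"
touch "$demo_domain/config/jdbc/ManServer1-jdbc-rac8.xml"   # hypothetical name
check_jdbc_configs "$demo_domain"
```

Any descriptor flagged NOREAD when run as the software owner is a candidate for the ownership fix above.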


Unable to start the Weblogic AdminServer 11G: Server subsystem failed. Reason: java.lang.NullPointerException

Scenario:

Users were unable to start the AdminServer (WebLogic 10.3.6). The startup failed with the following error:

####<Jul 31, 2013 4:47:04 AM PDT> <Critical> <WebLogicServer> <host02> <AdminServer> <main> <<WLS Kernel>> <> <> <1375271224766> <BEA-000386> <Server subsystem failed. Reason: java.lang.NullPointerException 
java.lang.NullPointerException at weblogic.management.internal.PendingDirectoryManager.find(PendingDirectoryManager.java:571) at weblogic.management.internal.PendingDirectoryManager.find(PendingDirectoryManager.java:556) at weblogic.management.internal.PendingDirectoryManager.find(PendingDirectoryManager.java:575)  at weblogic.management.internal.PendingDirectoryManager.find(PendingDirectoryManager.java:556) at weblogic.management.internal.PendingDirectoryManager.getAllFiles(PendingDirectoryManager.java:413) at weblogic.management.provider.internal.EditAccessImpl.<init>(EditAccessImpl.java:165)

Reason:

A user stopped and restarted the AdminServer as the "root" user instead of the software owner (which was oracle). He then also created data sources for the application from that root-owned AdminServer process.

Impact:

The action changed the ownership and permissions of many of the domain's folders to root, which prevented the AdminServer from coming up.

I tried a few workarounds, such as renaming the AdminServer folder and starting up again, removing the replicas.prop file under the AdminServer/data/ldap/conf directory, and renaming the WLS_DIAGNOSTICS000000.DAT file under AdminServer/data/store/diagnostics. None of them worked in this case.

Solution:

Check the ownership of the "pending" folder under $DOMAIN_HOME. If it is owned by root, change it back to oracle (or whichever account owns the WLS software).

Also check whether there are any files under pending; if so, they can be renamed as well.

chown -R oracle:oinstall pending

This did the trick. After this I was able to start up the AdminServer.
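The ownership check can be sketched with find's `! -user` test. The helper name and the scratch directory are my own for illustration; on the broken domain this would list the root-owned files that need `chown -R oracle:oinstall`:

```shell
# report_foreign_owners: list everything under a directory that is NOT
# owned by the expected account (e.g. the WLS software owner "oracle").
report_foreign_owners() {
  find "$1" ! -user "$2" -print
}

# Demo: a scratch "domain" created by the current user has no foreign files
demo_domain=$(mktemp -d)
mkdir -p "$demo_domain/pending"
foreign=$(report_foreign_owners "$demo_domain" "$(id -un)")
[ -z "$foreign" ] && echo "all files owned by $(id -un)"
```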