Copy permissions

Sometimes – especially on RAC systems – you need to synchronize the permissions of an ORACLE_HOME on one node with the permissions on another node.


Today I had the problem that after a patch all permissions were garbled. In order to restore service I had to fix them. The obvious approach is to copy the permissions from a working node, and here is an easy way to do it:

find /data/oragrid/product/ -printf 'chown %U.%G %p\n' > /tmp/
find /data/oragrid/product/ -type d -printf "chmod %m %p \n" > /tmp/
find /data/oragrid/product/ -type f -printf "chmod %m %p \n" > /tmp/

The generated script for the file permissions looks like this:

chmod 775 /data/oragrid/product/
chmod 755 /data/oragrid/product/
chmod 755 /data/oragrid/product/
chmod 755 /data/oragrid/product/
chmod 755 /data/oragrid/product/

and for the ownership:

chown 0.30275 /data/oragrid/product/
chown 20341.30275 /data/oragrid/product/
chown 20341.30275 /data/oragrid/product/
chown 20341.30275 /data/oragrid/product/
chown 20341.30275 /data/oragrid/product/

Just copy the scripts to the affected node, execute them in any order (a minimal end-to-end sketch follows below), and you are done.
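
A minimal end-to-end sketch, assuming hypothetical output file names under /tmp and a placeholder host name (brokennode) for the node to be fixed:

# On the healthy node: generate the three fix-up scripts
find /data/oragrid/product/ -printf 'chown %U.%G %p\n'      > /tmp/fix_ownership.sh
find /data/oragrid/product/ -type d -printf 'chmod %m %p\n' > /tmp/fix_dir_perms.sh
find /data/oragrid/product/ -type f -printf 'chmod %m %p\n' > /tmp/fix_file_perms.sh

# Copy the scripts over and run them as root on the broken node
scp /tmp/fix_*.sh brokennode:/tmp/
ssh root@brokennode 'sh /tmp/fix_ownership.sh; sh /tmp/fix_dir_perms.sh; sh /tmp/fix_file_perms.sh'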

Recompile database objects (utlrp) fails with: ORA-04045: errors during recompilation/revalidation of SYS.DBMS_REGISTRY_SYS

I just discovered another flaw when the workaround for CVE-2012-3132 is in place. If you try to recompile invalid database objects, it will fail with:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp
SELECT dbms_registry_sys.time_stamp('utlrp_bgn') as timestamp from dual
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of SYS.DBMS_REGISTRY_SYS
ORA-04067: not executed, package body "SYS.NAME_SECURITY" does not exist


The solution for this is:

Either disable the trigger itself (ALTER TRIGGER sys.NAMECHECK_BEFORE_DDL_DB_TRG DISABLE;) or disable the execution of all system triggers (ALTER SYSTEM SET "_system_trig_enabled" = FALSE;).
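
A minimal sketch of the first variant, run as SYSDBA; it assumes you want to re-enable the trigger once the recompilation is through (adjust to your own policy for the CVE-2012-3132 workaround):

# Temporarily disable the workaround trigger, recompile, then re-enable it
sqlplus -s / as sysdba <<'EOF'
alter trigger sys.NAMECHECK_BEFORE_DDL_DB_TRG disable;
@?/rdbms/admin/utlrp
alter trigger sys.NAMECHECK_BEFORE_DDL_DB_TRG enable;
EOF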

Quick & Dirty: AWR Query for Tablespace Growth

The query:

select
  tsu.rtime as time,
  sn.snap_id,
  ts.name as tablespace_name,
  round(tablespace_size*tsb.block_size/1024/1024,0) SIZE_MB,
  round(tablespace_maxsize*tsb.block_size/1024/1024,0) MAXSIZE_MB,
  round(tablespace_usedsize*tsb.block_size/1024/1024,0) USEDSIZE_MB
from
  dba_hist_tbspc_space_usage tsu,
  v$tablespace ts,
  dba_hist_snapshot sn,
  dba_tablespaces tsb
where
  tsu.tablespace_id = ts.ts#
  and ts.name = tsb.tablespace_name
  and sn.snap_id = tsu.snap_id
order by
  sn.snap_id desc, tablespace_name;


Sample output:

07/01/2013 11:00:30    644    SYSAUX    64000     65536        2154
07/01/2013 11:00:30    644    SYSTEM    6144      8192         2884
07/01/2013 11:00:30    644    TOOLS     100       65536        1
07/01/2013 11:00:30    644    UNDOTBS1  65535     65535        54
07/01/2013 11:00:30    644    USERS     8         8            7
07/01/2013 10:00:28    643    SYSAUX    64000     65536        2154
07/01/2013 10:00:28    643    SYSTEM    6144      8192         2884
07/01/2013 10:00:28    643    TOOLS     100       65536        1
07/01/2013 10:00:28    643    UNDOTBS1  65535     65535        58
07/01/2013 10:00:28    643    USERS     8         8            7
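
If you only need the history of a single tablespace, the same query can simply be filtered on the tablespace name. A minimal sketch run from the shell (the tablespace name SYSAUX is just an example):

sqlplus -s / as sysdba <<'EOF'
set pagesize 100 linesize 200
-- same query as above, restricted to one tablespace
select tsu.rtime as time, sn.snap_id,
       round(tablespace_usedsize*tsb.block_size/1024/1024,0) usedsize_mb
from   dba_hist_tbspc_space_usage tsu, v$tablespace ts,
       dba_hist_snapshot sn, dba_tablespaces tsb
where  tsu.tablespace_id = ts.ts#
and    ts.name = tsb.tablespace_name
and    sn.snap_id = tsu.snap_id
and    ts.name = 'SYSAUX'
order by sn.snap_id desc;
EOF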

Oracle 12c Release 1: New Features in a Nutshell

Oracle 12c Release 1 has been released!


Yes, the new generation of Oracle's database platform is finally here. You can download it from Oracle.


Now that 12c Release 1 has been released, I am freed from the NDA and allowed to share my information on the most recent Oracle database version.

The following slides are from a presentation I gave at the 2013 ‘Frankfurter Datenbanktage’ in Germany. They summarize all the new features, compiled from the sources available from Oracle.

You can download the slides here.

How to solve ‘ORA-00600: internal error code, arguments: [18062]’

A few months back I accidentally set my global database name to NULL. As a result the database crashed with “ORA-00600: internal error code, arguments: [18062]”:

SQL> update global_name set global_name=null;
SQL> commit;
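
As an aside, the supported way to change the global name is ALTER DATABASE RENAME GLOBAL_NAME, which only accepts a proper identifier and therefore cannot set it to NULL. A minimal sketch (the database and domain names are placeholders):

sqlplus -s / as sysdba <<'EOF'
-- supported way to change the global database name (ORADB.EXAMPLE.COM is a placeholder)
alter database rename global_name to ORADB.EXAMPLE.COM;
select * from global_name;
EOF

The alert log from the crash looked like this: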


Starting background process QMNC
Fri Oct 21 21:38:12 2011
QMNC started with pid=68, OS id=26764
Errors in file /data/oracle/ORADB/admin/diag/rdbms/ORADB/ORADB2/trace/ORADB2_ora_22638.trc  (incident=26069):
ORA-00600: internal error code, arguments: [18062], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /data/oracle/ORADB/admin/diag/rdbms/ORADB/ORADB2/incident/incdir_26069/ORADB2_ora_22638_i26069.trc
Fri Oct 21 21:38:14 2011
db_recovery_file_dest_size of 20000 MB is 71.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /data/oracle/ORADB/admin/diag/rdbms/ORADB/ORADB2/trace/ORADB2_ora_22638.trc:
ORA-00600: internal error code, arguments: [18062], [], [], [], [], [], [], [], [], [], [], []
Errors in file /data/oracle/ORADB/admin/diag/rdbms/ORADB/ORADB2/trace/ORADB2_ora_22638.trc:
ORA-00600: internal error code, arguments: [18062], [], [], [], [], [], [], [], [], [], [], []
Error 600 happened during db open, shutting down database
USER (ospid: 22638): terminating the instance due to error 600
Instance terminated by USER, pid = 22638
ORA-1092 signalled during: alter database open...
opiodr aborting process unknown ospid (22638) as a result of ORA-1092
Fri Oct 21 21:38:17 2011
ORA-1092 : opitsk aborting process


According to the Metalink articles your database is toast – you have to restore it from a backup. But what if there is no backup and you are not that familiar with BBED (the Oracle block editor)?

Here is how to solve the issue:

Continue reading How to solve ‘ORA-00600: internal error code, arguments: [18062]’

/SYS/NEM0 or /SYS/NEM1 FAULTED after upgrade to Exadata version

During an Exadata upgrade at a customer site, the NEM module suddenly faulted after the compute node server upgrade had completed.

Checking the Hardware and Firmware compatibility yielded a message similar to:

Firmware not supported. Required Version: NEM

To check the current firmware of the NEM module, do the following:

  1. Connect to the ILOM via SSH.
  2. cd /SYS/NEM0   (or /SYS/NEM1)
  3. ls

The ‘ls’ command produces output like this:

 type = Network Express Module
 ipmi_name = NEM0
 fru_manufacturer = FOXCONN
 fru_version = FW    <=========== check the firmware version
 fru_part_number = 511-1056-05
 fru_serial_number = 0226LHF-1031AC00N7
 fault_state = OK
 load_uri = (none)
 clear_fault_action = (none)

If the version shown is LOWER than the required one, load the updated firmware ROM from the cell image file (a quick way to re-check the version afterwards is sketched below the steps):

  1. Download the cell image file.
  2. Make the db_patch_11. file available from a URL (for example
  3. Ensure that the grid infrastructure is shut down.
  4. Use SSH to connect to the ILOM as the root user.
  5. For each NEMx, where x is 0 and 1, run:
       cd /SYS/NEMx
       load -source
  6. Power cycle the database server.
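
Once the firmware has been loaded and the servers power cycled, the new version can be re-checked from an admin host. A minimal sketch, assuming the ILOM host name is a placeholder and that your ILOM accepts a single command over SSH:

ssh root@exadata01db01-ilom 'ls /SYS/NEM0' | grep fru_version   # re-check NEM0
ssh root@exadata01db01-ilom 'ls /SYS/NEM1' | grep fru_version   # re-check NEM1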

Oracle Enterprise Linux 5.8: Multipathed disk partitions are not discovered properly

I just came across a situation where multipathed devices were not discovered correctly. The device itself was discovered, but the disk partitions were missing.

It turned out there are two bugs in OEL 5.8. The first one is related to multipath itself and can be fixed like this:

ls -l /sbin/mu*
 -rwxr-xr-x 1 root root     0 Jun 20 11:01 /sbin/multipath
 -rwxr-xr-x 1 root root 75464 Jul  9  2011 /sbin/multipathd
ln -s /sbin/multipath /sbin/multipath.static
ls -l /sbin/mu*
 -rwxr-xr-x 1 root root     0 Jun 20 11:01 /sbin/multipath
 -rwxr-xr-x 1 root root 75464 Jul  9  2011 /sbin/multipathd
 lrwxrwxrwx 1 root root    15 Jun 20 11:02 /sbin/multipath.static -> /sbin/multipath
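
With the symlink in place it may be possible to rebuild the partition mappings straight away; a minimal sketch, assuming mpath0 is a placeholder for one of the affected multipath maps:

multipath -F                    # flush the existing (unused) multipath maps
multipath -v2                   # rescan the paths and rebuild the maps
kpartx -a /dev/mapper/mpath0    # create the partition device nodes for one map
ls -l /dev/mapper/mpath0p*      # the partitions should now be visible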

The second issue arises with a huge number of LUNs (usually > 100): the HAL daemon sometimes does not have enough time to detect all the disks. As a workaround, create the following file with the following contents:

 cat /etc/sysconfig/haldaemon

Once done, reboot the server.

ORA-07217: sltln: environment variable cannot be evaluated when backing up with RMAN

When setting up an RMAN backup to tape I encountered the following issue on an Oracle 9.2 database (not sure if it applies to older database versions; I'd say no…):

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
ORA-07217: sltln: environment variable cannot be evaluated.

It turned out the script used to start the RMAN backup was the problem:

connect target /
connect catalog rman/rman@catalog
run {
backup database;
}

After changing the target connection line to include the username, password, and TNS connect string, the backup ran fine:

connect target sys/sys@database
connect catalog rman/rman@catalog
run {
backup database;
}
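
For completeness, here is a sketch of how such a script can be driven non-interactively from the shell with an explicit tape channel; the channel name and the omitted media-manager parameters are placeholders that depend on your MML setup:

rman <<'EOF'
connect target sys/sys@database
connect catalog rman/rman@catalog
run {
  # explicit tape channel; MML PARMS omitted here
  allocate channel t1 device type sbt_tape;
  backup database;
  release channel t1;
}
EOF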



Exadata: How to rename grid disks


Just a short how-to on renaming grid disks (keeping their size). In the example below the grid disk name contained a typo, ‘FALSH02’, and had to be renamed to ‘FLASH02’:

dcli -c exadata01cel01 -l root "cellcli -e drop griddisk ALL prefix=FALSH02 force"
dcli -c exadata01cel01 -l root "cellcli -e creategriddisk ALL HARDDISK prefix=FLASH02"

Sample output:

[root@exadata01db01 ~]# dcli -c exadata01cel01 -l root "cellcli
-e drop griddisk ALL prefix=FALSH02 force"

exadata01cel01: GridDisk FALSH02_CD_00_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_01_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_02_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_03_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_04_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_05_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_06_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_07_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_08_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_09_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_10_exadata01cel01 successfully dropped
exadata01cel01: GridDisk FALSH02_CD_11_exadata01cel01 successfully dropped

[root@exadata01db01 ~]# dcli -c exadata01cel01 -l root "cellcli
-e create griddisk ALL HARDDISK prefix=FLASH02"

exadata01cel01: GridDisk FLASH02_CD_00_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_01_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_02_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_03_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_04_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_05_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_06_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_07_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_08_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_09_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_10_exadata01cel01 successfully created
exadata01cel01: GridDisk FLASH02_CD_11_exadata01cel01 successfully created
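
Afterwards it is worth double checking that the re-created grid disks came back with the expected names and sizes. A minimal sketch using the same dcli/cellcli pattern as above:

# Verify names, sizes, and status of the re-created grid disks
dcli -c exadata01cel01 -l root "cellcli -e list griddisk attributes name,size,status" | grep FLASH02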