Archive

Archive for September, 2009

Oracle Dataguard – automatic archive log file management on primary site

September 23rd, 2009

System Environment

Suppose you have an Oracle Data Guard environment with one primary database and one physical standby database running.

You perform online backups with RMAN from the PRIMARY database. Your users complain about slowness during the night, and you notice the slowness correlates with the time the online backup runs. Thanks to the physical standby database you can offload your backups to the standby database. This short guide shows how to do that.

First of all, the ability to offload backups from the primary to the standby is a major plus of physical standby configurations. Note that this is not possible with logical standby configurations.

Furthermore, offloading to the standby database requires more or less separated storage (separate storage boxes, separate disks, separate ports, …). Generally speaking: the more you separate primary from standby storage, the better from a performance point of view.

Implementation on primary site

Database requirements

In order to use automatic deletion of archive logs on the primary site after they have been applied on the standby database, you need to enable the FRA (flash recovery area) on the primary database by setting the following two parameters:

db_recovery_file_dest='<path>'
db_recovery_file_dest_size=xyG

The value for db_recovery_file_dest must be a valid path, and db_recovery_file_dest_size specifies the total size of the FRA in GB (suffix "G"), MB (suffix "M") and so on.

Note that the FRA will fill up to this value before Oracle starts deleting archive logs already applied on the standby. So if you configure the FRA with a size of 200 GB you will end up with 200 GB of archive logs stored in the FRA.

You also need to unset the parameter log_archive_dest_n if it points to a local directory. Note that Data Guard uses a log_archive_dest_n parameter for specifying the remote archive log destination.
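
A minimal sketch of the corresponding SQL (path, size and destination number are just examples; adapt them to your system):

alter system set db_recovery_file_dest_size=200G scope=both;
alter system set db_recovery_file_dest='/u01/fra' scope=both;
-- only needed if log_archive_dest_1 pointed to a local directory:
alter system reset log_archive_dest_1 scope=spfile sid='*';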

Once you have set everything you can check it with "archive log list" from SQL*Plus. The output should look like this:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     62686
Next log sequence to archive   62695
Current log sequence           62695

If you have configured the environment correctly the archive destination should show up as “USE_DB_RECOVERY_FILE_DEST”.

Backup on primary site

On the primary site all you need to back up is the current control file. For use with EMC NetWorker I use the following script. If you back up with different backup software you just need to alter the SBT_TAPE configuration according to its documentation, or back up the control file to disk rather than to tape.

connect target sys/password@primary;
connect catalog rman_catalog/catalog_password@catalogd;
run {
 backup
 (current controlfile format 'ctrl_PRIMARY_s%s_p%p');
 sql 'alter database backup controlfile to trace';
 }
run {
copy current controlfile to '/u01/sicherung/controlfiles/ctl_PRIMARY.ctl';
}

The script shown above is just an example. You need to check and alter it to suit your needs. As you may have guessed, the database in this example is named "PRIMARY"; the standby is named "STANDBY".

You can see I am quite paranoid about control file backups: I back them up to tape, to trace in a readable format, and directly to disk as a copy.

In restore scenarios this enables:

  • restoring the control file with RMAN
  • re-creating the control file from the dumped trace file
  • copying the control file from /u01/sicherung/controlfiles/ctl_PRIMARY.ctl to the destinations named in the database parameter "control_files"
    (Note: The file created with "copy current controlfile" is not wrapped in an RMAN stream… therefore you can just copy it with cp, for instance)

Implementation on standby site

Required INIT.ORA parameters

If you use Data Guard in Maximum Performance or Maximum Availability mode you need to set

_log_deletion_policy='ALL'

in your standby database init.ora. This is a known problem documented in Metalink Note 331924.1.

Required RMAN configuration

In order to enable automatic deletion on the primary site you need to configure RMAN on the standby site as follows:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

This tells Oracle to make archive logs on the primary site eligible for deletion after they have been applied on the standby site.
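
You can verify the setting afterwards from RMAN while connected to the standby as target:

RMAN> SHOW ARCHIVELOG DELETION POLICY;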

Backup scripts

Full backup

The following script shows how to perform a full backup with RMAN on the STANDBY database. Primary and standby databases share the same recovery catalog (because they are basically the same database)!

My recommendation is to back up your database fully every day. With large databases a full backup might take longer than 24 hours. In such cases you can do daily incremental and weekly full backups.

connect target sys/password@standby;
connect catalog rman_catalog/catalog_password@catalogd;
run {
 CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 4;
 backup check logical INCREMENTAL LEVEL 0 filesperset 8
 format 'FULL0_%d_%u'
 (database include current controlfile);
 backup not backed up 1 times filesperset 10
 (archivelog all delete input format  'al_STANDBY_%s_%p');
 backup
 (current controlfile format 'ctrl_STANDBY_s%s_p%p'); }
run {
copy current controlfile to '/u01/sicherung/controlfiles/ctl_STANDBY.ctl';
}

Archive log only backup

The following script shows how to back up the archive logs on the STANDBY site. The script can be started at any time.

I recommend backing up the archive logs every hour. Thanks to "backup not backed up 1 times" each archive log is backed up only once, even if it is still on disk the next time the backup runs.

connect target sys/password@standby;
connect catalog rman_catalog/catalog_password@catalogd;
run {
 CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 4;
 backup not backed up 1 times as compressed backupset filesperset 10
 (archivelog all delete input format  'al_STANDBY_%s_%p');
 backup
 (current controlfile format 'ctrl_STANDBY_s%s_p%p');
 }
run {
copy current controlfile to '/u01/sicherung/controlfiles/ctl_STANDBY.ctl';
}

You also do not need to worry about RMAN deleting archive logs which are not yet applied on the standby (in MAXIMUM PERFORMANCE mode). If RMAN tries to delete an archive log which is not yet applied on the standby you will see the following message:

archive log filename=/u02/oradata/STANDBY/arch/1_62487_646478938.dbf
  thread=1 sequence=62487
RMAN-08120: WARNING: archive log not deleted, not yet applied by standby

Test it

After you have configured everything you need to TEST your configuration. This involves:

  • What happens to the primary database if the standby database is down and does not apply archive logs?
  • Do the archive logs expire once they are applied on the standby?
  • and so on…

Query the FRA

The following query shows the current status of the FRA and how much space is occupied in it:

select * from v$flash_recovery_area_usage;

fra

In the screenshot above you can see the FRA is filled to 99.83%, with 58.29% reclaimable (this means: 58.29% of the space occupied by archive logs in the FRA is eligible for deletion because they are already applied on the standby database). This query was performed on a Data Guard environment running in MAXIMUM AVAILABILITY mode.
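
If you prefer absolute numbers over percentages, v$recovery_file_dest shows the configured limit and usage in bytes (a small helper query, not part of the original screenshot):

select name,
       space_limit/1024/1024/1024 as limit_gb,
       space_used/1024/1024/1024 as used_gb,
       space_reclaimable/1024/1024/1024 as reclaimable_gb
from v$recovery_file_dest;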

Categories: Oracle in general

Oracle 11 Release 2 Install Guide – ACFS and ADVM

September 23rd, 2009

ACFS and ADVM

System configuration

We will use the system configured and installed in part 2.

General information

  • ADVM = ASM dynamic volume manager
  • ACFS = ASM cluster file system
  • Basic layout

step3_010

(Source: Oracle Database Storage Administrator’s Guide 11g Release 2)

ADVM – Advantages

  • Integrated in ASM; this can be a disadvantage as well :-)
  • Inherits storage from ASM and hence enables host-based mirroring (either 2- or 3-way mirroring)
  • multiple volumes can be created within a disk group, with a file system such as ext3, ext4, reiserfs, … on top; such a volume supports storage of any file type the file system normally would – EXCEPT files which can be placed in ASM directly
  • ADVM volumes are dynamically resizable

ADVM – Disadvantages

  • ADVM volumes may be resized online, but the file system on top must support this as well (ext3 on OEL 5 supports online resizing but not online shrinking)
  • Storing files which could be stored directly in ASM on ADVM + a file system is not supported
  • NFS on top of ADVM is also not supported
  • The ASM configuration assistant (asmca) only supports creating volumes / file systems… deleting a volume / file system requires the command line

ACFS – Advantages

  • cluster file system on top of ASM and ADVM
  • as available as ASM itself (inherits storage from the ASM disk group and ADVM volume)
  • supports storage of files which cannot be stored directly in ASM, i.e.
    • executables
    • trace files
    • log files
    • even Oracle database binary installations
  • read-only snapshots can be created on ACFS
  • dynamically resizable
  • usable across platforms
  • ACFS mount registry: used for mounting ACFS and ADVM file systems across reboots
  • Thoughts
    • Do I need licenses for grid infrastructure?
    • If not: what about using grid infrastructure + ASM to provide host-based mirroring and a cluster file system for non-Oracle applications, for instance web servers?

ACFS – Disadvantages

  • storing database files in ACFS, for example, is not supported; according to the documentation:
    "Oracle Support Services will not take calls and development will not fix bugs associated with storing unsupported file types in Oracle ACFS"
  • only available with Red Hat Enterprise Linux 5 or Oracle Enterprise Linux 5!
  • the disk group compatibility parameters COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to 11.2
  • the ASM configuration assistant (asmca) only supports creating volumes / file systems… deleting a volume / file system requires the command line

First steps with ADVM and ACFS

Create a disk group for use with ADVM and ACFS

Let's first create an additional disk group called "DATA2" which consists of two iSCSI LUNs with 30 GB each.

Preparation:

  • LUNs visible with "fdisk -l"
  • partition created (one on each LUN)
  • disks labeled with "oracleasm createdisk <name> <devpath>"
  • disk group created in ASM (remember to connect as "sys as sysasm"!)

step3_011

  • Notes on disk groups
    • AUSIZE of 4 MB recommended by the Oracle documentation due to:
      • Increased I/O through the I/O subsystem if the I/O size is increased to the AU size.
      • Reduced SGA size to manage the extent maps in the database instance.
      • Faster datafile initialization if the I/O size is increased to the AU size.
      • Increased file size limits.
      • Reduced database open time.
    • A large AUSIZE also requires
      • increasing the maximum I/O request size to at least 4 MB in the operating system, driver, HBA and storage system
      • a larger stripe size in the storage system (pre-11g R2: 1 MB stripe size, with 11g R2: 4 MB? → to be tested)

Read the documentation on the COMPATIBLE parameters; most "cool" features are only available with a COMPATIBLE parameter of 11.2 and hence require an 11g R2 database.

Creating an ADVM and afterwards an ACFS

Create an ADVM

  • ACFS requires an ADVM volume in which the ACFS can be created
  • volcreate creates an ADVM volume (see the sketch below)

step3_012

  • The command above shows the minimal command for creating an ADVM volume; redundancy is derived from the disk group – our disk group was created with "normal" redundancy, so the volume inherits this as well (see the asmcmd sketch below)
  • Creation with SQL is also possible: "ALTER DISKGROUP data2 ADD VOLUME volume1 SIZE 10G;"
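
For reference, the asmcmd command behind the screenshot might look like this – volume name and size are assumptions matching the SQL variant above:

ASMCMD> volcreate -G DATA2 -s 10G volume1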

Create ACFS on top of ADVM

  • Requires an ADVM volume in which the ACFS can be created
  • volinfo shows detailed information (see the command sketch below the screenshot)
  • Especially the device path is important for creating the file system

step3_013
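
The asmcmd command behind the screenshot might look like this (volume name assumed as above):

ASMCMD> volinfo -G DATA2 volume1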

  • create the ACFS from the operating system (only on one node)

step3_014

  • register the ACFS in the mount registry with "acfsutil" so it is mounted across reboots
  • ATTENTION: DO NOT register shared Oracle home directories with acfsutil; this will be done later by the clusterware itself!
  • test if everything works by issuing "mount.acfs -o all" on all nodes; the file system should be mounted and accessible (see the sketch after the screenshot)

step3_015
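
Put together, the operating system commands might look like the following sketch – the ADVM device path (taken from volinfo) and the mount point are assumptions:

/sbin/mkfs -t acfs /dev/asm/volume1-123
/sbin/acfsutil registry -a /dev/asm/volume1-123 /u02/acfs
/sbin/mount.acfs -o all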

Simple Performance tests

dd if=/dev/zero bs=1024k of=<path> count=1000 oflag=direct

→ direct I/O used, no caching; performed 10 times

  • write to ACFS
    • ACFS 2-way mirror: ~ 6 MB/s average
    • ACFS unprotected: ~ 12 MB/s average
    • → as expected… one disk, double the I/O, half the throughput
  • direct write to iSCSI LUN: ~ 14.3 MB/s average

Attention: The tests were performed inside VMware… so the results are most likely not accurate… but we get an impression… we will check this on real hardware later!

Categories: Oracle 11g Release 2, Oracle ASM

Calculate IOPS (or IO/s) for hard disks

September 23rd, 2009

From time to time I find myself looking for information on how to calculate IOPS (IO operations per second) for hard drives.

The following table outlines how to calculate the IOPS a disk can handle. For this calculation you need at least TWO values. Most of the time you will be given the RPMs (rotations per minute) and the average seek time. You can also work with the rotational latency or the IO time – you just need to rearrange the formulas.

 

 iops
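
The table above is an image; the standard formulas it is based on are (seek time and latency in milliseconds):

average rotational latency [ms] = 60000 / (2 * RPM)
average IO time [ms] = average seek time [ms] + average rotational latency [ms]
IOPS = 1000 / average IO time [ms]

For example, a 15000 RPM disk with 3.5 ms average seek time: latency = 60000 / 30000 = 2 ms, IO time = 5.5 ms, IOPS = 1000 / 5.5 ≈ 182.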

 

Based on this approach you can easily calculate the data transfer rate (MBPS):

 

 iops2
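
The second image boils down to a simple relation (transfer time ignored for simplicity):

MBPS = IOPS * IO size [MB]

With the 182 IOPS from the example above and 1 MB requests this gives roughly 182 MB/s in theory; in practice the transfer time of large requests lowers the achievable IOPS, but throughput still grows with the request size.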

As you can easily see, larger IO requests yield a higher throughput in MBPS. Due to this fact Oracle recommends the SAME principle (SAME = stripe and mirror everything) with a stripe size of 1 MB.

Categories: Uncategorized

Changing ADR Destination for 11g Listener

September 22nd, 2009

With 11g Release 1 Oracle introduced the Automatic Diagnostic Repository – "ADR". The ADR is designed as a central repository for storing all Oracle-related log files such as alert.log, listener.log and so on.

The log files are stored in XML format, and additionally in the pre-11g plain-text format in a subdirectory called "trace".

By default the ADR home is located in $ORACLE_BASE. For the database, the path can be changed via the parameter DIAGNOSTIC_DEST.

 

For the listener this parameter has no effect. If you want to change the location where the listener places its log files, add the following parameter to the LISTENER.ORA file:

ADR_BASE_listener = <path>

"listener" is the name of the listener to be changed. If your listener is named differently, the parameter name must be changed as well!
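
For example, for a (hypothetical) listener named LSNR_PROD the entry would become:

ADR_BASE_lsnr_prod = /u01/app/oracle/custom_diag

Reload or restart the listener afterwards for the change to take effect.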

Categories: Oracle in general

Building and using the kfed utility

September 21st, 2009

When using ASM it is sometimes extremely helpful to get more information about the ASM disk header. The "kfed" utility from Oracle lets you dump the ASM disk header and much more.

With this tool corruptions of the ASM disk header can easily be checked (and repaired).

Building kfed

This method works from 10g onwards up to and including 11g R2:

-bash-3.2$ cd $ORACLE_HOME/rdbms/lib
-bash-3.2$ make -f ins_rdbms.mk ikfed
Linking KFED utility (kfed)
rm -f /u01/app/oracle/product/ora11r2p/rdbms/lib/kfed
gcc -o /u01/app/oracle/product/ora11r2p/rdbms/lib/kfed -m64 -L/u01/app/oracle/product/ora11r2p/rdbms/lib/
-L/u01/app/oracle/product/ora11r2p/lib/ -L/u01/app/oracle/product/ora11r2p/lib/stubs/ 
/u01/app/oracle/product/ora11r2p/lib/s0main.o /u01/app/oracle/product/ora11r2p/rdbms/lib/sskfeded.o
/u01/app/oracle/product/ora11r2p/rdbms/lib/skfedpt.o -ldbtools11 -lasmclnt11 -lcommon11 -lcell11 -lskgxp11
-lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -locr11 -locrb11 -locrutl11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11
-lxml11 -lasmclnt11 -lcommon11 -lcell11 -lskgxp11 -lgeneric11  -lcommon11 -lgeneric11  -lclntsh 
`cat /u01/app/oracle/product/ora11r2p/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11
-lnro11 `cat /u01/app/oracle/product/ora11r2p/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11
-lnnz11 -lzt11 -lztkg11 -lztkg11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lmm -lsnls11
-lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11
-lcore11 -lnls11 `cat /u01/app/oracle/product/ora11r2p/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11
-lnl11 -lnro11 `cat /u01/app/oracle/product/ora11r2p/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11
-lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11   -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11
-lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lnnetd11  -lvsn11
-lcommon11 -lgeneric11 -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11
-lunls11 -lsnls11 -lnls11 -lcore11 -lnls11   `cat /u01/app/oracle/product/ora11r2p/lib/sysliblist`
-Wl,-rpath,/u01/app/oracle/product/ora11r2p/lib -lm    `cat /u01/app/oracle/product/ora11r2p/lib/sysliblist`
-ldl -lm   -L/u01/app/oracle/product/ora11r2p/lib
test ! -f /u01/app/oracle/product/ora11r2p/bin/kfed ||\
           mv -f /u01/app/oracle/product/ora11r2p/bin/kfed /u01/app/oracle/product/ora11r2p/bin/kfedO
mv /u01/app/oracle/product/ora11r2p/rdbms/lib/kfed /u01/app/oracle/product/ora11r2p/bin/kfed
chmod 751 /u01/app/oracle/product/ora11r2p/bin/kfed

 

Using kfed to dump disk header

In the following example we use kfed to dump the ASM disk header of an ASM disk represented by the device "/dev/sdg1":

 

/u01/app/oracle/product/11.2.0/ora11p/bin/kfed read /dev/sdg1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj:              2147483648 ; 0x008: TYPE=0x8 NUMB=0x0
kfbh.check:                  2733723458 ; 0x00c: 0xa2f14f42
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKDISK003A ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1263749444 ; 0x008: 0x4b534944
kfdhdb.driver.reserved[1]:   1093873712 ; 0x00c: 0x41333030
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                DISK003A ; 0x028: length=8
kfdhdb.grpname:                   DATA2 ; 0x048: length=5
kfdhdb.fgname:              CONTROLLER1 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32925079 ; 0x0a8: HOUR=0x17 DAYS=0xc MNTH=0x9 YEAR=0x7d9
kfdhdb.crestmp.lo:           2888263680 ; 0x0ac: USEC=0x0 MSEC=0x1da SECS=0x2 MINS=0x2b
kfdhdb.mntstmp.hi:             32925206 ; 0x0b0: HOUR=0x16 DAYS=0x10 MNTH=0x9 YEAR=0x7d9
kfdhdb.mntstmp.lo:           1862809600 ; 0x0b4: USEC=0x0 MSEC=0x20e SECS=0x30 MINS=0x1b
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80
kfdhdb.dsksize:                    5119 ; 0x0c4: 0x000013ff
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              186646528 ; 0x0e0: 0x0b200000
kfdhdb.grpstmp.hi:             32925079 ; 0x0e4: HOUR=0x17 DAYS=0xc MNTH=0x9 YEAR=0x7d9
kfdhdb.grpstmp.lo:           2887388160 ; 0x0e8: USEC=0x0 MSEC=0x283 SECS=0x1 MINS=0x2b
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

 

What does it all mean?

For most purposes the following fields are the most important:

Status of disk:

kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER

Name of the disk:

kfdhdb.dskname:                DISK003A ; 0x028: length=8

Name of the disk group the disk belongs to:

kfdhdb.grpname:                   DATA2 ; 0x048: length=5

Name of the failure group the disk belongs to:

kfdhdb.fgname:              CONTROLLER1 ; 0x068: length=11

Sector size of disk:

kfdhdb.secsize:                     512 ; 0x0b8: 0x0200

Blocksize of disk:

kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000

Provision string for use with ASM:

kfdhdb.driver.provstr: ORCLDISKDISK003A

–> which finally means: “ORCL:DISK003A”

AU size:

kfdhdb.ausize:                  4194304
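
If you want a quick overview of several disks, you can loop over the ASMLib devices and extract the most interesting fields – a minimal sketch, assuming the default ASMLib device path:

for d in /dev/oracleasm/disks/*; do
  echo "== $d"
  $ORACLE_HOME/bin/kfed read $d | grep -E 'kfdhdb.(dskname|grpname|fgname|hdrsts)'
done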

 

In a follow-up I will dig a little bit deeper into ASM and provide samples of different headers in different combinations (external / normal / high redundancy, …).

Categories: Oracle ASM, Oracle in general

Using ASM with files (either locally or via NFS)

September 21st, 2009

From time to time I need to configure additional, temporary storage for data migration or just for testing purposes. Raw devices are no longer recommended from 11g onwards.

So why not use the loopback device for that? With this scenario you can even place these disks on NFS and use them across the nodes of a cluster:

 

1. Create the file

dd if=/dev/zero bs=1024k count=1000 of=/disk1

 

2. Setup loopback driver

losetup /dev/loop1 /disk1

 

3. Label device

oracleasm createdisk LOOP1 /dev/loop1

 

4. Use it with ASM
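
For step 4, a minimal sketch (run in the ASM instance; disk group name and redundancy are just examples):

SQL> CREATE DISKGROUP loopdata EXTERNAL REDUNDANCY DISK 'ORCL:LOOP1';

Note that the losetup binding is not persistent: after a reboot you need to run losetup again before ASM can see the disk.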

Categories: Oracle ASM, Oracle in general

Oracle 11g Release 2 Install Guide – Grid Infrastructure installation

September 14th, 2009

System configuration:

Hardware configuration

  • two virtual machines (VMWARE)
  • 1 vCPU
  • 2 GB RAM –> bare minimum possible
  • 40 GB Disk
  • Storage exported via ISCSI
    • 4 LUNs with 2 GB each
    • 2 LUNs with 30 GB each

Operating system configuration

  • Oracle Enterprise Linux 5.3 x86_64 (Kernel 2.6.18-128.el5)
  • Installed packages: default system + development packages

Grid Infrastructure configuration:

  • Cluster Name: “RAC”
  • Binary installation on local disk
  • OCR, Voting and datafiles stored in ASM

Steps to install Oracle 11g Release 2 Grid Infrastructure

  • Configure Linux and prerequisites
  • Configure Storage
  • Binary installation of grid infrastructure
  • Installation of Oracle 11g Release 2 Database (either single or rac installation)

Configure Linux and prerequisites

SWAP

  • Between 1 and 2 GB RAM –> SWAP 1.5 times the size of RAM
  • Between 2 and 16 GB RAM –> SWAP equal to the size of RAM
  • > 16 GB RAM –> 16 GB SWAP

Memory

  • according to the grid infrastructure documentation: ">= 1 GB memory"
  • bare minimum from the author's experience:
    • 1 GB for grid infrastructure components
    • 500 MB for the operating system
    • 1 GB for the cluster database SGA/PGA/UGA
    • = 2.5 GB bare minimum!

See below for memory consumption with grid infrastructure installed: > 800 MB for infrastructure processes

step2_010

Automatic Memory Management

Requires /dev/shm with an appropriate size (i.e. an SGA of 16 GB requires /dev/shm to be 16 GB+)

Huge Pages and Automatic Memory Management are INCOMPATIBLE

Checking required packages

(see the required packages for the single database installation; this applies here as well because we will end up installing a database)

According to the documentation the following packages are needed (a quick check script follows the list):

  • binutils-2.17.50.0.6
  • compat-libstdc++-33-3.2.3
  • compat-libstdc++-33-3.2.3 (32 bit)
  • elfutils-libelf-0.125
  • elfutils-libelf-devel-0.125
  • gcc-4.1.2, gcc-c++-4.1.2
  • glibc-2.5-24, glibc-2.5-24 (32 bit)
  • glibc-common-2.5
  • glibc-devel-2.5
  • glibc-devel-2.5 (32 bit)
  • glibc-headers-2.5
  • ksh-20060214
  • libaio-0.3.106
  • libaio-0.3.106 (32 bit)
  • libaio-devel-0.3.106
  • libaio-devel-0.3.106 (32 bit)
  • libgcc-4.1.2, libgcc-4.1.2 (32 bit)
  • libstdc++-4.1.2
  • libstdc++-4.1.2 (32 bit)
  • libstdc++-devel 4.1.2
  • make-3.81
  • sysstat-7.0.2
  • unixODBC-2.2.11
  • unixODBC-2.2.11 (32 bit)
  • unixODBC-devel-2.2.11
  • unixODBC-devel-2.2.11 (32 bit)
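
A small shell loop to spot missing packages from the list above (base package names without version suffixes; adapt the list to your platform):

for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
         libaio libaio-devel libgcc libstdc++ libstdc++-devel make \
         sysstat unixODBC unixODBC-devel; do
  rpm -q $p > /dev/null 2>&1 || echo "missing: $p"
done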

On the sample system with OEL 5.3 and default + development packages installed, only the following RPMs were missing:

rpm -ihv libaio-devel-0.3.106-3.2.* libstdc++43-devel-4.3.2-7.el5.* sysstat-7.0.2-3.el5.x86_64.rpm unixODBC-2.2.11-7.1.* unixODBC-devel-2.2.11-7.1.*

Shell Limits

/etc/security/limits.conf

grid    soft    nproc     16384
grid    hard    nproc     16384
grid    soft    nofile    65536
grid    hard    nofile    65536
grid    soft    stack     10240
grid    hard    stack     10240

In /etc/pam.d/login add, if it does not exist yet:

session    required     pam_limits.so

Kernel Limits (MINIMUM values) in /etc/sysctl.conf

kernel.sem=250 32000 100 128
kernel.shmall=2097152
kernel.shmmax=536870912
kernel.shmmni=4096
fs.file-max=6815744
fs.aio-max-nr=1048576
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

– SuSE only –
vm.hugetlb_shm_group=<gid of osdba group>

The values in /etc/sysctl.conf should be tuned (i.e. according to the number of instances, available memory, number of connections, …)
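
After editing /etc/sysctl.conf, load the new parameters without a reboot and spot-check one of them:

sysctl -p
sysctl fs.file-max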

Kernel Limits on Linux (Calculate them)

kernel.sem

semmns = total number of semaphores system-wide =

2 * sum (PROCESSES parameter of all database instances on the system)
+ overhead for background processes
+ system and other application requirements

semmsl = total number of semaphores for each set

semmni = total number of semaphore sets = semmns divided by semmsl, rounded UP to the nearest multiple of 1024

kernel.sem = <semmsl semmns semopm semmni>

semmsl    = set to 256
semmns    = set to the total number of semaphores (see above!)
semopm    = 100; not explicitly described in the documentation
semmni    = see calculation above

kernel.shmall

kernel.shmall = this parameter sets the total amount of shared memory pages that can be used system-wide. Hence, SHMALL should always be at least ceil(shmmax/PAGE_SIZE). PAGE_SIZE is usually 4096 bytes unless you use Big Pages or Huge Pages, which support the configuration of larger memory pages. (quoted from: www.puschitz.com/TuningLinuxForOracle.shtml)
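
A quick worked example for the minimum values listed above (4 KB pages assumed):

# minimum shmall for kernel.shmmax=536870912
echo $(( 536870912 / 4096 ))    # = 131072 pages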

kernel.shmmax

kernel.shmmax = the maximum size in bytes of a single shared memory segment that a Linux process can allocate.
If not set properly, database startup can fail with:

ORA-27123: unable to attach to shared memory segment

kernel.shmmni

kernel.shmmni = system-wide number of shared memory segments; the Oracle recommendation for 11g Release 1 is "at least 4096"; I did not find anything for Release 2…

fs.file-max

fs.file-max = maximum number of open files system-wide; must be at least "6815744"

fs.aio-max-nr

fs.aio-max-nr = concurrent outstanding I/O requests; must be set to "1048576"

net.ipv4.ip_local_port_range

net.ipv4.ip_local_port_range = minimum and maximum ports for use; must be set to "9000" as minimum and "65500" as maximum

net.core.rmem_default

net.core.rmem_default = the default size in bytes of the receive buffer; must  be set at least to “262144”

net.core.rmem_max

net.core.rmem_max = the maximum size in bytes of the receive buffer; must be set to at least "4194304"

net.core.wmem_default

net.core.wmem_default = the default size in bytes of the send buffer; must be set at least to “262144”

net.core.wmem_max

net.core.wmem_max = the maximum size in bytes of the send buffer; must be set at least to “1048576”

Networking

Basic facts

  • Works completely differently from 10g and 11g R1!
  • At least two separate networks (public and private) and therefore two network interfaces are required
  • ATTENTION: Interface names must be equal on ALL nodes! (i.e. if the private network interface on node A is eth2, the private network interface on all other nodes must be eth2 as well…)
  • Recommendation: Use bonding for:
    • Static naming (even if you use only one interface per bond)
    • Failover / Load Sharing
    • –> we will use network bonding with only one interface in the following
  • IP addresses can be assigned by two schemes:
    • GNS (grid naming service) –> automatic IP numbering
    • manual mode
    • –> we will use manual IP addressing mode in the following
  • GNS mode requires:
    • one fixed public IP for each node
    • one DHCP virtual IP for each node
    • one DHCP or fixed private IP for each node
    • three DHCP IPs for the SCAN
    • Thoughts by the author:
      • new
      • more complex
      • if working, adding a node is quite easy – at least from the IP numbering point of view – but how often do you add a node?
  • Manual mode IP addressing requires (a hosts file sketch follows this list):
    • one public IP for each node
    • one virtual IP for each node
    • one private IP for each node
    • one to three (recommended) IPs for providing the SCAN name
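
A hosts file sketch for manual mode – all names and addresses below are made-up examples; remember that the SCAN itself must resolve via DNS, not via the hosts file:

# public
192.168.180.10   rac1.regner.de       rac1
192.168.180.11   rac2.regner.de       rac2
# virtual
192.168.180.12   rac1-vip.regner.de   rac1-vip
192.168.180.13   rac2-vip.regner.de   rac2-vip
# private (interconnect)
10.0.0.1         rac1-priv
10.0.0.2         rac2-priv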

Naming schema used in the following (remember: 2-node-cluster)

step2_011

Configure Network Bonding

In /etc/modprobe.conf add the following lines:

alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 mode=1 max-bonds=2
("mode=1" means active/passive failover… see "bonding.txt" in the kernel sources for more options)

/etc/sysconfig/network-scripts/ifcfg-bond0 looks like:

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.180.0
NETMASK=255.255.255.0
IPADDR=192.168.180.10
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth0 looks like:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes

(Note: Add a second interface to achieve real fault tolerance… for our testing environment we use bonding only to provide a consistent naming schema)

The configuration for bond1 is not shown… just alter interface names and IPs.

Configure NTP

Grid Infrastructure provides NTP-like time synchronization with "ctss" (cluster time synchronization service). ctssd is provided in case connections to NTP servers are not possible.

If no running ntpd is found ("chkconfig ntpd off" and the configuration removed with "rm /etc/ntp.conf"), ctssd will be used; if ntpd is found, ctssd will start in observer mode.

ATTENTION: Set the "-x" flag in /etc/sysconfig/ntpd if you use NTP, to prevent ntpd from stepping the clock!
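
On RHEL/OEL the flag goes into the OPTIONS line of /etc/sysconfig/ntpd; a sketch (the remaining options are the usual distribution defaults and may differ on your system):

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"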

Check if NTP is working

  • start “ntpq”
  • enter "opeer" to see the list of all peers
    In our example there are two peers: the host "nb-next-egner" and the local clock

step2_012

  • enter "as" to see the associations
    "sys.peer" means the clock is synchronized against this peer; the order in which the entries appear is the same as in "opeer" – so the first entry means host "nb-next-egner" – fine!
    "reject" means the peer is not used for synchronization, for various reasons
  • enter "rv" for detailed information

step2_013

SCAN

  • SCAN = Single Client Access Name; new concept in 11g R2
  • DNS-based
  • naming notation: <name of cluster>-scan.<domain>
  • for our cluster named "rac" with domain "regner.de" this is rac-scan.regner.de
  • You need at least ONE – better three IPs for the new database access schema called SCAN
  • IPs are configured in DNS (forward and reverse lookup);
  • !! using a local hosts file failed verification after the grid installation !!
  • forward- and reverse lookup needs to be configured
  • excerpt from zone file:
    rac-scan          IN A          192.168.180.6
    rac-scan          IN A          192.168.180.7
    rac-scan          IN A          192.168.180.8

After installation we will find three listeners running from grid infrastructure home:

bash# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac1
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac2
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac2

Connection to database “RAC11P” using SCAN would use this tnsnames entry:

RAC11P =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=RAC11P))
  )

The “old fashioned” way still works:

RAC11P_old =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=rac1-vip.regner.de)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=rac2-vip.regner.de)(PORT=1521))
    )
    (CONNECT_DATA=(SERVICE_NAME=RAC11P))
  )

Connecting to a named instance:

RAC11P =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=RAC11P)
      (INSTANCE_NAME=RAC11P1))
  )

Check DNS for SCAN

Update [16th October 2009]: If you do not have a working DNS server available, refer here to set up your own.

Forward lookup

Use “dig” to check: “dig rac-scan.regner.de”

step2_014

Reverse lookup

Use “dig -x” to check

dig -x 192.168.180.6
dig -x 192.168.180.7
dig -x 192.168.180.8

step2_015

Create User and Group

Create Group

groupadd -g 500 dba

Note: For educational purposes we use only one group. In production environments there should be more groups to separate administrative duties.

Create User

mkdir -p /u01/app/11.2.0/grid
useradd -g dba -u 500 -d /u01/app/11.2.0/grid grid
passwd grid
chown -R root:dba /u01
chmod -R 775 /u01
chown -R grid:dba /u01/app/11.2.0/grid

Note: Oracle recommends different users for grid and database installation!

Make sure the group id and the user id are the same on ALL nodes!

Create a profile file (~/.bash_profile or ~/.profile on SuSE) for user "grid"

umask 022
if [ -t 0 ]; then
stty intr ^C
fi

Prepare and Configure Storage

  • Requirements
    • must be visible on all nodes
    • as always – recommendation: SAME (stripe and mirror everything)
  • What to store where:
    • OCR and Voting disk
      • ASM
      • NFS
      • RAW disks (deprecated; read the documentation!)
    • Oracle Clusterware binaries
      • NFS
      • Local disk
    • Oracle RAC binaries
      • ACFS
      • NFS
      • local disk
    • Oracle database files
      • ASM
      • NFS
      • RAW disks
    • Oracle recovery files
      • ASM
      • NFS

Install RPMs

  • oracleasm-support
  • oracleasmlib
  • oracleasm-<kernel-version>
    (see "Sources" for the download location)

Configure ASM

/usr/sbin/oracleasm configure -i

step2_016

init ASM

/usr/sbin/oracleasm init

Create Partitions on disk with fdisk

In the following example the disk /dev/sde (this is our iSCSI storage) does not contain a partition at all – we will create one.

step2_017

Create one whole disk partition on /dev/sde

step2_018

Label all disks with asm label

step2_019

Query disks on all nodes – Node “rac1”

step2_020

–> all disks visible with correct label

Query disks on all nodes – Node “rac2” (the other node)

–> also all four LUNs visible

step2_021

OCR and Voting disks

  • Will be placed in ASM (new in 11g R2)
  • three different redundancy levels:
    • External – 1 disk minimum needed
    • Normal – 3 disks minimum needed
    • High – 5 disks minimum needed
  • Storage requirements
    • External – 280 MB OCR + 280 MB voting disk
    • Normal – 560 MB OCR + 840 MB voting disk
    • High – 840 MB OCR + 1.4 GB voting disk
    • plus Overhead for ASM Metadata

Overhead for ASM metadata

total =
[2 * ausize * disks]
+ [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

redundancy = number of mirrors: external = 1, normal = 2, high = 3
ausize = metadata AU size in megabytes
nodes = number of nodes in the cluster
clients = number of database instances for each node
disks = number of disks in the disk group

For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require 1684 MB of space for ASM metadata:

[2 * 1 * 3]
+ [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)]
= 1684 MB

OCR and Voting disks – recommendations

  • use high redundancy for OCR and voting disks – the correct function of your cluster depends on them!
  • use 5 disks with 10 GB each – enough space for all files plus ASM metadata plus space for further growth

Checklist

  • Storage visible
  • user and groups created
  • Kernel parameters configured
  • RPM Packages checked / installed
  • NTP working
  • DNS working
  • Connection (ping, ssh) between nodes working?
  • Backup available for rollback?

–> Alright! Let's start the binary installation

Installing

Start installation as user “grid” (on one node (here on node “rac1”))

step2_022
step2_023
step2_024
step2_025

Remember: We chose not to use GNS, so it is deselected

step2_026

The node the installer was started on is already added by default; add all other nodes here (in our case we added "rac2")
step2_027

Click on “SSH Connectivity”, enter username and password and click on “Setup”
step2_028

If everything worked the following message appears
step2_029

If there are problems check:

  • Group ID and User ID on both nodes
  • Connectivity between both nodes
  • Passwords

Select which interface is the public and which the private one
step2_030

Where to place OCR and Voting disk… in our case we use ASM for everything
step2_031

For storing OCR and voting disk we need to create a disk group; our first disk group is called "DATA1" and consists of the four LUNs we prepared and labeled before… here we see the disk names we labeled the disks with again. We choose "normal" redundancy, which will create a mirror.
step2_032

Specify passwords for ASM and ASMSNMP… choose strong passwords if possible (I was lazy and chose weaker ones – acceptable for educational purposes but not in real production scenarios)
step2_033

Grid Infrastructure can use IPMI for fencing… VMware does not provide IPMI
step2_034

Group mapping… for role separation… we have only "dba" – change according to your needs
step2_035

Set the Oracle base and software (install) location… the software location must not be under the Oracle base location… otherwise the installer throws an error saying so
step2_036

Inventory location…
step2_037

Make sure you fix every issue reported here (memory and swap size are limited on our virtual machine, so this is not fixable here… but it should be fixed on a real system)

step2_038

Ready..
step2_039

Installing…
step2_040

Post-Installation scripts to be started in the following order:

  • orainstRoot.sh on node rac1
  • orainstRoot.sh on node rac2
  • root.sh on node rac1
  • root.sh on node rac2

step2_041

Sample of root.sh output

step2_042

The full output can be found in rac-install-node1 and rac-install-node2.

If everything works as expected the result should look like this:
step2_043

FINISHED
step2_044

If there are problems:

  • Look at the log files located in /u01/app/oraInventory/logs
  • Fix the issues noted there
  • If this does not work out: search Metalink / open an SR

Where to go now?

  • We just installed the infrastructure needed for RAC, i.e. ASM and clusterware
  • Install diagnostic utilities (strongly recommended):
    • OSWatcher
    • RACDDT
  • Tune Kernel parameters (if not done before)
  • Create at least TWO more disk groups:
    • one for holding database files (i.e. datafiles and binary installation files)
    • one to be used as flashback recovery area
  • Backup current configuration
  • some ADVM (ASM Dynamic Volume Manager) and ACFS (ASM Cluster File system) foundations can be found here
  • now we need to install a RAC database – this is covered here and here

Categories: Oracle 11g Release 2

Oracle 11 Release 2 Install Guide – Update 14th September 2009

September 14th, 2009

The install guide has been updated. The presentation is already up-to-date.

 

I will also post the topics from the presentation as separate posts in this blog.

For a better overview I created a separate page to collect and sort all install guides. The page can be found here.

Categories: Oracle 11g Release 2

Oracle 11g Release 2 Install Guide – Single database creation guide

September 14th, 2009

Oracle 11g Release 2 – Single database creation guide

 

The first part (binary installation of the database) is available here.

 

Start default listener

step1_020

Create single database instance
step1_021
step1_022
step1_023
step1_024
step1_025
step1_026
step1_027
step1_028
step1_029
step1_030
step1_031
step1_032
step1_033
step1_034
step1_035
step1_036
step1_037
step1_038
step1_039
step1_040

Categories: Oracle 11g Release 2

Oracle 11g Release 2 Install Guide – Single database binary installation

September 14th, 2009

Oracle 11g Release 2 – Single database binary installation guide

 

Local System configuration

The installation took place on a laptop with CentOS 5 x86_64 (Kernel 2.6.18-128.2.1.el5)

  • 4 GB Memory
  • 500 GB local disk

 

Steps required to install Oracle 11g Release 2

  • Configure Storage
  • Check and fulfill pre-requirements
  • Binary installation of database
  • Listener configuration
  • Creation of database

 

Check and fulfill requirements

Storage Requirements

  • As always – recommendation: SAME (stripe and mirror everything)
  • Valid storage options for single database instance:
    • file system (ext3, reiserfs, xfs, et al.)
    • ASM
    • ACFS (!! only for non-database files, i.e. binary files, trace files and so on !!)
    • NFS
    • iSCSI
    • RAW disks (deprecated option; read the documentation!)

 

SWAP

  • Between 1 and 2 GB RAM –> SWAP 1.5 times the size of RAM
  • Between 2 and 16 GB RAM –> SWAP equal to size of RAM
  • > 16 GB RAM –> 16 GB SWAP

Automatic Memory Management

  • Requires /dev/shm with an appropriate size (i.e. an SGA of 16 GB requires /dev/shm to be 16 GB+)
  • Huge Pages and Automatic Memory Management are INCOMPATIBLE

 

Supported Operating Systems

  • on 32-bit Linux
    • Asianux 2 Update 7 (Kernel 2.6.9 or later)
    • Asianux 3 (Kernel 2.6.18 or later)
    • Oracle Enterprise Linux 4 Update 7  (Kernel 2.6.9 or later)
    • Oracle Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • Red Hat Enterprise Linux 4 Update 7 (Kernel 2.6.9 or later)
    • Red Hat Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • SUSE Linux Enterprise Server 10 SP2 (Kernel 2.6.16.21 or later)
    • SUSE Linux Enterprise Server 11 (2.6.27.19 or later)

!! ACFS and ADVM are ONLY supported on RHEL 5  and OEL 5 !!

 

  • on 64-bit Linux
    • Asianux 2 (Kernel 2.6.9 or later)
    • Asianux 3 (Kernel 2.6.18 or later)
    • Oracle Enterprise Linux 4 Update 7 (Kernel 2.6.9 or later)
    • Oracle Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • Red Hat Enterprise Linux 4 Update 3 (Kernel 2.6.9 or later)
    • Red Hat Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • SUSE Linux Enterprise Server 10 SP2 (Kernel 2.6.16.21 or later)
    • SUSE Linux Enterprise Server 11 (2.6.27.19 or later)

!! ACFS and ADVM are ONLY supported on RHEL 5 and OEL 5 !!

 

Required Packages

refer to: http://download.oracle.com/docs/cd/E11882_01/install.112/e10840/pre_install.htm

Users and Groups

For separation of rights (i.e. Manage ASM storage, manage database instance)

Available groups:

  • OSDBA (typically "dba"; has SYSDBA privileges on the database)
  • OSOPER (typically "oper"; optional, grants the limited SYSOPER set of administrative privileges)
  • OSDBA for ASM (typically "asmdba"; full administrative access to the ASM instance)
  • OSASM for ASM (typically "asmadmin"; administrative access to the ASM instance via SQL*Net)
  • OSOPER for ASM (typically "asmoper"; optional, like the OSOPER group, for limited access)

 

  • Minimal group needed: the OSDBA group
    (in this document the OSDBA group is named "dba")

 

Shell Limits

  • In /etc/security/limits.conf

oracle    soft    nproc     16384
oracle    hard    nproc     16384
oracle    soft    nofile    65536
oracle    hard    nofile    65536

(replace “oracle” with user holding the installation)

  • In /etc/pam.d/login add, if it does not exist yet:

session    required     pam_limits.so

 

Kernel Limits (MINIMUM values) in /etc/sysctl.conf

kernel.sem=250 32000 100 128
kernel.shmall=2097152
kernel.shmmax=536870912
kernel.shmmni=4096
fs.file-max=6815744
fs.aio-max-nr=1048576
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

– SuSE only –
vm.hugetlb_shm_group=<gid of osdba group>

The values in /etc/sysctl.conf should be tuned (i.e. according to the number of instances, available memory, number of connections, …)

 

Kernel Limits – how to calculate them

The values in /etc/sysctl.conf should be tuned (i.e. according to the number of instances, available memory, number of connections, …)

see Part 2 for guides how to calculate the kernel parameters

 

User Profile file (minimum file) ~/.bash_profile (RHEL, OEL) or ~/.profile (SuSE)

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2/ora11r2p
export ORACLE_SID=ORA11R2P
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
umask 022

ATTENTION: This profile file is for use with a dedicated user for each database binary installation. If you wish to install several binary installations under one single user, make sure ORACLE_HOME and TNS_ADMIN are unset.

 

Installing the database binaries 

We will install an Oracle 11g Release 2 database (stand-alone, without grid infrastructure, without ASM); the operating system user is named "ora11", the osdba group is named "dba", and the home directory is "/u01/app/oracle/product/11.2.0/ora11".

Note: According to OFA the directory should be named "…11.2.0/db_1". For easy management we install a dedicated binary installation under a dedicated user for every database which runs on the system, and name them accordingly.

For instance, the binary database installation for the database with SID "ORA11" is held by the operating system user named "ora11" with home directory "…11.2.0/ora11".
The binary installation for the database with SID "ORA11T" is held by the user named "ora11t" with home directory "…11.2.0/ora11t".

You can of course use one single user on the operating system level for holding one or more binary installations. It's up to you.

Create User

 mkdir -p /u01/app/oracle/product/11.2.0/ora11
 useradd -g dba -d /u01/app/oracle/product/11.2.0/ora11 ora11
 chown ora11:dba /u01/app/oracle/product/11.2.0/ora11
 passwd ora11

Create profile file

 export ORACLE_BASE=/u01/app/oracle
 export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/ora11
 export ORACLE_SID=ORA11
 export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
 export TMP=$ORACLE_HOME/tmp
 export TNS_ADMIN=$ORACLE_HOME/network/admin
 export TEMP=$TMP
 export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH

 

Check if X11 works

export DISPLAY=localhost:0.0
xterm

Check Limits

ulimit -a         (as user “ora11”)

 step1_001

Note: "open files" and "max user processes" should show values greater than or equal to the prerequisites.

 

Start installer and perform binary installation

 
step1_002



step1_003
step1_004
step1_005
step1_006
step1_007
step1_008
step1_009
step1_010
step1_011
step1_012
step1_013
step1_014
step1_015
step1_016
step1_017
step1_018
step1_019

 

That's it!

 

The next step would be creating the database itself. A guide on how to do this can be found here.

Categories: Oracle 11g Release 2