Oracle 11g Release 2 Install Guide – Grid Infrastructure installation

System configuration:

Hardware configuration

  • two virtual machines (VMWARE)
  • 1 vCPU
  • 2 GB RAM –> bare minimum possible
  • 40 GB Disk
  • Storage exported via iSCSI
    • 4 LUNs with 2 GB each
    • 2 LUNs with 30 GB each

Operating system configuration

  • Oracle Enterprise Linux 5.3 x86_64 (Kernel 2.6.18-128.el5)
  • Installed packages: default system + development packages

Grid Infrastructure configuration:

  • Cluster Name: “RAC”
  • Binary installation on local disk
  • OCR, Voting and datafiles stored in ASM

Steps to install Oracle 11g Release 2 Grid Infrastructure

  • Configure Linux and pre-requirements
  • Configure Storage
  • Binary installation of grid infrastructure
  • Installation of Oracle 11g Release 2 Database (either single or rac installation)

Configure Linux and pre-requirements

SWAP

  • Between 1 and 2 GB RAM –>SWAP 1.5 times the size of RAM
  • Between 2 and 16 GB RAM –> SWAP equal to the size of RAM
  • > 16 GB RAM –> 16 GB SWAP
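
A quick way to check how much RAM and swap a node currently has:

# MemTotal and SwapTotal are reported in kB
grep -E 'MemTotal|SwapTotal' /proc/meminfo
# the same information in MB
free -m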

Memory

  • according to grid infrastructure documentation “>= 1 GB Memory”
  • bare minimum from the author's experience:
    • 1 GB for grid infrastructure components
    • 500 MB for the operating system
    • 1 GB for cluster database SGA/PGA/UGA
    • = 2.5 GB bare minimum!

See below for memory consumption with grid infrastructure installed: > 800 MB for infrastructure processes

step2_010

Automatic Memory Management

Requires /dev/shm with an appropriate size (i.e. an SGA of 16 GB requires /dev/shm to be 16 GB or larger)

Huge Pages and Automatic Memory Management are INCOMPATIBLE
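
A quick way to check and, if needed, enlarge /dev/shm – the 3 GB size below is only an assumption, size it to your planned SGA/MEMORY_TARGET:

df -h /dev/shm

# /etc/fstab entry with an explicit size (example value), then remount:
tmpfs   /dev/shm   tmpfs   size=3g   0 0

mount -o remount /dev/shm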

Checking required packages

(see the required packages for the single database installation; they apply here as well because we will end up installing a database in the end)

According to the documentation the following packages are needed:

  • binutils-2.17.50.0.6
  • compat-libstdc++-33-3.2.3
  • compat-libstdc++-33-3.2.3 (32 bit)
  • elfutils-libelf-0.125
  • elfutils-libelf-devel-0.125
  • gcc-4.1.2, gcc-c++-4.1.2
  • glibc-2.5-24, glibc-2.5-24 (32 bit)
  • glibc-common-2.5
  • glibc-devel-2.5
  • glibc-devel-2.5 (32 bit)
  • glibc-headers-2.5
  • ksh-20060214
  • libaio-0.3.106
  • libaio-0.3.106 (32 bit)
  • libaio-devel-0.3.106
  • libaio-devel-0.3.106 (32 bit)
  • libgcc-4.1.2, libgcc-4.1.2 (32 bit)
  • libstdc++-4.1.2
  • libstdc++-4.1.2 (32 bit)
  • libstdc++-devel 4.1.2
  • make-3.81
  • sysstat-7.0.2
  • unixODBC-2.2.11
  • unixODBC-2.2.11 (32 bit)
  • unixODBC-devel-2.2.11
  • unixODBC-devel-2.2.11 (32 bit)

On the sample system with OEL 5.3 and the default + development packages installed, only the following RPMs were missing:

rpm -ihv libaio-devel-0.3.106-3.2.* libstdc++43-devel-4.3.2-7.el5.* sysstat-7.0.2-3.el5.x86_64.rpm unixODBC-2.2.11-7.1.* unixODBC-devel-2.2.11-7.1.*
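
A small sketch to find missing packages before starting the installer (the package list is abbreviated; extend it with the full list above and remember to check the 32-bit versions separately):

for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libstdc++ \
         libstdc++-devel make sysstat unixODBC unixODBC-devel; do
  rpm -q "$p" > /dev/null 2>&1 || echo "missing: $p"
done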

Shell Limits

/etc/security/limits.conf

grid    soft    nproc    16384
grid    hard    nproc    16384
grid    soft    nofile   65536
grid    hard    nofile   65536
grid    soft    stack    10240
grid    hard    stack    10240

In /etc/pam.d/login add the following line if it does not exist:

session    required     pam_limits.so

Kernel Limits (MINIMUM values) in /etc/sysctl.conf

kernel.sem=250 32000 100 128
kernel.shmall=2097152
kernel.shmmax=536870912
kernel.shmmni=4096
fs.file-max=6815744
fs.aio-max-nr=1048576
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

– SuSE only –
vm.hugetlb_shm_group=<gid of osdba group>

The values in /etc/sysctl.conf should be tuned (i.e. according to the number of instances, available memory, number of connections, …)
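
After editing /etc/sysctl.conf the settings can be activated without a reboot and verified, for example:

sysctl -p
sysctl kernel.sem kernel.shmmax fs.file-max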

Kernel Limits on Linux (Calculate them)

kernel.sem

semmns = Total number of semaphores systemwide =

2 * sum (process parameters of all database instances on the system)
+ overhead for background processes
+ system and other application requirements

semmsl = number of semaphores for each set

semmni = total number of semaphore sets = semmns divided by semmsl, rounded UP to the nearest multiple of 1024

kernel.sem = <semmsl semmns semopm semmni>

semmsl    = set to 256
semmns    = set to the total number of semaphores (see above!)
semopm    = 100; not explicitly described in the documentation
semmni    = see calculation above
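
A purely illustrative calculation following the rule above – the process counts are assumptions, not recommendations. Two instances with processes=150 each and roughly 100 semaphores of overhead give:

semmns = 2 * (150 + 150) + 100 = 700
semmsl = 256
semmni = 700 / 256, rounded up to the nearest multiple of 1024 = 1024
kernel.sem = 256 700 100 1024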

kernel.shmall

kernel.shmall = This parameter sets the total amount of shared memory pages that can be used system wide. Hence, SHMALL should always be at least ceil(shmmax/PAGE_SIZE). PAGE_SIZE is usually 4096 bytes unless you use Big Pages or Huge Pages which supports the configuration of larger memory pages. (quoted from: www.puschitz.com/TuningLinuxForOracle.shtml)
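
As a small worked example with the values used above:

shmmax   = 536870912 bytes, PAGE_SIZE = 4096 bytes
shmall  >= 536870912 / 4096 = 131072 pages
(the minimum value of 2097152 pages given above corresponds to 2097152 * 4096 bytes = 8 GB)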

kernel.shmmax

kernel.shmmax = the maximum size of a single shared memory segment in bytes that
a linux process can allocate
If not set properly database startup can fail with:

ORA-27123: unable to attach to shared memory segment

kernel.shmmni

kernel.shmmni = system-wide number of shared memory segments; the Oracle recommendation for 11g Release 1 is “at least 4096”; I did not find anything for Release 2.

fs.file-max

fs.file-max = maximum number of open files system-wide; must be at least “6815744”

fs.aio-max-nr

fs.aio-max-nr = concurrent outstanding I/O requests; must be set to “1048576”

net.ipv4.ip_local_port_range

net.ipv4.ip_local_port_range = minimum and maximum ports for use; must be set to “9000” as minimum and “65500” as maximum

net.core.rmem_default

net.core.rmem_default = the default size in bytes of the receive buffer; must  be set at least to “262144”

net.core.rmem_max

net.core.rmem_max = the maximum size in bytes of the receive buffer; must be set at least to “4194304”

net.core.wmem_default

net.core.wmem_default = the default size in bytes of the send buffer; must be set at least to “262144”

net.core.wmem_max

net.core.wmem_max = the maximum size in bytes of the send buffer; must be set at least to “1048576”

Networking

Basic facts

  • Works completely differently than in 10g or 11g R1!
  • At least two separated networks (public and private) and therefore two network interfaces required
  • ATTENTION: Interface names must be equal on ALL nodes! (i.e. if the private network interface on node A is eth2, the private network interface on all other nodes must be eth2 as well)
  • Recommendation: Use bonding for:
    • Static naming (even if you use only one interface per bond)
    • Failover / Load Sharing
    • –> we will use network bonding with only one interface in the following
  • IP addresses can be assigned by one of two schemes:
    • GNS (grid naming service) –> automatic ip numbering
    • Manual Mode
    • –> we will use manual IP addressing mode in the following
  • GNS mode requires:
    • one fixed public IP for each node
    • one dhcp virtual IP for each node
    • one DHCP or fixed private IP for each node
    • three DHCP IPs for the SCAN
    • Thoughts by the author:
      • new
      • more complex
      • if working, adding a node is quite easy – at least from the IP numbering point of view – but how often do you add a node?
  • Manual Mode IP addressing requires:
    • one public IP for each node
    • one virtual IP for each node
    • one private IP for each node
    • one to three (recommended) IPs for providing the SCAN name

Naming schema used in the following (remember: 2-node-cluster)

step2_011

Configure Network Bonding

In /etc/modprobe.conf add the lines:

alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 mode=1 max-bonds=2
(“mode=1” means active/passive failover… see “bonding.txt” in kernel sources for more options)
/etc/sysconfig/network-scripts/ifcfg-bond0 looks like:

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.180.0
NETMASK=255.255.255.0
IPADDR=192.168.180.10
USERCTL=no
/etc/sysconfig/network-scripts/ifcfg-eth0 looks like:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes

(Note: Add a second interface to achieve real fault tolerance… for our testing environment we use bonding only to provide a consistent naming schema)

The configuration for bond1 is not shown… just alter interface names and IPs.
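
For completeness, a sketch of the bond1 (private interconnect) configuration – the 192.168.190.x addresses and the enslaved interface eth1 are assumptions, adjust them to your environment:

/etc/sysconfig/network-scripts/ifcfg-bond1

DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.190.0
NETMASK=255.255.255.0
IPADDR=192.168.190.10
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no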

Configure NTP

Grid Infrastructure provides NTP-like time synchronization with CTSS (Cluster Time Synchronization Service). ctssd is provided in case connections to NTP servers are not possible.

If ntpd is neither running (“chkconfig ntpd off”) nor configured (“rm /etc/ntp.conf”), ctssd will be used; if ntpd is found, ctssd will start in observer mode.

ATTENTION: Set the “-x” flag if you use ntp to prevent ntp from stepping the clock in /etc/sysconfig/ntpd!
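
On OEL/RHEL this roughly means editing /etc/sysconfig/ntpd (the other options shown are the usual distribution defaults) and restarting ntpd:

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

service ntpd restart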

Check if NTP is working

  • start “ntpq”
  • enter “opeer” to see list of all peers
    In our example two peers: host “nb-next-egner” and the local clock

step2_012

  • enter “as” to see the associations
    “sys.peer” means the clock is synchronized against this peer; the order in which the entries appear matches “opeer” – so the first entry means host “nb-next-egner” – fine!
    “reject” means the peer is not synchronized against, for various reasons
  • enter “rv” for detailed information

step2_013

SCAN

  • SCAN = Single Client Access Name; new concept in 11g R2
  • DNS-based
  • naming notation: <name of cluster>-scan.<domain>
  • for our cluster named “rac” with domain “regner.de” this is rac-scan.regner.de
  • You need at least ONE – better three – IPs for the new database access schema called SCAN
  • IPs are configured in DNS (forward and reverse lookup)
  • !! using the local hosts file failed verification after the grid installation !!
  • forward and reverse lookup need to be configured
  • excerpt from zone file:
    rac-scan          IN A          192.168.180.6
    rac-scan          IN A          192.168.180.7
    rac-scan          IN A          192.168.180.8

After installation we will find three listeners running from grid infrastructure home:

bash# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac1
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac2
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac2

Connection to database “RAC11P” using SCAN would use this tnsnames entry:

RAC11P =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=RAC11P))
)

The “old fashioned” way still works:

RAC11P_old =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=rac1-vip.regner.de)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=rac2-vip.regner.de)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=RAC11P))
)

Connecting to a named instance:

RAC11P =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=RAC11P)
(INSTANCE_NAME=RAC11P1))
)

Check DNS for SCAN

Update [16th October 2009]: If you do not have a working DNS server available, refer here to set up your own.

Forward lookup

Use “dig” to check: “dig rac-scan.regner.de”

step2_014

Reverse lookup

Use “dig -x” to check

dig -x 192.168.180.6
dig -x 192.168.180.7
dig -x 192.168.180.8

step2_015

Create User and Group

Create Group

groupadd -g 500 dba

Note: For educational purposes we use only one group. In productive environments there should be more groups to separate administrative duties.

Create User

mkdir -p /u01/app/11.2.0/grid
useradd -g dba -u 500 -d /u01/app/11.2.0/grid grid
passwd grid
chown -R root:dba /u01
chmod -R 775 /u01
chown -R grid:dba /u01/app/11.2.0/grid

Note: Oracle recommends different users for grid and database installation!

Make sure the group ID and user ID are the same on ALL nodes!
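
A simple check is to run the following on every node and compare uid and gid:

id grid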

Create profile file (~/.bash_profile or ~/.profile on SuSE) for user “grid”

umask 022
if [ -t 0 ]; then
stty intr ^C
fi

Prepare and Configure Storage

  • Requirements
    • must be visible on all nodes
    • as always – recommendation: SAME (stripe and mirror everything)
  • What to store where:
    • OCR and Voting disk
      • ASM
      • NFS
      • RAW disks (deprecated; read the documentation!)
    • Oracle Clusterware binaries
      • NFS
      • Local disk
    • Oracle RAC binaries
      • ACFS
      • NFS
      • local disk
    • Oracle database files
      • ASM
      • NFS
      • RAW disks
    • Oracle recovery files
      • ASM
      • NFS

Install RPMs

  • oracleasm-support
  • oracleasmlib
  • oracleasm-<kernel-version>
    (see “Sources” for the download location; an example install command is shown below)
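
The installation itself is a plain rpm call; the file names below are examples, use the versions you downloaded for your kernel:

rpm -ihv oracleasm-support-*.rpm oracleasmlib-*.rpm oracleasm-$(uname -r)*.rpm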

Configure ASM

/usr/sbin/oracleasm configure -i

step2_016

init ASM

/usr/sbin/oracleasm init

Create Partitions on disk with fdisk

In the following example disk /dev/sde (this is our iSCSI storage) does not contain a partition at all  – we will create one

step2_017

Create one whole disk partition on /dev/sde

step2_018

Label all disks with asm label

step2_019
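
The commands behind the screenshot look roughly like this – the partition device names and the labels are assumptions matching this setup:

/usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
/usr/sbin/oracleasm createdisk DISK2 /dev/sdc1
/usr/sbin/oracleasm createdisk DISK3 /dev/sdd1
/usr/sbin/oracleasm createdisk DISK4 /dev/sde1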

Query disks on all nodes – Node “rac1”

step2_020

–> all disks visible with correct label

Query disks on all nodes – Node “rac2” (the other node)

–> also all four LUNs visible

step2_021
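
On the second node a rescan makes the labeled disks visible:

/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks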

OCR and Voting disks

  • Will be placed in ASM (new in 11g R2)
  • three different redundancy levels:
    • External – 1 disk minimum needed
    • Normal – 3 disks minimum needed
    • High  – 5 disks minimum needed
  • Storage Requirements
    • External – 280 MB OCR + 280 MB Voting Disk
    • Normal – 560 MB OCR + 840 MB Voting Disk
    • High  – 840 MB OCR + 1.4 GB Voting Disk
    • plus Overhead for ASM Metadata

Overhead for ASM metadata

total =
[2 * ausize * disks]
+ [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

redundancy = Number of mirrors: external = 1, normal = 2, high = 3.
ausize = Metadata AU size in megabytes.
nodes = Number of nodes in cluster.
clients = Number of database instances for each node.
disks = Number of disks in disk group.
For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require 1684 MB of space for ASM metadata:

[2 * 1 * 3]
+ [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)]
= 1684 MB

OCR and Voting disks – recommendations

  • use high redundancy for OCR and Voting disks – the correct function of your cluster depends on it!
  • use 5 disks with 10 GB each – enough space for all files plus ASM metadata plus space for further growth

Checklist

  • Storage visible
  • user and groups created
  • Kernel parameters configured
  • RPM Packages checked / installed
  • NTP working
  • DNS working
  • Connection (ping, ssh) between nodes working?
  • Backup available for rollback?

–> Alright! Let's start the binary installation

Installing

Start the installation as user “grid” on one node (here on node “rac1”)

step2_022
step2_023
step2_024
step2_025

Remember: We chose not to use GNS, so it is deselected

step2_026

The node the installer was started on is already added by default; add all other nodes here (in our case we added “rac2”)
step2_027

Click on “SSH Connectivity”, enter username and password and click on “Setup”
step2_028

If everything worked the following message appears
step2_029

If there are problems check:

  • Group ID and User ID on both nodes
  • Connectivity between both nodes
  • Passwords

Select which interface is the public and which the private one
step2_030

Where to place OCR and Voting disk… in our case we use ASM for everything
step2_031

For storing the OCR and Voting disk we need to create a disk group; our first disk group is called “DATA1” and consists of the four LUNs we prepared and labeled before… here we see the disk names we labeled the disks with again. We chose “normal” redundancy, which will create a mirror.
step2_032

Specify passwords for ASM and ASMSNMP… choose strong passwords if possible (I was lazy and chose weak ones – acceptable for educational purposes but not in real productive scenarios)
step2_033

Grid Infrastructure can use IPMI for fencing… VMWARE does not have IPMI
step2_034

Group mapping for role separation… we have only “dba”; change it according to your needs
step2_035

Set the Oracle Base and the software (install) location… the software location must not be under the Oracle Base location, otherwise the installer throws an error saying so
step2_036

Inventory location…
step2_037

Make sure you fix every issue reported here (memory and swap size are limited on the virtual machine, so these findings cannot be fixed here – but they should be in a real environment)

step2_038

Ready..
step2_039

Installing…
step2_040

Post-Installation scripts to be started in the following order:

  • orainstRoot.sh on node rac1
  • orainstRoot.sh on node rac2
  • root.sh on node rac1
  • root.sh on node rac2

step2_041

Sample of root.sh output

step2_042

The full output can be found in rac-install-node1 and rac-install-node2.

If everything works as expected the result should look like this:
step2_043

FINISHED
step2_044

If there are problems:

  • Look at the log files located in /u01/app/oraInventory/logs
  • Fix the issues noted here
  • If this does not work out: Search Metalink / Open SR

Where to go now?

  • We just installed the Infrastructure needed for RAC, i.e. ASM and Clusterware
  • Install diagnostic utilities (strongly recommended)
    • OSWatcher
    • and RACDDT
  • Tune Kernel parameters (if not done before)
  • Create at least TWO more disk groups (see the sketch after this list):
    • one for holding database files (i.e. datafiles and binary installation files)
    • one to be used as flashback recovery area
  • Backup current configuration
  • some ADVM (ASM Dynamic Volume Manager) and ACFS (ASM Cluster File system) foundations can be found here
  • now we need to install a RAC database – this is covered here and here
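
A sketch for creating the two additional disk groups from the ASM instance – the disk group names, redundancy and ASM disk names are assumptions; run it as the grid user with the ASM environment (ORACLE_SID, ORACLE_HOME) set:

sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY DISK 'ORCL:DISK5';
CREATE DISKGROUP FRA   EXTERNAL REDUNDANCY DISK 'ORCL:DISK6';
EOF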

Oracle 11g Release 2 Install Guide – Single database binary installation

Oracle 11g Release 2 – Single database binary installation guide

 

Local System configuration

Installation took place on laptop with CentOS 5 x86_64 (Kernel 2.6.18-128.2.1.el5)

  • 4 GB Memory
  • 500 GB local disk

 

Steps required to install Oracle 11g Release 2

  • Configure Storage
  • Check and fulfill pre-requirements
  • Binary installation of database
  • Listener configuration
  • Creation of database

 

Check and fulfill requirements

Storage Requirements

  • As always – recommendation: SAME (stripe and mirror everything)
  • Valid storage options for single database instance:
    • file system (ext3, reiserfs, xfs, et al.)
    • ASM
    • ACFS (!! only for non-database files, i.e. binary files, trace files and so on !!)
    • NFS
    • ISCSI
    • RAW Disks (read documentation; deprecated option!)

 

SWAP

  • Between 1 and 2 GB RAM –> SWAP 1.5 times the size of RAM
  • Between 2 and 16 GB RAM –> SWAP equal to size of RAM
  • > 16 GB RAM –> 16 GB SWAP

Automatic Memory Management

  • Requires /dev/shm with an appropriate size (i.e. an SGA of 16 GB requires /dev/shm to be 16 GB or larger)
  • Huge Pages and Automatic Memory Management are INCOMPATIBLE

 

Supported Operating Systems

  • on 32-bit Linux
    • Asianux 2 Update 7 (Kernel 2.6.9 or later)
    • Asianux 3 (Kernel 2.6.18 or later)
    • Oracle Enterprise Linux 4 Update 7  (Kernel 2.6.9 or later)
    • Oracle Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • Red Hat Enterprise Linux 4 Update 7 (Kernel 2.6.9 or later)
    • Red Hat Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • SUSE Linux Enterprise Server 10 SP2 (Kernel 2.6.16.21 or later)
    • SUSE Linux Enterprise Server 11 (2.6.27.19 or later)

!! ACFS and ADVM are ONLY supported on RHEL 5  and OEL 5 !!

 

  • on 64-bit Linux
    • Asianux 2 (Kernel 2.6.9 or later)
    • Asianux 3 (Kernel 2.6.18 or later)
    • Oracle Enterprise Linux 4 Update 7 (Kernel 2.6.9 or later)
    • Oracle Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • Red Hat Enterprise Linux 4 Update 3 (Kernel 2.6.9 or later)
    • Red Hat Enterprise Linux 5 Update 2 (Kernel 2.6.18 or later)
    • SUSE Linux Enterprise Server 10 SP2 (Kernel 2.6.16.21 or later)
    • SUSE Linux Enterprise Server 11 (2.6.27.19 or later)

!! ACFS and ADVM are ONLY supported on RHEL 5 and OEL 5 !!

 

Required Packages

refer to: http://download.oracle.com/docs/cd/E11882_01/install.112/e10840/pre_install.htm

Users and Groups

For separation of rights (i.e. Manage ASM storage, manage database instance)

Available groups:

  • OSDBA (typical: “dba”; have SYSDBA privileges on database)
  • OSOPER (typical: “oper”; optional, limited set of administrative privileges)
  • OSDBA for ASM (typical: “asmdba”; full administrative access to ASM instance)
  • OSASM for ASM (typical: “asmadmin”; administrative access to ASM instance via SQL*Net)
  • OSOPER for ASM (typical: “asmoper”; optional, like SYSOPER group for limited access)

 

  • Minimal group needed: OSDBA group
    (in this document the osdba group is named “dba”)

 

Shell Limits

  • In /etc/security/limits.conf

oracle    soft    nproc    16384
oracle    hard    nproc    16384
oracle    soft    nofile   65536
oracle    hard    nofile   65536

(replace “oracle” with user holding the installation)

  • In /etc/pam.d/login add the following line if it does not exist:

session    required     pam_limits.so

 

Kernel Limits (MINIMUM values) in /etc/sysctl.conf

kernel.sem=250 32000 100 128
kernel.shmall=2097152
kernel.shmmax=536870912
kernel.shmmni=4096
fs.file-max=6815744
fs.aio-max-nr=1048576
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

– SuSE only –
vm.hugetlb_shm_group=<gid of osdba group>

The values in /etc/sysctl.conf should be tuned (i.e. according to the number of instances, available memory, number of connections, …)

 

Kernel Limits – how to calculate them


See Part 2 for a guide on how to calculate the kernel parameters.

 

User Profile file (minimum file) ~/.bash_profile (RHEL, OEL) or ~/.profile (SuSE)

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2/ora11r2p
export ORACLE_SID=ORA11R2P
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
umask 022

ATTENTION: This profile file is meant for a dedicated user per database binary installation. If you wish to install several binary installations under one single user, make sure ORACLE_HOME and TNS_ADMIN are not set in the profile.
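
One way to handle the shared-user case, as a sketch: keep ORACLE_HOME and TNS_ADMIN out of the profile and set them per session, for example

unset ORACLE_HOME TNS_ADMIN
export ORACLE_HOME=/u01/app/oracle/product/11.2/ora11r2p   # path taken from the profile above
export PATH=$ORACLE_HOME/bin:$PATH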

 

Installing the database binaries 

We will install an Oracle 11g Release 2 database (stand-alone without grid infrastructure, without asm), operating system user is named “ora11”, osdba group named “dba” with home directory “/u01/app/oracle/product/11.2.0/ora11”.

Note: According to OFA the directory should be named “…11.2.0/db_1”. For easy management we install a dedicated binary installation under a dedicated user for every database that runs on the system and name them accordingly.

For instance, the binary database installation for the database with SID “ORA11” is held by the operating system user named “ora11” with home directory “…11.2.0/ora11”.
The binary installation for the database with SID “ORA11T” is held by the user named “ora11t” with home directory “…11.2.0/ora11t”.

You can of course use one single user on the operating system level for holding one or more binary installations. It's up to you.
 

Create User

 mkdir -p /u01/app/oracle/product/11.2.0/ora11
 useradd -g dba -d /u01/app/oracle/product/11.2.0/ora11 ora11
 chown -R ora11:dba /u01/app/oracle/product/11.2.0/ora11
 passwd ora11

Create profile file

 export ORACLE_BASE=/u01/app/oracle
 export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/ora11
 export ORACLE_SID=ORA11
 export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
 export TMP=$ORACLE_HOME/tmp
 export TNS_ADMIN=$ORACLE_HOME/network/admin
 export TEMP=$TMP
 export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH

 

Check if X11 works

export DISPLAY=localhost:0.0
xterm

Check Limits

ulimit -a         (as user “ora11”)

 step1_001

Note: “open files” and “max user processes” should show values greater than or equal to the required values.

 

Start installer and perform binary installation

 
step1_002



step1_003
step1_004
step1_005
step1_006
step1_007
step1_008
step1_009
step1_010
step1_011
step1_012
step1_013
step1_014
step1_015
step1_016
step1_017
step1_018
step1_019

 

That's it!

 

The next step would be creating the database itself. A guide how to do this can be found here.

Oracle 11g Release 2 Install Guide – Overview

Oracle 11g Release 2 – Overview

New Features

  • New Features – a non-complete overview:
    • Automatic Block repair in data guard scenarios (i.e. Replace defective block on standby with valid block from primary database or vice versa)
    • Automatic determination of the degree of parallelism based on object size, query complexity and hardware resources
    • RAC installation complexity dramatically reduced – we will evaluate this :-)
    • Storage of Oracle Cluster Registry and Voting Disk in ASM
    • Oracle Restart (automatically (re)starts database instance, ASM, listener and so on)
    • Utility for complete uninstallation of Oracle RAC
    • Reduced downtime (“zero downtime”) when patching RAC clusters
    • Instance Caging: Limit number of CPUs used
    • Scheduler improvements (emailing, file watch, run procedures on remote db)
    • ACFS (ASM Cluster File System): Cluster file system based on ASM for non-database datafiles, i.e. (Oracle) Binary installations, trace files, alert logs and so on
    • ASM Dynamic Volume Manager: create volumes out of disk groups and create a file system (ext3, reiserfs, etc.) on top of them
    • ASM FS Snapshots
    • Intelligent data placement: ASM places frequently accessed blocks on the outer edge of the disk, where I/O performance is higher
    • ASM File Access Control
    • New compression algorithm (LZO) offers fast compression and de-compression
    • most of the “cool” ASM features require the database and ASM compatibility levels to be set to 11g Release 2 (i.e. everything at 11.2)

Oracle 11g R2 – Components

  • Oracle Database
    • traditional database (rdbms)
    • ASM module
    • Listener et al
  • Oracle Grid Infrastructure (= the foundation for Oracle RAC), which includes:
    • Clusterware
    • ASM Module
    • Oracle Restart

?? Is 11g Release 2 ready for production yet ??

  • From the author's point of view: NO!
  • The release of 11g Release 2 was driven by marketing and the promise to “release in September 2009”
  • The latest beta of 11g R2 (released two weeks before the “production release”) still contained critical bugs
  • 11.2.0.1.0 is good for testing and educational purposes
  • For productive environments:
    • Wait at least until 11.2.0.2.0! (i.e. The first patchset)

Oracle 11g Release Installation HowTo

Oracle 11g Release 2 was released on the 1st of September 2009. From my point of view this version will be the next widely used version after 10g Release 2. In my opinion 11g Release 1 is quite a good and stable version – but it is a “Release 1”, and many ISVs waited for 11g R2 to appear before certifying their products against it.

 

I am investigating Oracle 11g Release 2 over the next few weeks, starting with a single database installation, installing a 2-node RAC with ASM, and testing ADVM and ACFS. Finally I will investigate database features, for instance Advanced Compression.

I will continuously publish my experiences on this blog.

At this point (14th September) the document covers:

  • Oracle 11g Release 2 overview
  • new features
  • single database installation
  • grid infrastructure installation (formerly known as “Clusterware”)
  • installing Oracle 11g Release 2 database binaries on ACFS
  • how to size ASM disk groups for OCR and Voting disks correctly

 

The most recent version can be found here:

oracle-11g-release2-installguide-2009-09-14

 

Hopefully I will also add the creation of a RAC database today.

 

Please leave me a note or send me an email if you have any feedback.