System configuration:
Hardware configuration
- two virtual machines (VMWARE)
- 1 vCPU
- 2 GB RAM –> bare minimum possible
- 40 GB Disk
- Storage exported via iSCSI
- 4 LUNs with 2 GB each
- 2 LUNs with 30 GB each
Operating system configuration
- Oracle Enterprise Linux 5.3 x86_64 (Kernel 2.6.18-128.el5)
- Installed packages: default system + development packages
Grid Infrastructure configuration:
- Cluster Name: “RAC”
- Binary installation on local disk
- OCR, Voting and datafiles stored in ASM
Steps to install Oracle 11g Release 2 Grid Infrastructure
- Configure Linux and pre-requirements
- Configure Storage
- Binary installation of grid infrastructure
- Installation of Oracle 11g Release 2 Database (either single or rac installation)
Configure Linux and pre-requirements
SWAP
- Between 1 and 2 GB RAM –> SWAP 1.5 times the size of RAM
- Between 2 and 16 GB RAM –> SWAP equal to the size of RAM
- > 16 GB RAM –> 16 GB SWAP
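To see which of the above cases applies to your system, a quick check of installed RAM and configured swap (standard tools, shown here only as a sketch):
grep -E 'MemTotal|SwapTotal' /proc/meminfo
free -m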
Memory
- according to grid infrastructure documentation “>= 1 GB Memory”
- bare minimum from the author's experience:
- 1 GB for grid infrastructure components
- 500 MB for operating system
- 1 GB for cluster database SGA/PGA/UGA
- = 2.5 GB bare minimum!
See below for memory consumption with grid infrastructure installed: > 800 MB for infrastructure processes
Automatic Memory Management
Requires /dev/shm of appropriate size (e.g. an SGA of 16 GB requires /dev/shm to be at least 16 GB)
Huge Pages and Automatic Memory Management are INCOMPATIBLE
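If /dev/shm is too small for Automatic Memory Management it can be enlarged via its tmpfs mount options. A minimal sketch (the 4g size is purely an example; choose a value at least as large as your MEMORY_TARGET/SGA):
In /etc/fstab:
tmpfs   /dev/shm   tmpfs   defaults,size=4g   0 0
Apply without reboot:
mount -o remount,size=4g /dev/shm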
Checking required packages
(see the required packages for a single database installation; this applies here as well because we will end up installing a database in the end)
According to the documentation the following packages are needed:
- binutils-2.17.50.0.6
- compat-libstdc++-33-3.2.3
- compat-libstdc++-33-3.2.3 (32 bit)
- elfutils-libelf-0.125
- elfutils-libelf-devel-0.125
- gcc-4.1.2, gcc-c++-4.1.2
- glibc-2.5-24, glibc-2.5-24 (32 bit)
- glibc-common-2.5
- glibc-devel-2.5
- glibc-devel-2.5 (32 bit)
- glibc-headers-2.5
- ksh-20060214
- libaio-0.3.106
- libaio-0.3.106 (32 bit)
- libaio-devel-0.3.106
- libaio-devel-0.3.106 (32 bit)
- libgcc-4.1.2, libgcc-4.1.2 (32 bit)
- libstdc++-4.1.2
- libstdc++-4.1.2 (32 bit)
- libstdc++-devel 4.1.2
- make-3.81
- sysstat-7.0.2
- unixODBC-2.2.11
- unixODBC-2.2.11 (32 bit)
- unixODBC-devel-2.2.11
- unixODBC-devel-2.2.11 (32 bit)
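One way to check which of the listed packages are already installed is a single rpm query; missing ones are reported as "not installed" (package names taken from the list above, adjust to your exact release):
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ \
  glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc \
  libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel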
On a sample system with OEL 5.3 and the default + development packages installed, only the following RPMs were missing:
rpm -ihv libaio-devel-0.3.106-3.2.* libstdc++43-devel-4.3.2-7.el5.* sysstat-7.0.2-3.el5.x86_64.rpm unixODBC-2.2.11-7.1.* unixODBC-devel-2.2.11-7.1.*
Shell Limits
/etc/security/limits.conf
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 65536
grid hard nofile 65536
grid soft stack 10240
grid hard stack 10240
In /etc/pam.d/login add, if it does not already exist:
session required pam_limits.so
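To verify the limits are actually picked up, log in as the grid user (so pam_limits is applied) and check them; a quick sanity check, not part of the original guide:
su - grid -c 'ulimit -u; ulimit -n; ulimit -s'    # nproc, nofile, stack size (KB)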
Kernel Limits (MINIMUM values) in /etc/sysctl.conf
kernel.sem=250 32000 100 128
kernel.shmall=2097152
kernel.shmmax=536870912
kernel.shmmni=4096
fs.file-max=6815744
fs.aio-max-nr=1048576
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
– SuSE only –
vm.hugetlb_shm_group=<gid of osdba group>
The values in /etc/sysctl.conf should be tuned (e.g. according to the number of instances, available memory, number of connections, …)
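After editing /etc/sysctl.conf the values can be applied without a reboot and verified (standard sysctl usage):
sysctl -p
sysctl kernel.sem kernel.shmall kernel.shmmax fs.file-max fs.aio-max-nr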
Kernel Limits on Linux (Calculate them)
kernel.sem
semmns = Total number of semaphores systemwide =
2 * sum (process parameters of all database instances on the system)
+ overhead for background processes
+ system and other application requirements
semmsl = total semaphores for each set
semmni = total semaphore sets = semmns divided by semmsl, rounded UP to the nearest multiple of 1024
kernel.sem = <semmsl semmns semopm semmni>
semmsl = set to 256
semmns = set to the total number of semaphores (see above!)
semopm = 100; not explicitly described in the documentation
semmni = see calculation above
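A small worked example for semmns (all figures are assumptions, chosen for illustration only): four instances with processes=600 each plus roughly 1200 semaphores of overhead for background processes and other applications give
semmns = 2 * (4 * 600) + 1200 = 6000
Since the documented minimum for the second field of kernel.sem is 32000, you would keep the minimum in this case and only raise the value when your own calculation exceeds it.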
kernel.shmall
kernel.shmall = This parameter sets the total amount of shared memory pages that can be used system wide. Hence, SHMALL should always be at least ceil(shmmax/PAGE_SIZE). PAGE_SIZE is usually 4096 bytes unless you use Big Pages or Huge Pages which supports the configuration of larger memory pages. (quoted from: www.puschitz.com/TuningLinuxForOracle.shtml)
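A minimal sketch for deriving the lower bound of shmall from shmmax with shell arithmetic (the shmmax value used is the minimum from the list above):
PAGE_SIZE=$(getconf PAGE_SIZE)                      # usually 4096
SHMMAX=536870912                                    # bytes, from /etc/sysctl.conf above
echo $(( (SHMMAX + PAGE_SIZE - 1) / PAGE_SIZE ))    # minimum value for kernel.shmall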
kernel.shmmax
kernel.shmmax = the maximum size of a single shared memory segment in bytes that a Linux process can allocate
If not set properly database startup can fail with:
ORA-27123: unable to attach to shared memory segment
kernel.shmmni
kernel.shmmni = system-wide number of shared memory segments; the Oracle recommendation for 11g Release 1 is "at least 4096"; I did not find anything for Release 2.
fs.file-max
fs.file-max = maximum number of open files system-wide; must be at least "6815744"
fs.aio-max-nr
fs.aio-max-nr = concurrent outstanding I/O requests; must be set to "1048576"
net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = minimum and maximum ports for use; must be set to at least "9000" (minimum) and "65500" (maximum)
net.core.rmem_default
net.core.rmem_default = the default size in bytes of the receive buffer; must be set at least to “262144”
net.core.rmem_max
net.core.rmem_max = the maximum size in bytes of the receive buffer; must be set at least to "4194304"
net.core.wmem_default
net.core.wmem_default = the default size in bytes of the send buffer; must be set at least to “262144”
net.core.wmem_max
net.core.wmem_max = the maximum size in bytes of the send buffer; must be set at least to “1048576”
Networking
Basic facts
- Works completely differently from 10g and 11g R1!
- At least two separate networks (public and private) and therefore two network interfaces are required
- ATTENTION: Interface names must be identical on ALL nodes! (i.e. if the private network interface on node A is eth2, the private network interface on all other nodes must be eth2 as well)
- Recommendation: Use bonding for:
- Static naming (even if you use only one interface per bond)
- Failover / Load Sharing
- –> we will use network bonding with only one interface in the following
- IP addresses can be assigned by two schemes:
- GNS (grid naming service) –> automatic IP numbering
- Manual Mode
- –> we will use manual IP addressing mode in the following
- GNS mode requires:
- one fixed public IP for each node
- one dhcp virtual IP for each node
- one dhcp or fixed private IP for each node
- three dhcp IP for the SCAN
- Thoughts from the author:
- new
- more complex
- if it works, adding a node is quite easy, at least from the IP numbering point of view – but how often do you add a node?
- Manual Mode IP addressing requires:
- one public IP for each node
- one virtual IP for each node
- one private IP for each node
- one to three (recommended) IPs for providing the SCAN name
Naming schema used in the following (remember: 2-node-cluster)
Configure Network Bonding
In /etc/modprobe.conf add line:
alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 mode=1 max-bonds=2
(“mode=1” means active/passive failover… see “bonding.txt” in kernel sources for more options)
/etc/sysconfig/network-scripts/ifcfg-bond0 looks like:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.180.0
NETMASK=255.255.255.0
IPADDR=192.168.180.10
USERCTL=no
/etc/sysconfig/network-scripts/ifcfg-eth0 looks like:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes
(Note: Add a second interface to achieve real fault tolerance… for our testing environment we use bonding to provide a consistent naming scheme)
The configuration for bond1 is not shown… just alter interface names and IPs.
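After restarting the network, the state of a bond can be checked through the standard bonding and iproute2 tools (shown as a sketch):
service network restart
cat /proc/net/bonding/bond0    # shows bonding mode, MII status and the enslaved interfaces
ip addr show bond0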
Configure NTP
Grid Infrastructure provides NTP-like time synchronization with CTSS (Cluster Time Synchronization Service); ctssd is provided in case connections to NTP servers are not possible.
If no running and configured ntpd is found (i.e. "chkconfig ntpd off" and "rm /etc/ntp.conf"), ctssd will be used; if ntpd is found, ctssd will start in observer mode.
ATTENTION: If you use NTP, set the "-x" flag in /etc/sysconfig/ntpd to prevent ntpd from stepping the clock!
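On OEL/RHEL 5 the flag goes into /etc/sysconfig/ntpd; a typical entry with "-x" added might look like this (the other options shown are common distribution defaults and may differ on your system):
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Restart ntpd afterwards: service ntpd restart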
Check if NTP is working
- start “ntpq”
- enter “opeer” to see list of all peers
In our example two peers: host “nb-next-egner” and the local clock
enter “as” to see associations
"sys.peer" means the clock is synchronized against this peer; the order in which the entries appear is the same as in "opeer" – so the first entry means host "nb-next-egner" – fine!
"reject" means not synchronized against, for various reasons
enter “rv” for detailed information
SCAN
- SCAN = Single Client Access Name; new concept in 11g R2
- DNS-based
- naming notation: <name of cluster>-scan.<domain>
- for our cluster named "rac" with domain "regner.de" this is rac-scan.regner.de
- You need at least ONE – better three – IPs for the new database access scheme called SCAN
- IPs are configured in DNS (forward and reverse lookup);
- !! using local hosts file failed verification after grid installation !!
- forward- and reverse lookup needs to be configured
- excerpt from zone file:
rac-scan IN A 192.168.180.6
rac-scan IN A 192.168.180.7
rac-scan IN A 192.168.180.8
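Since reverse lookup must work as well, the matching reverse zone (here 180.168.192.in-addr.arpa, assuming the addresses above) could contain entries like:
6 IN PTR rac-scan.regner.de.
7 IN PTR rac-scan.regner.de.
8 IN PTR rac-scan.regner.de.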
After installation we will find three listeners running from grid infrastructure home:
bash# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac1
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac2
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac2
Connection to database “RAC11P” using SCAN would use this tnsnames entry:
RAC11P =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=RAC11P))
)
The “old fashioned” way still works:
RAC11P_old =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=tcp)(HOST=rac1-vip.regner.de)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=rac2-vip.regner.de)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=RAC11P))
)
Connecting to a named instance:
RAC11P =
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=RAC11P)
(INSTANCE_NAME=RAC11P1))
)
Check DNS for SCAN
Update [16th October 2009]: If you do not have a working DNS server available, refer here to set up your own.
Forward lookup
Use “dig” to check: “dig rac-scan.regner.de”
Reverse lookup
Use “dig -x” to check
dig -x 192.168.180.6
dig -x 192.168.180.7
dig -x 192.168.180.8
Create User and Group
Create Group
groupadd -g 500 dba
Note: For educational purposes we use only one group. In production environments there should be more groups to separate administrative duties.
Create User
mkdir -p /u01/app/11.2.0/grid
useradd -g dba -u 500 -d /u01/app/11.2.0/grid grid
passwd grid
chown -R root:dba /u01
chmod -R 775 /u01
chown -R grid:dba /u01/app/11.2.0/grid
Note: Oracle recommends different users for grid and database installation!
Make sure groupid and userid are the same on ALL nodes!
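A quick way to verify this is to compare the numeric IDs on every node (assuming SSH access between the nodes is already set up):
id grid             # run locally, e.g. on rac1
ssh rac2 id grid    # must report the same uid=500 and gid=500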
Create profile file (~/.bash_profile or ~/.profile on SuSE) for user “grid”
umask 022
if [ -t 0 ]; then
stty intr ^C
fi
Prepare and Configure Storage
- Requirements
- must be visible on all nodes
- as always – recommendation: SAME (stripe and mirror everything)
- What to store where:
- OCR and Voting disk
- ASM
- NFS
- RAW disks (deprecated; read the documentation!)
- Oracle Clusterware binaries
- NFS
- Local disk
- Oracle RAC binaries
- ACFS
- NFS
- local disk
- Oracle database files
- ASM
- NFS
- RAW disks
- Oracle recovery files
- ASM
- NFS
Install RPMs
- oracleasmsupport
- oracleasmlib
- oracleasm-<kernel-version>
(see "Sources" for download location)
Configure ASM
/usr/sbin/oracleasm configure -i
init ASM
/usr/sbin/oracleasm init
Create Partitions on disk with fdisk
In the following example disk /dev/sde (this is our iSCSI storage) does not contain a partition at all – we will create one
Create one whole disk partition on /dev/sde
Label all disks with asm label
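Labeling is done with oracleasm createdisk on one node only; a sketch for the partition created above (the label name DISK1 is just an example):
/usr/sbin/oracleasm createdisk DISK1 /dev/sde1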
Query disks on all nodes – Node “rac1”
–> all disks visible with correct label
Query disks on all nodes – Node “rac2” (the other node)
–> also all four LUNs visible
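The commands used for these checks on each node are the standard ASMLib ones:
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks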
OCR and Voting disks
- Will be placed in ASM (new in 11g R2)
- three different redundancy levels:
- External – 1 disk minimum needed
- Normal – 3 disks minimum needed
- High – 5 disks minimum needed
- Storage Requirements
- External – 280 MB OCR + 280 MB Voting Disk
- Normal – 560 MB OCR + 840 MB Voting Disk
- High – 840 MB OCR + 1.4 GB Voting Disk
- plus Overhead for ASM Metadata
Overhead for ASM metadata
total =
[2 * ausize * disks]
+ [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]
redundancy = Number of mirrors: external = 1, normal = 2, high = 3.
ausize = Metadata AU size in megabytes.
nodes = Number of nodes in cluster.
clients = Number of database instances for each node.
disks = Number of disks in disk group.
For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require 1684 MB of space for ASM metadata
[2 * 1 * 3]
+ [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)]
= 1684 MB
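The same calculation as shell arithmetic, so you can plug in your own values (the variable names are mine; the formula is the one quoted above):
ausize=1; disks=3; nodes=4; clients=4; redundancy=2
echo $(( 2*ausize*disks + redundancy*( ausize*(nodes*(clients+1)+30) + 64*nodes + 533 ) ))   # prints 1684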
OCR and Voting disks – recommendations
- use high redundancy for OCR and Voting disks – the correct function of your cluster depends on it!
- use 5 disks with 10 GB each – enough space for all files plus ASM metadata plus space for further growth
Checklist
- Storage visible
- user and groups created
- Kernel parameters configured
- RPM Packages checked / installed
- NTP working
- DNS working
- Connection (ping, ssh) between nodes working?
- Backup available for rollback?
–> Alright! Let's start the binary installation
Installing
Start installation as user “grid” (on one node (here on node “rac1”))
Remember: We chose not to use GNS, so it is deselected
The node the installer was started on is already added by default; add all other nodes here (in our case we added "rac2")
Click on “SSH Connectivity”, enter username and password and click on “Setup”
If everything worked the following message appears
If there are problems check:
- Group ID and User ID on both nodes
- Connectivity between both nodes
- Passwords
Select which interface is the public and which the private one
Where to place OCR and Voting disk… in our case we use ASM for everything
For storing OCR and Voting disk we need to create a disk group; our first disk group is called "DATA1" and consists of the four LUNs we prepared and labeled before… here we see the disk names we labeled the disks with again. We choose "normal" redundancy, which will create a mirror.
Specify passwords for ASM and ASMSNMP… choose strong passwords if possible (I was lazy and chose weaker ones – acceptable for educational purposes but not in real production scenarios)
Grid Infrastructure can use IPMI for fencing… VMware does not have IPMI
Group mapping for role separation… we have only "dba" – change according to your needs
Set the Oracle base and software (install) location… the software location must not be under the Oracle base location, or the installer throws an error saying so
Make sure you fix every issue reported here (memory and swap size are limited on the virtual machine, so this is not fixable here… but you should fix it anyway)
Post-Installation scripts to be started in the following order:
- orainstRoot.sh on node rac1
- orainstRoot.sh on node rac2
- root.sh on node rac1
- root.sh on node rac2
Sample of root.sh output
The full output can be found in rac-install-node1 and rac-install-node2.
If everything works as expected the result should look like this:
If there are problems:
- Look at the log files located in /u01/app/oraInventory/logs
- Fix the issues noted here
- If this does not work out: Search Metalink / Open SR
Where to go now?
- We just installed the Infrastructure needed for RAC, i.e. ASM and Clusterware
- Install diagnostic utilities (strongly recommended)
- OSWatcher
- and RACDDT
- Tune Kernel parameters (if not done before)
- Create at least TWO more disk groups:
- one for holding database files (i.e. datafiles and binary installation files)
- one to be used as flashback recovery area
- Backup current configuration
- some ADVM (ASM Dynamic Volume Manager) and ACFS (ASM Cluster File system) foundations can be found here
- now we need to install a RAC database – this is covered here and here
linux-w0sv:~/Desktop # ls /etc/sysconfig/network-scripts/
ls: cannot access /etc/sysconfig/network-scripts/: No such file or directory
linux-w0sv:~/Desktop # ls /etc/sysconfig/
SuSEfirewall2 dmraid named squid
SuSEfirewall2.d fetchmail network ssh
apache2 fonts-config news storage
arpwatch hardware nfs suse_register
atd icecream ntp suseconfig
auditd ide oracle susehelp
autofs irqbalance oracleasm svnserve
backup ispell oracleasm-_dev_oracleasm sysctl
bluetooth joystick oracleasm-update syslog
boot kernel postfix texlive
bootloader keyboard proxy ulimit
bootsplash language radvd uuidd
clock ldap rpcbind windowmanager
console lirc scpm words
cron lvm scripts xendomains
cups mail security yast2
ddclient mdadm services ypbind
dhcpd mouse shutdown
dhcrelay mpi-selector sound
displaymanager nagios spamd
linux-w0sv:~/Desktop # ls /etc/sysconfig/network/
config dhcp6r if-down.d ifcfg-eth0 ifcfg-lo ifroute-lo scripts
dhcp dhcp6s if-up.d ifcfg-eth1 ifcfg.template providers
linux-w0sv:~/Desktop # ls /etc/sysconfig/scripts/
SuSEfirewall2-batch SuSEfirewall2-open SuSEfirewall2-showlog
SuSEfirewall2-custom SuSEfirewall2-qdisc
SuSEfirewall2-oldbroadcast SuSEfirewall2-rpcinfo
linux-w0sv:~/Desktop #
It's not matching in openSUSE 11.1.
> It's not matching for openSUSE 11.1
Yes, there are differences between SuSE and OEL. The guide is specific to Oracle Enterprise Linux. If you use SuSE you have to know the differences between SuSE and RedHat/OEL. By the way: openSUSE is NOT certified for Oracle at all. So you will face problems.
The OCR needs a bare minimum of 840 MB ASM space and the voting disk needs a bare minimum of 1.4 GB ASM space – is that so? Is it mandatory to have iSCSI configured?
According to the documentation (http://download.oracle.com/docs/cd/E11882_01/install.112/e10812/storage.htm#CDECEBGH) at least 280 MB are required for each voting disk or OCR.
Regarding your iSCSI question: No, it is not required. For RAC you need some kind of shared storage. iSCSI is a shared storage. You can also use SAN or even NFS implementation. In my tests i use iSCSI because i do not have a SAN equipment available.
Is the Single Client Access Name (SCAN) mandatory as well?
The SCAN name is required from RAC 11g R2 onwards.
Ronny,
I am facing some issues regarding the disk creation. I am using OEL 5.3 x86_64. I added a separate hard disk to my VM installation. After that I created the partitions on that newly added disk using fdisk. I could see the partitions. I installed the correct ASMLib driver and it got installed as well. But when I tried to create the disk using ASMLib, it raised an error that the disk creation failed. I am not sure what I am missing. Please help me get it sorted out.
Aman….
Hi Aman,
can you send me the following output (please reboot the system before doing so):
fdisk -l
What was the command you used to create/label the ASM disk?
Hi Ronny,
Thanks for the reply. I have managed to create the disks but I am not able to see them when I am installing the Grid Infrastructure. Here is my ASMLib module and disk status:
[root@localhost ~]# oracleasm listdisks
ASMDISK1
ASMDISK2
ASMDISK3
ASMDISK4
ASMDISK5
ASMDISK6
ASMDISK7
[root@localhost ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@localhost ~]#
I am not able to see the disks when I am running the Grid Infrastructure installer.
Thanks and regards
Aman….
Hi Aman,
I assume this is a cluster installation. Can you see all disks on all of your nodes? One reason for this might be that ASMLib is looking at the wrong physical device names. Are the device names "/dev/sd?" or something else like "/dev/emcpower?"?
hi
Is it necessary to configure network bonding?
I am a bit confused here…
regards
Network bonding is NOT necessary. Network bonding is recommended for two reasons:
1. Failover (if one interface fails the remaining interface will be used). This will prevent your cluster node from rebooting.
2. In a cluster the network interface names across ALL NODES must be equal (e.g. if eth0 is used for the interconnect it must be so on all nodes in your cluster). Without bonding your interface names might change during reboots or hardware maintenance (new NIC) if not configured properly. With bonding you are in control of which interface belongs to which bonding device.
hi
I love to play with it a lot, so I installed the 11gR2 DB version first, then deleted the database ORCL using DBCA. Now I am trying to install RAC. If I install RAC and create a database, or restore a database from a backup of a 10gR2 single-instance database using RMAN, what would be the case?
Will this database use all the features of RAC 11gR2, or will it just be a single-instance database by default? Or do I really need to run rconfig? Or what else?
Please suggest.
According to the documentation, changes in these files are not necessary.
The simplest step would be to create a database with DBCA. If you want to migrate your data you have several options: export/import (either with Data Pump or the "old" exp/imp), upgrade your 10g DB with DBUA and so on. Best would be to use export/import if you have a small amount of data.
hi
/etc/pam.d/login
/etc/pam.d/su
/etc/pam.d/sshd
/etc/pam.d/xdm
/etc/pam.d/gdm
/etc/pam.d/kde
Do you suggest any change here too? I am a newbie, please help…
regards
@Ronny Egner
hi
It's me again. When I am running runcluvfy it's giving:
./runcluvfy.sh stage -pre crsinst -n rac-1,rac-2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability…
Check: Node reachability from node “rac-2”
Destination Node Reachable?
———————————— ————————
rac-2 yes
rac-1 yes
Result: Node reachability check passed from node “rac-2”
Checking user equivalence…
Check: User equivalence for user “oracle1”
Node Name Comment
———————————— ————————
rac-2 failed
rac-1 failed
Result: PRVF-4007 : User equivalence check failed for user “oracle”
ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[oracle@rac-2 grid]$
1) How can this be resolved?
2) The installer reports fine for a single node. If so, can I add another node later?
3) Do I need the SSH configuration to be correct?
regards
@Ronny Egner
How did you export it via iSCSI without using Openfiler?
regards
Actually it is exported from a NetApp filer. You have to read the docs on how to export via iSCSI with Openfiler.
@Ronny Egner
Is iSCSI / SAN mandatory? Actually these iSCSI and SAN disks look hazy to me…
iSCSI or even SAN is not mandatory. All you need is some kind of shared storage (i.e. access must be possible from all nodes). The simplest way to do this is via NFS.
Is it possible to mix NFS with ASM? I think so. Can you show the way?
I outlined how to do this in this posting here: http://blog.ronnyegner-consulting.de/2009/09/21/using-asm-with-local-disk-files
Don't get me wrong, but I feel that before trying GFS or ASM-over-NFS you should install on plain NFS. This removes a lot of complexity.
@Ronny Egner
Thanx.
NFS should be available to the clusterware…
How do I offer it to the clusterware? That's why I wanted to mix it.
regards
From pre-11g R2 experiences you have to:
1. Mount NFS with correct options (see install guide; they change from version to version)
2. When being asked for OCR and Voting disk just enter /
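As a rough sketch only (always check the install guide for your exact version; these options are commonly cited for Oracle files on NFS and may differ for your platform and file type), an /etc/fstab entry could look like this – hostname, export path and mount point are placeholders:
nfsserver:/export/crs  /u02/crs  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600  0 0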
What is the node eviction problem?
Node eviction means to fence out nodes from the cluster, to remove them in order to ensure cluster stability and data security.
There are several reasons for node eviction:
– Lost connection between the nodes in terms of network or storage connection.
– System crashes
– Too high load
I have just configured iSCSI with tgtadm, i.e. without using a NetApp filer or Openfiler, for one node:
0) Do I need another node to configure the same?
1) What if I offer a raw disk to iSCSI – will it be supported by 11gR2? Do I offer it through ext3?
2) Do I need to do the same (i.e. tgtadm and iscsiadm) on another node?
3) Do I still need NFS?
regards
You basically exported your storage via iSCSI from one node. That's fine. For a cluster you need the SAME disks visible to ALL nodes in your cluster. So you need to export your disks via iSCSI to all of your nodes.
Your current configuration lacks high availability. For learning that's fine.
0) Do I need another node to configure the same?
I don't understand the question.
1) What if I offer a raw disk to iSCSI – will it be supported by 11gR2? Do I offer it through ext3?
You can offer a raw disk directly to the clusterware. But you have to offer at least two disks (one disk for the OCR and one disk for the voting disk).
Instead of using raw disks directly, use ASM, which is more flexible.
2) Do I need to do the same (i.e. tgtadm and iscsiadm) on another node?
I don't understand the question. You have to export your disks via iSCSI to all of your nodes.
The node exporting your storage is your single-point-of-failure.
3) Do I still need NFS?
No. iSCSI + ASM is fine.
[root@rac-1 ~]# (cd /dev/disk/by-path; ls -l *sayantan-chakraborty* | awk ‘{FS=” “; print $9 ” ” $10 ” ” $11}’)
ip-192.168.181.10:3260-iscsi-iqn.2010-02-23.de.sayantan-chakraborty:storage.disk1.amiens.sys1.xyz-lun-1 -> ../../sdc
[root@rac-1 ~]# oracleasm createdisk volume6 ip-192.168.181.10:3260-iscsi-iqn.2010-02-23.de.sayantan-chakraborty:storage.disk1.amiens.sys1.xyz-lun-1
Unable to query file “ip-192.168.181.10:3260-iscsi-iqn.2010-02-23.de.sayantan-chakraborty:storage.disk1.amiens.sys1.xyz-lun-1”: No such file or directory
[root@rac-1 ~]# oracleasm createdisk volume6 ../../sdc
Unable to query file “../../sdc”: No such file or directory
[root@rac-1 ~]# oracleasm createdisk volume6 /dev/sdc
Device “/dev/sdc” is not a partition
[root@rac-1 ~]#
Something is wrong with your disks.
What is the output of “fdisk -l”?
For creating disks see the example in my documents.
From memory: “oracleasm createdisk DISK1 /dev/sdc”
hi
How do I export it? By editing /etc/exports or by using the iSCSI initiator on a different node?
I think I need to create a new partition on that disk.
[root@rac-1 ~]# fdisk /dev/iscsi/xyz/part
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won’t be recoverable.
The number of cylinders for this disk is set to 10236.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition’s system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): p
Disk /dev/iscsi/xyz/part: 10.7 GB, 10733958144 bytes
64 heads, 32 sectors/track, 10236 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Command (m for help): q
[root@rac-1 ~]#
How could this be converted to high availability?
NFS exports are configured in /etc/exports. The way you are exporting iSCSI LUNs depends on the software you are using. So without more information I cannot help here.
Is it possible to do the same by replacing the Oracle clusterware with other third-party clusterware, e.g. Globus Toolkit?
As far as I know it is possible to use other clusterware besides Oracle Clusterware. You have to check the documentation for that. Anyway: why would anyone do this? The clusterware itself is free of charge and replacing it will most probably cause a lot of problems.
iscsiadm -m discovery -t sendtargets -p rac-1-priv
iscsiadm -m discovery -t sendtargets -p rac-1
iscsiadm -m node -T iqn.2010-02-23.de.sayantan-chakraborty:storage.disk1.amiens.sys1.xyz -p 192.168.181.10 -l
iscsiadm -m node -T iqn.2010-02-23.de.sayantan-chakraborty:storage.disk1.amiens.sys1.xyz -p 192.168.181.10 –op update -n node.startup -v automatic
(cd /dev/disk/by-path; ls -l *sayantan-chakraborty* | awk ‘{FS=” “; print $9 ” ” $10 ” ” $11}’)
I am using iscsiadm and tgtadm from Red Hat…
The problem is that I need to run a shell script with those commands each time… can I do it at boot by editing /etc/exports?
May I know why you exported 2*30 GB LUNs via iSCSI?
> The problem is that I need to run a shell script with those commands each time… can I do it at boot by editing /etc/exports?
Not in /etc/exports… that's for NFS exports only.
Check if there is a script shipped with Red Hat in /etc/init.d… if there is one all you need to do is to activate it.
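If such a script exists, activating it at boot is standard Red Hat service handling; the iscsi/iscsid script names used here are the ones usually shipped with the iscsi-initiator-utils package (check your own system):
chkconfig --list | grep iscsi
chkconfig iscsi on
chkconfig iscsid on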
> May I know why you exported 2*30 GB LUNs via iSCSI?
Because I needed some storage. I could have also chosen 2*40 GB or 4*15 GB… it depends on your requirements.
> Because I needed some storage. I could have also chosen 2*40 GB or 4*15 GB… it depends on your requirements.
Is it necessary that this storage is exported via iSCSI? What is the purpose of this storage? Is it for DB files or for the clusterware?
> Is it necessary that this storage is exported via iSCSI? What is the purpose of this storage? Is it for DB files or for the clusterware?
For running a cluster you need some kind of shared storage for your database files (datafiles, redo logs, archive log, control file) and for the voting disks and OCR.
There are several technologies available which offer this kind of shared storage: iSCSI (as i used here), SAN or NFS.
I have seen Hunter's page, but he did not make 2*30 GB via iSCSI, he just made ASM volumes. Is that enough?
Or is it also required and mandatory if I am using ASM?
The guide is based on the usage of ASM. ASM takes the shared disks offered and creates disk groups. These disk groups can be used to store data in. The iSCSI disks are a type of shared storage. The size depends on your requirements. For a short test 2*30 GB were more than enough. For larger databases you should use larger disks. Make sure you read the ASM best practice guide beforehand.
Do I need to create a bond for the VIP and SCAN as well?
I suspect it's not required because the clusterware will bring them up? Am I right?
You have two interfaces: one public (for the host IP, all the SCAN IPs and the VIP) and one private interface. These two interfaces will most likely be bonded interfaces, hence the names "bond0", "bond1" and so on.
The clusterware (aka "grid infrastructure") will bring up the virtual interfaces itself if needed.
Are you saying to put /u01 on iSCSI to share it among the nodes?
No. /u01 is a local disk on each node. In “/u01/app/grid” the grid infrastructure is installed. This has to be done on each node. Note that you *cannot* share the grid installation among nodes.
Afterwards we created an ASM instance and used the iSCSI disks (= shared storage) for that. By using ADVM and ACFS we created a shared disk for our database installation and mounted it on "/u01/app/oracle/product/11.2.0/ora10p" (if I remember the paths correctly).
You used 2*30 GB + ASM, that's why I am a bit confused.
If I understand you properly, ASM is enough for the installation. After installing the clusterware I need to make ACFS, and then probably I need that 2*30 GB iSCSI? If I am wrong please correct me…
For configuring ASM you need some kind of disks, because ASM is basically a volume manager which deals with disks. You need the shared storage (or "disks" as you call them) at grid infrastructure installation time.
@Ronny Egner
/etc/init.d/iscsi-volume is an empty file….
Strange. Then I guess you have to write the init script yourself.
I have created the bond as shown by you. The problem is that it's not pinging Google via my ADSL modem…
What should I do?
Do I need IPADDR= or GATEWAY=? Or what else?
Check for your default gateway….
Do I need to set each Ethernet interface as a slave to the rac-1 bond?
No. In terms of bonding, slave means: "which *physical* interfaces the bond consists of". So eth0 and eth2 might form bond0 while eth1 and eth3 form bond1.
hi
My question may look simple and easy to you.
My question is: how do I create a bond when eth0 and eth1 are in two different IPs/subnets on a particular node?
Do I need to run the clusterware on each node?
regards
You cannot name a network bond “eth0” or “eth1”. These names are reserved for physical devices. Note that for high availability you need at least two physical interfaces per bond.
hi
How do I create bond0 and bond1 for a RAC installation between two nodes?
node1
[grid1@rac-1 ~]$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:00:E8:F7:02:B0
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::200:e8ff:fef7:2b0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:20571 errors:0 dropped:0 overruns:0 frame:0
TX packets:19190 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10508240 (10.0 MiB) TX bytes:9218545 (8.7 MiB)
Interrupt:177 Base address:0x8400
eth1 Link encap:Ethernet HWaddr 00:26:18:59:EE:49
inet addr:192.168.181.10 Bcast:192.168.181.63 Mask:255.255.255.192
inet6 addr: fe80::226:18ff:fe59:ee49/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:49 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:23
collisions:0 txqueuelen:1000
RX bytes:18391 (17.9 KiB) TX bytes:0 (0.0 b)
Memory:fe9c0000-fea00000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:16332 errors:0 dropped:0 overruns:0 frame:0
TX packets:16332 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16738217 (15.9 MiB) TX bytes:16738217 (15.9 MiB)
virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:10062 (9.8 KiB)
[grid1@rac-1 ~]$
node2
[root@rac-2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:1D:72:39:3A:E4
inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::21d:72ff:fe39:3ae4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6765 errors:0 dropped:0 overruns:0 frame:0
TX packets:7040 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3922791 (3.7 MiB) TX bytes:1079834 (1.0 MiB)
Interrupt:169
eth2 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.181.20 Bcast:192.168.181.63 Mask:255.255.255.192
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:7029 (6.8 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:4970 errors:0 dropped:0 overruns:0 frame:0
TX packets:4970 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:7648305 (7.2 MiB) TX bytes:7648305 (7.2 MiB)
virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:10706 (10.4 KiB)
[root@rac-2 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:1D:72:39:3A:E4
inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::21d:72ff:fe39:3ae4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6767 errors:0 dropped:0 overruns:0 frame:0
TX packets:7044 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3923027 (3.7 MiB) TX bytes:1080210 (1.0 MiB)
Interrupt:169
eth2 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.181.20 Bcast:192.168.181.63 Mask:255.255.255.192
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:37 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:9982 (9.7 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:4970 errors:0 dropped:0 overruns:0 frame:0
TX packets:4970 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:7648305 (7.2 MiB) TX bytes:7648305 (7.2 MiB)
virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:10706 (10.4 KiB)
[root@rac-2 ~]#
This is what I have done till now…
Configure Network Bonding
In /etc/modprobe.conf add line:
alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 mode=1 max-bonds=2
("mode=1" means active/passive failover… see "bonding.txt" in the kernel sources for more options)
/etc/sysconfig/network-scripts/ifcfg-bond0 looks like:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.181.10
NETMASK=255.255.255.192
IPADDR=192.168.181.20
USERCTL=no
/etc/sysconfig/network-scripts/ifcfg-eth0 looks like:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes
1) What do I need to do to get access to my ADSL modem?
2) What do I need to do on node 2?
I am facing some errors when installing the clusterware because the interface names are different on each node, and the public and private IP NICs are in a different order.
Will this bonding solve the issue?
regards
Hi,
access to the internet requires some kind of default gateway and a DNS server configured (for resolving host names).
Your configuration looks quite right.
On Node 1 the configuration files should state:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.181.0
NETMASK=255.255.255.192
IPADDR=192.168.181.10
USERCTL=no
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.1.0
NETMASK=255.255.255.0
IPADDR=192.168.1.3
USERCTL=no
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=yes
On Node 2:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.181.0
NETMASK=255.255.255.192
IPADDR=192.168.181.20
USERCTL=no
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.1.0
NETMASK=255.255.255.0
IPADDR=192.168.1.4
USERCTL=no
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=yes
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=yes
Using bonding will definitely solve your interface naming problem.
@Ronny Egner
Where do I check that? I have already added GATEWAY=192.x.x.x in the ifcfg-bond0 file…
regards
Use "netstat -rn" to check if there are any routes. In addition to that, check if you can ping a server on the internet.
Now getting this….
INS-32012,INS-32012,INS-20702
Cause – A call to cluster verification utility failed. Action – Refer to the logs or contact Oracle Support Services. Note for advanced users: Launch the installer by passing the following flag ”-ignoreInternalDriverError”.
I have never seen this error before. At what stage do you observe this problem? What is the configuration?
What is the output of the cluster verification utility?
cluvfy is running fine; it reports success.
At step 12 of 16 I get this error.
Sorry but “step 12 of 16” does not mean anything to me. Please review your log files for any errors. You can find them in the oraInstall\log directory.
This is without a VMware installation, so: 2 GB RAM, 240 GB HDD, 4 LUNs of 10.5 GB each on ASM, shared via iSCSI (i.e. tgtadm).
@Ronny Egner
INFO: Checking specified location on remote nodes…
INFO: size estimation for Completeinstall is 3017.24600315094
SEVERE: [FATAL] [INS-32012] Unable to create directory.
CAUSE: Either proper permissions were not granted to create the directory or there was no space left in the volume.
ACTION: Check your permission on the selected directory or choose another directory.
SEVERE: [FATAL] [INS-32012] Unable to create directory.
CAUSE: Either proper permissions were not granted to create the directory or there was no space left in the volume.
ACTION: Check your permission on the selected directory or choose another directory.
SEVERE: [FATAL] [INS-32052] Oracle base and Oracle home locations are same.
CAUSE: Oracle base and Oracle home locations were same.
ACTION: Oracle base and Oracle home locations cannot be same. Oracle recommends installing Oracle software within the Oracle base directory. Adjust the Oracle home or Oracle base accordingly.
SEVERE: [FATAL] [INS-32026] The Software Location specified should not be under Oracle base location.
CAUSE: Grid Infrastructure for a Cluster installation will assign root ownership to all parent directories of clusterware software location. As a result, all named directories in the software location path will acquire root ownership. This may create problems for subsequent installations into the same Oracle base.
ACTION: Specify software location outside of Oracle base.
SEVERE: [FATAL] [INS-20702] Unexpected internal driver error
CAUSE: A call to cluster verification utility failed.
ACTION: Refer to the logs or contact Oracle Support Services. Note for advanced users: Launch the installer by passing the following flag ”-ignoreInternalDriverError”..
Refer associated stacktrace #oracle.install.commons.util.exception.DefaultErrorAdvisor:1740
INFO: Advice is ABORT
INFO: Adding ExitStatus INVALID_USER_INPUT to the exit status set
INFO: Completed validating state
INFO: Terminating all background operations
INFO: Terminated all background operations
INFO: Transition aborted
INFO: Rolled back to state
INFO: Pause Oracle Grid Infrastructure
INFO: Resume Oracle Grid Infrastructure
INFO: Finding the most appropriate exit status for the current application
INFO: Exit Status is -2
INFO: Shutdown Oracle Grid Infrastructure
Reading the log file, some lines most probably point to the error:
SEVERE: [FATAL] [INS-32012] Unable to create directory.
SEVERE: [FATAL] [INS-32052] Oracle base and Oracle home locations are same.
Another thing I want to clarify: do I need any separate interface to connect the private and virtual networks?
You need two physical network interfaces. One for the interconnect and the other for your public and virtual host IPs (including SCAN and VIP IPs).
details of the directory ownership etc.
[root@rac-1 yum.repos.d]# cd /u01
[root@rac-1 u01]# ls -al
total 48
drwxrwxr-x 8 504 500 4096 Feb 22 15:29 .
drwxr-xr-x 30 root root 4096 Apr 26 19:05 ..
drwxrwxr-x 7 root root 4096 Apr 11 21:53 app
drwxrwxr-x 2 504 500 16384 Jan 26 12:44 lost+found
drwxrwxr-x 4 504 500 4096 Feb 4 19:36 sql-dev
drwxrwxr-x 5 504 504 4096 Feb 15 03:51 .Trash-grid
drwxrwxr-x 5 504 500 4096 Feb 3 20:40 .Trash-oracle
drwxrwxr-x 3 root root 4096 Feb 22 15:29 .Trash-root
[root@rac-1 u01]# ls -al /app
ls: /app: No such file or directory
[root@rac-1 u01]# ls -al app
total 28
drwxrwxr-x 7 root root 4096 Apr 11 21:53 .
drwxrwxr-x 8 504 500 4096 Feb 22 15:29 ..
drwxrwxr-x 5 root root 4096 Apr 11 21:53 11.2.0
drwxrwxr-x 2 root root 4096 Apr 11 21:53 grid1
drwxrwxr-x 2 500 501 4096 Feb 16 17:35 oracle
drwxrwxr-x 2 oracle1 oinstall 4096 Apr 11 21:31 oracle1
drwxrwxr-x 4 oracle1 oinstall 4096 Feb 17 16:54 oraInventory
[root@rac-1 u01]# ls -al app/11.2.0
total 20
drwxrwxr-x 5 root root 4096 Apr 11 21:53 .
drwxrwxr-x 7 root root 4096 Apr 11 21:53 ..
drwxrwxr-x 2 504 501 4096 Feb 16 17:34 grid
drwxrwxr-x 2 root root 4096 Apr 11 21:53 grid1
drwxrwxr-x 2 504 501 4096 Feb 17 15:35 oraInventory
[root@rac-1 u01]#
Fix the root:root permissions. And I am wondering why I see numerical user IDs instead of the owner names…?
The cluster configuration failed; now I need to set up asmca, netca etc. on my own. Can you give me any pointer or link?
Once an assistant has failed it is not advisable to run them manually. My recommendation: check the error logs, fix the problems and restart the installation.
Hi
Great blog!
Thank you for responding to my thread at technet forum. That’s where I got the URL of your blog.
I have done a similar install, but I have used different groups for oracle and grid users.
oracle is a member of oinstall, dba and asmdba
grid is a member of oinstall, asmdba and asmadmin groups.
I had issues with privileges when writing to diag directory which I resolved by having different ORACLE_BASE for grid and oracle user.
I am also not sure if running a listener as a grid user causes any issues. An issue that I found so far is that users who are not oracle or grid cannot establish local BEQ connections to an instance. I logged an SR for this, but Oracle support was unable to help, I assume they just have a basic install where everything is installed as oracle so they could not reproduce my problem.
I wonder if it is worth installing as two separate users if the same team looks after the grid infrastructure and RAC? I would like to know what other people do. I did not have a chance to talk to anyone who is actually using 11gR2 in production.
Thanks
Hi,
I always separate the database from the grid installation – just as Oracle recommends.
Regarding directory permissions, I usually set them on installation to 775 (oracle:dba or grid:dba) and tighten them after installation.
The same is true for the RAC installation.
Feel free to ask if there are more questions.
hi
I have one or two questions. May I?
What is multipathing? How is it helpful to the grid? Any doc link would be helpful…
Is it possible to configure multipathing on just two C2D machines?
How do I install the clusterware if multipathing is configured? Will ASM take care of multipathing?
regards
> What is multipathing? How is it helpful to the grid? Any doc link would be helpful…
Multipathing is a type of connection redundancy often used in SAN or network environments where you have at least two redundant paths from your server to your SAN box or switch. This involves two network cards in your server, two switches and so on. Everything is there twice. If one component fails there is no interruption in service because the other path is used.
Often multipathing is combined with some sort of load balancing to increase bandwidth. But that depends.
Multipathing is configured at operating system level and has nothing to do with the clusterware itself. In my documents I already describe network bonding which – with two interfaces – can also be seen as a form of multipathing.
In SAN environments you have to use vendor multipathing software or the Linux multipathing software "dm" (device-mapper multipath).
And how is it configured with Linux massive parallel processing?
> And how is it configured with Linux massive parallel processing?
Sorry, but I don't understand your question.
@Ronny Egner
I.e. because I have to set up user equivalence, I deleted the user… but their ID is still there… some permissions and folders were not changed…
I have changed the settings now.
Thanks
regards
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/pdf/DM_Multipath.pdf
It looks clumsy to me.
Can you point me to an easy reference for editing multipath.conf?
You find examples in the man page.
Another post can be found here: http://blog.ronnyegner-consulting.de/2009/12/09/persistent-device-naming-or-binding-for-running-rac-on-linux-10g-r2-clusterware-and-above/#more-1442
Currently I am getting a failure related to ASM (PRVF-9994) on my second node.
What should I do?
Device Checks for ASM – This is a pre-check to verify if the specified devices meet the requirements for configuration through the Oracle Universal Storage Manager Configuration Assistant.
Check Failed on Nodes: [rac-2] Check Succeeded On Nodes: [rac-1]
Verification result of failed node: rac-2
Expected Value
: n/a
Actual Value
: n/a
List of errors:
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb9” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb8” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb7” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb6” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb5” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb4” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb3” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdb2” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sda7” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sda6” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sda5” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdh” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sda4” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
–
PRVF-9994 : Owner, group, permission information could not be obtained for device(s) “/dev/sdg1” on node “rac-2” – Cause: Owner, group and permission information could not be obtained for devices listed on the nodes listed. – Action: “Ensure that the correct devices were specified and that they are created on the indicated node. Make sure that the path exists and is accessible by the user.
Check on BOTH nodes:
– do the device nodes exist?
– are permissions set accordingly (i.e. writable for the grid installation user)?
@Anonymous
Hi, I am encountering the same issue now. Would you please share the resolution to your case?
Hi Ronny Egner
I have gone through the doc; it is excellent.
1) Question:
Do we need to create the DNS server on RAC node 1? I created it on a separate node for my two-node RAC cluster installation.
Can you please give me some information regarding this?
thanks
goutham
Your DNS server does not have to run on the RAC nodes themselves. Use the (hopefully) existing DNS server in your company. If there is none, install your own DNS server on another machine.