Archive for September, 2009

Oracle 11g Release 2 – How to enable SSH equivalence

September 30th, 2009 No comments

There is a problem with shared Oracle homes on ACFS documented here.

However, if you have not configured SSH equivalence before, this short guide shows how to set up SSH equivalence between the nodes "rac1.regner.de" and "rac2.regner.de":

1. Unpack the database installation files

2. Execute:

<PATH_TO_DATABASE_INSTALLATION>/sshsetup/sshUserSetup.sh
  -user ora11p -hosts "rac1.regner.de rac2.regner.de" -shared

Arguments:
-user: the name of the user for which SSH equivalence is to be configured
-hosts: space-separated list of node names
-shared: indicates a shared Oracle home (on ACFS or NFS)

3. Check it

/usr/bin/ssh -o FallBackToRsh=no  -o PasswordAuthentication=no 
  -o StrictHostKeyChecking=yes  -o NumberOfPasswordPrompts=0 rac2 /bin/date
/usr/bin/ssh -o FallBackToRsh=no  -o PasswordAuthentication=no
  -o StrictHostKeyChecking=yes  -o NumberOfPasswordPrompts=0 rac1 /bin/date

The two commands above should return the current date. That's it.
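If you prefer to (or have to) set up the equivalence by hand instead of using the script, a rough sketch with standard OpenSSH tooling looks like this – run as the user in question (here: ora11p) and repeat the key distribution for every node, including the local one:

ssh-keygen -t rsa                      # accept the defaults; use an empty passphrase
ssh-copy-id ora11p@rac1.regner.de      # appends the public key to ~/.ssh/authorized_keys on the target
ssh-copy-id ora11p@rac2.regner.de

Note that this is generic OpenSSH usage, not something specific to the Oracle installer; directory permissions on $HOME and ~/.ssh still have to satisfy sshd's StrictModes checks.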

Categories: Oracle 11g Release 2 Tags:

Oracle 11 Release 2 Install Guide – Creating a RAC database

September 30th, 2009 3 comments

This is the final part of the Oracle 11g Release 2 installation guide. At this point you should have:

  • Oracle database and grid infrastructure binaries installed
  • at least one, but preferably two, disk groups configured for:
    • database and binary files
    • flash recovery area

Our database name will be “ORA11P”.

Preparations

  • create the directory for the configuration tool logs (cfgtoollogs):
mkdir -p /u01/app/oracle/cfgtoollogs
chown root:dba /u01/app/oracle/cfgtoollogs
chmod 775 /u01/app/oracle/cfgtoollogs
  • Check SSH equivalence once again:
    execute as user "ora11p" (this is the user holding the Oracle database binary installation):
/usr/bin/ssh -o FallBackToRsh=no  -o PasswordAuthentication=no 
  -o StrictHostKeyChecking=yes  -o NumberOfPasswordPrompts=0 rac2 /bin/date
/usr/bin/ssh -o FallBackToRsh=no  -o PasswordAuthentication=no 
  -o StrictHostKeyChecking=yes  -o NumberOfPasswordPrompts=0 rac1 /bin/date

Both commands should return the current date. If there are errors like "connection refused", check:

  • network connectivity between both nodes
  • does a "normal" SSH connection work?:
    • ssh root@rac1-priv
    • ssh root@rac1
    • ssh root@rac2-priv
    • ssh root@rac2

All of these commands should prompt for a password.

If SSH was working before and seems “gone” you might face a problem documented here.

  • add ACFS oracle home to cluster registry
$GRID_HOME/bin/srvctl add filesystem -d /dev/asm/ora11p_home-132
  -v ora11p_home -g DATA2 -m /u01/app/oracle/product/11.2.0/ora11p -u ora11p

Arguments:
-d: path of the ADVM volume device
-v: name of the volume
-g: name of the disk group containing the volume
-m: mount point path
-u: user allowed to mount and use the file system

"-u ora11p" is important and must match the owner of the database binary installation; otherwise dbca will raise an error complaining about missing permissions.
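Once registered, the file system resource can be started and checked through the clusterware stack. A short sketch (resource and device names as created above, $GRID_HOME assumed to point to the grid infrastructure home):

$GRID_HOME/bin/srvctl start filesystem -d /dev/asm/ora11p_home-132
$GRID_HOME/bin/crsctl stat res -t | grep -i acfs     # the ora11p_home ACFS resource should show ONLINE on both nodes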

Create the database with dbca

Start dbca

(Screenshots step6_010 through step6_030: dbca wizard steps for creating the RAC database.)
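If you prefer the command line over the wizard, dbca can also be driven in silent mode. The following is only a rough sketch – passwords and disk group names are placeholders, and the option names should be double-checked against "dbca -help" for your exact version:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName ORA11P -sid ORA11P \
  -sysPassword change_me -systemPassword change_me \
  -storageType ASM -diskGroupName DATA \
  -recoveryAreaDestination +FRA \
  -nodelist rac1,rac2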

Where to go now?

  • Tune the instance, for example:
    • memory_target
    • db_writer_processes
    • sessions parameter
  • Configure, perform and test your backup
Categories: Oracle 11g Release 2 Tags:

Oracle 11g Release 2 – SCAN explained

September 30th, 2009 8 comments

Starting with Oracle 11g Release 2, Oracle introduced a new cluster database connection concept: SCAN, the "Single Client Access Name".

SCAN on the server side

When installing an 11g Release 2 grid infrastructure you are asked for the cluster name, which will become part of the SCAN. The notation for the SCAN is:

<clustername>-scan.<domain>

For instance if your cluster is named “rac” and the domain “regner.de” the SCAN name will be “rac-scan.regner.de”.

In order to successfully install grid infrastructure you need to configure your DNS (hosts file entries will not work!) to resolve this name before starting the installation. Oracle requires at least one, preferably three, IPs configured for the SCAN name. Your DNS zone file might look like this:

scan_example

 

In the example zone file above we configured three IPs for the scan: 172.23.15.3, 172.23.15.4 and 172.23.15.5.
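In case the screenshot is hard to read, the relevant part of such a zone file boils down to three A records for the same name; a minimal sketch for the zone "regner.de" (SOA/NS records omitted):

rac-scan    IN  A   172.23.15.3
rac-scan    IN  A   172.23.15.4
rac-scan    IN  A   172.23.15.5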

After installation you will find three SCAN listener processes running (distributed across the cluster nodes) as shown in the following figure:

scan_example

The listeners for SCAN2 and SCAN3 are running on node "rac1" and the listener for SCAN1 is running on node "rac2". For each SCAN listener there is a dedicated virtual IP configured on a network interface with the same address as in DNS:

scan_interfaces
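You can also verify the SCAN setup from the command line with the grid infrastructure tools, for example (run from the grid home):

srvctl config scan              # shows the SCAN name and its VIPs
srvctl status scan_listener     # shows on which node each SCAN listener is running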

Client connections

Connecting to database "RAC11P" using the SCAN would use this tnsnames.ora entry:

RAC11P =
 (DESCRIPTION=
 (ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
 (CONNECT_DATA=(SERVICE_NAME=RAC11P))
 )

The “old fashioned” way still works:

RAC11P_old =
 (DESCRIPTION=
 (ADDRESS_LIST=
 (ADDRESS=(PROTOCOL=tcp)(HOST=rac1-vip.regner.de)(PORT=1521))
 (ADDRESS=(PROTOCOL=tcp)(HOST=rac2-vip.regner.de)(PORT=1521))
 )
 (CONNECT_DATA=(SERVICE_NAME=RAC11P))
 )

Connecting to a named instance:

RAC11P =
 (DESCRIPTION=
 (ADDRESS=(PROTOCOL=tcp)(HOST=rac-scan.regner.de)(PORT=1521))
 (CONNECT_DATA=(SERVICE_NAME=RAC11P)
 (INSTANCE_NAME=RAC11P1))
 )
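For a quick test without any tnsnames.ora entry, the EZCONNECT syntax works against the SCAN as well (username is just an example):

sqlplus scott@//rac-scan.regner.de:1521/RAC11P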
Categories: Oracle 11g Release 2 Tags:

Storing Oracle database connection strings in OpenLDAP instead of OID

September 30th, 2009 20 comments

Storing Oracle database connection strings in OpenLDAP – or – “Oracle Names for the masses”

Typically Oracle stores its database connection strings in a file called TNSNAMES.ORA. Besides storing every database connection string in this file, there are other naming methods such as EZCONNECT, Oracle Names ("ONAMES", deprecated with 10g and later) and LDAP.

I have seen a lot of sites with a centralized tnsnames.ora file placed on a CIFS share and accessed by every Oracle client. Using such a technique with Unix hosts is not straightforward (I am no friend of having CIFS shares mounted on database servers) and is often not possible due to security restrictions. Sometimes I have also seen the tnsnames.ora distributed with rsync or simply copied on demand.

From my point of view there is a superior solution for this problem: Oracle Names.
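As a teaser, the client-side configuration usually boils down to two small files. A hedged sketch – server name, port and context are assumptions, the details follow in the full post:

# sqlnet.ora: try the directory first, fall back to the local tnsnames.ora
NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES)

# ldap.ora: where to find the directory and the base context
DIRECTORY_SERVERS = (ldap.regner.de:389)
DEFAULT_ADMIN_CONTEXT = "dc=regner,dc=de"
DIRECTORY_SERVER_TYPE = OID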

Read more…

Categories: Oracle in general Tags:

Oracle 11g Release 2 – SSH equivalence gone after reboot

September 28th, 2009 5 comments

Today I came across a strange phenomenon:

I had set up SSH equivalence during the database installation, but after rebooting the system some tool complained about misconfigured SSH equivalence. So I dug a little and found that SSH was complaining about wrong directory permissions:

Sep 28 13:57:03 rac1 sshd[31620]: Authentication refused: bad ownership or modes for directory /u01/app/oracle/product/11.2.0/ora11p

The directory mentioned here is a shared Oracle home on ACFS and at the same time the home directory of the oracle software owner. After changing the directory mode to 755, SSH equivalence was restored. But after stopping and starting the ACFS resource the equivalence was gone again.

So I looked into it a little further.

Let's first check the directory permissions with the ACFS resource stopped and the file system not mounted:

[root@rac1 ~]# ll /u01/app/oracle/product/11.2.0/
total 4
drwxr-xr-x 2 ora11p dba 4096 Sep 13 00:02 ora11p
[root@rac1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              26G   13G   12G  52% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                 1.9G  164M  1.7G   9% /dev/shm

The directory mode is 755.

Now let's mount the ACFS volume manually and check the permissions again:

[root@rac1 ~]# mount /dev/asm/ora11p_home-132 /u01/app/oracle/product/11.2.0/ora11p/ -t acfs
[root@rac1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              26G   13G   12G  52% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                 1.9G  164M  1.7G   9% /dev/shm
/dev/asm/ora11p_home-132
 10G  4.7G  5.4G  47% /u01/app/oracle/product/11.2.0/ora11p
[root@rac1 ~]# ll /u01/app/oracle/product/11.2.0/
total 16
drwxrwx--- 83 ora11p dba 12288 Sep 25 10:37 ora11p

Mounted, the directory shows mode 770 – no surprise that SSH complains. So we change it to 755 as required by SSH:

[root@rac1 ~]# chmod 755 /u01/app/oracle/product/11.2.0/ora11p
[root@rac1 ~]# ll /u01/app/oracle/product/11.2.0/
total 16
drwxr-xr-x 83 ora11p dba 12288 Sep 25 10:37 ora11p

The directory now shows the required permissions. We will unmount and mount the file system again:

[root@rac1 ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3             26306612  12828996  12119724  52% /
/dev/sda1               101086     11683     84184  13% /boot
tmpfs                  1933304    167016   1766288   9% /dev/shm
/dev/asm/ora11p_home-132
 10485760   4864328   5621432  47% /u01/app/oracle/product/11.2.0/ora11p
[root@rac1 ~]# umount /u01/app/oracle/product/11.2.0/ora11p
[root@rac1 ~]#
[root@rac1 ~]# umount /u01/app/oracle/product/11.2.0/ora11p
[root@rac1 ~]# mount /dev/asm/ora11p_home-132 /u01/app/oracle/product/11.2.0/ora11p/ -t acfs           
[root@rac1 ~]# ll /u01/app/oracle/product/11.2.0/
total 16
drwxr-xr-x 83 ora11p dba 12288 Sep 25 10:37 ora11p

There we are: permission mode 755 is consistent across manual mounts. So let's unmount everything and start the ACFS resource again:

[root@rac1 ~]# umount /u01/app/oracle/product/11.2.0/ora11p
[root@rac1 ~]#
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start resource  ora.data2.ora11p_home.acfs
CRS-2672: Attempting to start 'ora.data2.ora11p_home.acfs' on 'rac1'
CRS-2672: Attempting to start 'ora.data2.ora11p_home.acfs' on 'rac2'
CRS-2676: Start of 'ora.data2.ora11p_home.acfs' on 'rac2' succeeded
CRS-2676: Start of 'ora.data2.ora11p_home.acfs' on 'rac1' succeeded
[root@rac1 ~]# ll /u01/app/oracle/product/11.2.0/
total 16
drwxrwx--- 83 ora11p dba 12288 Sep 25 10:37 ora11p

The directory mode was changed from 755 back to 770... but by which component, and based on which configuration?

So far I have not found a solution for this behavior. I will open an SR and see what Oracle thinks about this "feature".

In the meantime, to get rid of this behavior you can set "StrictModes no" in sshd_config to prevent sshd from checking directory ownership and permissions.
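A minimal sketch of that workaround (to be applied on every node; keep in mind that StrictModes is a security check, so weigh this carefully):

# /etc/ssh/sshd_config
StrictModes no

# afterwards reload the SSH daemon, e.g. on RHEL/OEL:
service sshd reload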

Partition layout on Linux – Recommendations

September 24th, 2009 3 comments

Partition recommendations for Oracle on Linux

Every system engineer, and even every DBA, comes across the following question from time to time: "How do I arrange my partitions for running Oracle on Linux?"

In the following I am writing down my thoughts and recommendations on partition/storage design on the Linux operating system for Oracle. My opinion is not the only valid one and you are free to develop your own standards, but maybe the following lines help you.

Which file system shall I use?

Metalink has several notes on which file systems can or should be used for Oracle on Linux. Metalink note 236826.1 states:

------- Begin Quote ---------

Supported File Systems

One of the most interesting features of the Linux OS is its support for multiple and various file systems. File systems can be defined and built on a partition basis. VFAT, ext2, ext3 and Reiser file systems can co-exist on the same Linux system, along with several other file systems and raw partitions (Note 224302.1 – Raw Devices on RedHat Advanced Server – FAQ).

Your choice of which one to use then becomes based on supportability, reliability, security and performance. Oracle generally does not certify file systems and does certify operating systems, but Linux is a specific case. On different Linux distributions, Oracle might choose to have certifications on different filesystems. When a problem is encountered, if a non-supported filesystem is being used for Oracle data, control and redo log files, Oracle will try to reproduce the problem on a supported and certified filesystem. If the problem does not reproduce on a certified filesystem, the specific filesystem / O/S vendor should be engaged for resolution.

The current support includes ext2, ext3 and NFS-based storage systems (e.g. NetApp). To be acceptable for database usage, the file system must support certain minimum functions such as write acknowledgement, write-through cache and file permissions. Counter examples are simple NFS, which lacks acknowledge and VFAT, which lacks file permissions.

Recommended File Systems

There are various file systems available for Linux OS:

– The ext2 and ext3 file systems are robust and fully supported. ext2 was the default file system under the 2.2 kernel. ext3 is simply the enhanced ext2 filesystem having a journaling feature. ext3 is the default filesystem for RHEL3 and 4.

– Oracle Cluster File System (OCFS) is a shared file system designed specifically for Oracle Real Application Cluster (RAC). OCFS eliminates the requirement for Oracle database files to be linked to logical drivers. OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements (Note 224586.1). OCFS2 is the next generation of the Oracle Cluster File System for Linux. It is an extent based, POSIX compliant file system.

– Unlike the previous release (OCFS), OCFS2 is a general-purpose file system that can be used for shared Oracle home installations making management of Oracle Real Application Cluster (RAC) installations even easier.

In summary, the recommended filesystems are:

– Single node: Any filesystem that is supported by the Linux vendor. Note that any filesystem issues need to be resolved by the Linux vendor.

– Multi-node (RAC): OCFS, raw, NFS-based storage systems (e.g. NetApp). (See Note 329530.1, Note 423207.1 and Note 279069.1 for GFS support)

------- End Quote ---------

From the author's point of view the following file systems/technologies should be used for Oracle on Linux:

  • single node oracle
    • ext3
    • ASM
  • multi-node (RAC) oracle
    • ASM
    • NFS

Today's hard disk sizes practically require a journaling file system. Using a non-journaling file system (like ext2) will create problems after a system crash, when checking the file system on larger hard disks can take several hours. That is completely avoidable. Using ext4 is not recommended for stability reasons and its reliability has yet to be proven; in addition, ext4 is currently not supported anyway. XFS is only supported on SuSE Linux Enterprise Server and, according to Note 414673.1, it cannot use asynchronous and direct I/O simultaneously. The same note also states users should not use XFS due to an undocumented bug which can lead to redo log corruption. At the time of writing the future of reiserfs is uncertain since Hans Reiser (the developer of the file system) was sent to jail for murder.

So ext3 is currently the only sensible choice for Oracle on Linux: it is quite stable and – most important for use with LVM – it supports online file system growth. For shrinking, the file system still needs to be unmounted. Please note that ext3 might force a full file system check every X months or every Y mounts; you can tune this behavior with "tune2fs".
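A short sketch of how to inspect and relax those periodic checks with tune2fs (the device name is just an example):

tune2fs -l /dev/vg_root/lv_u01 | grep -i -e 'mount count' -e 'check'    # show the current counters and intervals
tune2fs -c 0 -i 0 /dev/vg_root/lv_u01                                   # disable mount-count and time-based checks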

For clusters I do not recommend OCFS because, from my point of view, ASM is the future and the more advanced piece of technology. With 11g Release 2 Oracle extended ASM to a complete volume manager and added a cluster file system on top of it. You can read more about that topic here.

LVM – the logical volume manager

What is a logical volume manager?

Using a so-called "logical volume manager" – short: LVM – makes it possible to abstract disk devices from file systems or partitions. An LVM takes one or more disks and forms a storage unit. This unit is named differently depending on the LVM being used; the Linux LVM calls it a "volume group".

The following picture illustrates how the LVM works:

lvm

As you can see, the LVM takes one or more disks ("physical devices" or "physical volumes") and forms a so-called "volume group". There can be one or more volume groups, and a disk always belongs to exactly one volume group. From the volume group, storage is exported as "logical volumes". On top of a logical volume a file system can be created and mounted, or the logical volume can be used directly as a raw device.

More information can be found in the linux lvm howto.
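To make these terms concrete, here is a minimal sketch of creating a volume group and a logical volume with the Linux LVM tools (device, names and sizes are examples):

pvcreate /dev/sdb1                           # turn the partition into a physical volume
vgcreate vg_data /dev/sdb1                   # form a volume group from it
lvcreate -n lv_example -L 10G vg_data        # carve out a 10 GB logical volume
mkfs.ext3 /dev/vg_data/lv_example            # create a file system on top of it
mount /dev/vg_data/lv_example /mnt/example   # and mount it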

Features

  • Snapshots
  • Online-Resize (growing and shrinking) of logical volumes (not the file systems!)
  • supports multipathing
  • supports several RAID levels (concat, raid-0, raid-1, raid-5)

Restrictions / Disadvantages

Every recent Linux distribution (Red Hat ES/AS 4, Red Hat ES/AS 5, OEL 5, SuSE Linux Enterprise Server 10, SuSE Linux Enterprise Server 11) offers the possibility to use LVM out-of-the-box. However there are a few restrictions:

  • Due to limitations in the GRUB bootloader, /boot (i.e. the kernel and initial ram disk) cannot be placed inside LVM; /boot needs to be a separate partition. There is a re-implementation of GRUB which will offer the possibility to boot from a /boot encapsulated in LVM.
  • Fixing boot problems is a little more complex because accessing the volumes requires the LVM to be started and supported by the rescue system. Most distributions offering LVM support this in their rescue systems as well; if not, you can use SystemRescueCD (see the "Tools" section at the end of this post).
  • Although it is possible to use LVM in multipathing scenarios most storage vendors require dedicated kernel drivers coming from the storage vendors directly.
  • Although an LVM can resize a logical volume, the file system stored on that logical volume cannot be resized by the LVM directly. For that you have to use the file-system-specific tools (for ext2/ext3 the tool is named "resize2fs"; see the sketch after this list).
  • Resizing a LUN/disk already used by LVM requires a reboot after the partition used by the LVM has been edited with fdisk to reflect the new partition size, so the kernel recognizes the new size of the partition (this is also true for non-LVM configurations).
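As mentioned in the list above, growing a volume is a two-step operation; a sketch for an ext3 file system (volume name is an example, ext3 can be grown while mounted):

lvextend -L +10G /dev/vg_root/lv_u01    # grow the logical volume by 10 GB
resize2fs /dev/vg_root/lv_u01           # grow the ext3 file system to fill the volume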

Shall I use an LVM?

From my point of view an LVM shall be used for:

  • easier and more flexible management of operating system storage
  • easier and more flexible management of the Oracle database binary installation location (according to the OFA this is /u01)

An LVM shall not be used for:

  • Multipathing if it is not supported/certified by your storage vendor
  • Storing any type of Oracle database files (this includes: data files, archive logs, redo logs, flashback logs, control files, ...): LVM introduces an additional layer of software, complexity and potential problems, especially in conjunction with Oracle data files.
  • SWAP: There might be scenarios where the kernel runs out of memory and needs some pages to be swapped out; if swap is LVM encapsulated, the kernel might not be able to complete the request because the LVM itself requires memory which is not available – voila: a race condition! In addition, swapping is already slow, and encapsulating it in LVM makes it even slower. Furthermore, swap is a quite static device.

Putting it all together

Partitions

Boot disks

Due to the restrictions mentioned above a typical hard disk layout for the boot disk (here: “/dev/sda”) looks like this:

Device          Size            Part.-Type       File system     Mountpoint
/dev/sda1       150 MB          linux            ext3            /boot
/dev/sda2       depends         linux swap       -               -
/dev/sda3       remain. space   linux lvm (8e)   -               -

The size of the swap space depends directly on the amount of memory you have. According to the Oracle documentation you will need up to 16 GB of swap (if you have more than 16 GB of memory, the swap size stays at 16 GB regardless of the actual memory size).

Non-Boot disks (for use with LVM)

For any additional disk used by the LVM the partition layout might look like this:

Device          Size            Part.-Type       File system     Mountpoint
/dev/sdb1       whole disk      linux lvm (8e)   -               -

Non-Boot disks (NOT used by the LVM)

Device          Size            Part.-Type       File system     Mountpoint
/dev/sdc1       whole disk      linux            ext3            /u02

Mount Points / Logical Volume / Physical device separation

The following listing is my standard listing for oracle database servers on linux:

Device                           Mount Point      File system      Size
/dev/sda1                        /boot            ext3             150 MB
/dev/sda2                        swap             swap             depends on memory
/dev/mapper/vg_root/lv_root      /                ext3             15 GB
/dev/mapper/vg_root/lv_var       /var             ext3             10 GB
/dev/mapper/vg_root/lv_tmp       /tmp             ext3             20 GB
/dev/mapper/vg_root/lv_u01       /u01             ext3             50 GB
/dev/sdc1                        /u02             ext3             complete disk
/dev/sdd1                        /u03             ext3             complete disk
...
/dev/sdN                         /u0N             ext3             complete disk

As you can see, /boot and swap are not LVM encapsulated due to the restrictions outlined above. /, /var, /tmp and /u01 are separated into logical volumes with appropriate sizes. You might think the volume sizes are way too large: you are right. I tend to make the logical volumes a little larger so I do not need to resize them too soon. Every resize operation carries the risk of total data loss, so it is advisable to resize file systems or logical volumes as rarely as possible. The first-level directories /var, /tmp and /u01 get their own logical volumes because data will be placed underneath these top-level directories.

If you like, you can even have separate volumes for /home or for the Oracle admin or diagnostic directories.

I do not have a separate volume for /home because on my systems no users besides "root" and the oracle user are allowed to store files. If you need space for user directories I recommend creating a dedicated /home volume as well.

With this volume layout we are using 50+20+10+15 GB = 95 GB, plus 150 MB for /boot, plus swap. Today's smallest server disks have at least 147 GB capacity (formatted, something like 138 GB), so this layout fits easily on such disks. If you have a server with 73 GB disks you can shrink primarily /u01 and, after that, the other volumes.

Tools

Categories: Oracle in general Tags:

Increasing data checks performed by RMAN with "check logical" switch

September 24th, 2009 No comments

By default RMAN checks for physical corruption in data files when performing backups. For a logical check you can add "check logical" to your RMAN backup command. With this clause RMAN also checks block integrity logically.

A few examples of how to use it:

backup validate check logical database;

backup check logical database;

backup check logical incremental level 0 filesperset 4 database;

Checking for both logical and physical corruption requires the init.ora parameter "db_block_checksum" to be set to "true", which is also the default. This parameter forces Oracle to calculate a checksum for each block before writing it to disk. Doing logical checks with RMAN adds a slight performance overhead to backups – typically 1 to 10 percent.

It is not recommended to turn this off for performance reasons, because the overhead is minimal (< 2%) and you would be unable to detect corruption!
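To verify the parameter and to see which blocks a "backup validate check logical" flagged, a short SQL*Plus sketch:

show parameter db_block_checksum

-- blocks found corrupt by the last validate run
select * from v$database_block_corruption;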

Categories: Oracle in general Tags:

RMAN in Data Guard configurations – Problems restoring RMAN disk backups at the standby site and vice versa

September 24th, 2009 3 comments

… or: the problem of backup association

From time to time I come across the same problem in Data Guard configurations using RMAN disk backups: backups taken on the standby database are not taken into account when trying to perform an RMAN restore on the primary site, and vice versa.

This is expected behavior according to the Oracle documentation:

The accessibility of a backup is different from its association. In a Data Guard environment, the recovery catalog considers disk backups as accessible only to the database with which it is associated, whereas tape backups created on one database are accessible to all databases. If a backup file is not associated with any database, then the row describing it in the recovery catalog view shows null for the SITE_KEY column. By default, RMAN associates files whose SITE_KEY is null with the database to which it is connected as TARGET.

RMAN commands such as BACKUP, RESTORE, and CROSSCHECK work on any accessible backup. For example, for a RECOVER COPY operation, RMAN considers only image copies that are associated with the database as eligible to be recovered. RMAN considers the incremental backups on disk and tape as eligible to recover the image copies. In a database recovery, RMAN considers only the disk backups associated with the database and all files on tape as eligible to be restored.

To illustrate the differences in backup accessibility, assume that databases prod and standby1 reside on different hosts. RMAN backs up datafile 1 on prod to /prmhost/disk1/df1.dbf on the production host and also to tape. RMAN backs up datafile 1 on standby1 to /sbyhost/disk2/df1.dbf on the standby host and also to tape. If RMAN is connected to database prod, then you cannot use RMAN commands to perform operations with the /sbyhost/disk2/df1.dbf backup located on the standby host. However, RMAN does consider the tape backup made on standby1 as eligible to be restored.

Note: This is a quote from the 11g R1 documentation. The same text appears in the 10g R2 documentation as well.

Categories: Oracle in general Tags:

Managing the AWR (automatic workload repository)

September 24th, 2009 1 comment

Overview on the AWR

The automatic workload repository (AWR) was introduced with Oracle 10g. It collects performance data at regular intervals (default: 60 minutes) and keeps it by default for one week. The data is stored in the SYSAUX tablespace.

The AWR collects:

  • Wait events
  • time model statistics (views: V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)
  • active session history information (view: V$ACTIVE_SESSION_HISTORY)
  • system statistics (views: V$SYSSTAT, V$SESSTAT)
  • resource-intensive SQL statements
  • information on object usage

For AWR to collect this data the database parameter STATISTICS_LEVEL has to be set to TYPICAL (the default) or ALL. Setting it to BASIC disables data collection!

Changing the AWR settings

Changing the snapshot interval and retention time

For changing the snapshot interval and/or the retention time use the following syntax:

exec dbms_workload_repository.modify_snapshot_settings( -
  interval => 60, -
  retention => 525600);

The example printed above changes the retention time to one year (60 minutes per hour * 24 hours a day * 365 days per year = 525600 minutes). It does not alter the snapshot interval (60 minutes is the default). You can alter the interval as well, but keep in mind that you will need additional storage to do so. Setting the interval value to zero completely disables data collection.

According to the Oracle documentation, depending on the size of the database, the SYSAUX tablespace which contains the AWR objects can be as big as 5 GB in the default configuration (60-minute snapshot interval, 7 days retention time). So if you want to keep your data for one year you might end up with a SYSAUX tablespace of about 260 GB (assuming linear growth with the retention time).
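The currently active interval and retention can be checked with a simple query against the standard AWR views:

select snap_interval, retention
from dba_hist_wr_control;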

Manually creating a snapshot

You can of course create a snapshot manually by executing:

exec dbms_workload_repository.create_snapshot;

Query available snapshots

To list the available snapshots you can use this query:

select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
order by snap_id;

Removing a snapshot

To remove a range of snapshots just execute:

exec dbms_workload_repository.drop_snapshot_range
  (low_snap_id=>1, high_snap_id=>10);
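Creating an AWR report

To turn two snapshots into a readable report, the standard AWR report script can be used; it prompts for the report type and the snapshot IDs:

@?/rdbms/admin/awrrpt.sql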
Categories: Oracle in general Tags:

Oracle 11 Release 2 Install Guide – Install Oracle RAC

September 23rd, 2009 6 comments

System configuration

Hardware configuration

  • two virtual machines (VMWARE)
    • 1 vCPU
    • 4 GB RAM
    • 40 GB local Disk
  • Storage exported via ISCSI
    • 4 LUNs with 10 GB each
    • 2 LUNs with 30 GB each

Operating system configuration

  • Oracle Enterprise Linux 5.3 x86_64 (Kernel 2.6.18-128.el5)
  • Installed packages: default system + development packages

Cluster configuration

  • Cluster name: "RAC"
  • Grid binary installation on local disk
  • OCR, voting files and datafiles stored in ASM
  • Oracle HOME stored on an ACFS file system (this means we will have a SHARED home directory among all nodes!)
  • Oracle HOME will be installed under user "ora11p"

Installation – Requirements and Preparations

Installation Steps outlined

  1. Installation of Oracle 11g Release 2 Grid Infrastructure → done here
  2. Installation of Oracle 11g Release 2 Database (rac installation)
    1. Review system requirements
    2. Install database software (aka „binaries“)
    3. Create database

General requirements

All requirements from the grid infrastructure installation apply here as well, for instance:

  • Kernel parameters
  • Limits (especially configure limits for the new user "ora11p" which will hold the database binary installation)
  • Synchronized time
  • DNS-resolvable SCAN name
  • working public and private interconnect
  • shared and accessible storage
  • at least one, but preferably two, disk groups created:
    • one for database files and the binary installation
    • one for the flash recovery area
  • Note: SSH equivalence will be set up by the installer

Preparations

Create ACFS file system for storing oracle binary installation

  • Create an additional ACFS mount directory:
    mkdir -p /u01/app/oracle/product/11.2.0/ora11p
  • Create ADVM volume

step4_010

  • Create ACFS file system

step4_011

  • DO NOT register the ACFS file system mount point for the Oracle home directory! We will do this later (when creating the database) as a clusterware resource
  • Mount the ACFS file system on both nodes ("mount /dev/asm/ora11p_home-132 /u01/app/oracle/product/11.2.0/ora11p -t acfs")

step4_014

 

Create the user on both nodes:

useradd -u 501 -g dba -d /u01/app/oracle/product/11.2.0/ora11p ora11p
passwd ora11p
chown -R root:dba /u01/app/oracle/product
chmod -R 775 /u01/app/oracle/product
chown -R ora11p:dba /u01/app/oracle/product/11.2.0/ora11p

Create a .bash_profile for user ora11p (note: since the home directory is on ACFS, changes on node A will be visible on node B as well, so the profile file only needs to be changed once):

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/ora11p
export ORACLE_SID=ORA11P
if [ -t 0 ]; then
stty intr ^C
fi
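Optionally you can extend the PATH in the same profile so the Oracle binaries are found directly; a small, non-mandatory addition:

export PATH=$ORACLE_HOME/bin:$PATH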

One word on Cluster Verification utility (cluvfy)

  • should be used prior to installation
  • At this point the user "ora11p" does not have SSH equivalence configured
  • So there are two options:
    • start cluvfy after the installer has set up SSH equivalence
    • set up SSH equivalence manually
  • I personally prefer to run cluvfy after the installer has set up SSH equivalence

However, the database installer seems to run a more or less complete check automatically anyway.

If you want to start cluvfy manually, here is the syntax:

cluvfy stage -pre dbinst -fixup -n nodeA,nodeB -r <release> -osdba <name of sysdba group on unix> -verbose
<release> is: 10gR1, 10gR2, 11gR1, 11gR2
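Applied to this cluster, the call might look like the following sketch (the OSDBA group "dba" is taken from the useradd command above):

cluvfy stage -pre dbinst -fixup -n rac1,rac2 -r 11gR2 -osdba dba -verbose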

Installation

Start the installer as user "ora11p" on any node

step4_015

For testing purposes I deselected the update notifications; in production environments enabling them is highly recommended

step4_016

We will install the database software only and create the database later

step4_017

Both nodes were discovered correctly…

step4_018

Because we are installing as user "ora11p", which is different from the user holding our grid infrastructure installation, we need to set up passwordless SSH connectivity

!! You also need to select "user home is shared" because we are using ACFS !!

step4_019


SSH connectivity successfully established…

step4_020

Select language support

step4_021

Select Edition

step4_022

Select Options

step4_023

Select ORACLE_BASE and ORACLE_HOME; we set them in the profile file so they are filled in automatically by the installer

step4_024

step4_025

Installation Summary

step4_026

Install process

step4_027

step4_028

Installation nearly finished... just run "root.sh" as root on all nodes

step4_029

root.sh sample output

step4_030

Finished

step4_031

Where to go now?

  • We just installed all necessary components for creating our first RAC database → this will be the next post
  • Backup the current configuration
Categories: Oracle 11g Release 2 Tags: