Perhaps a little bit late, but this post is a catch-up of interesting talks held at the recent Oracle OpenWorld 2009:
- Database Consolidation Tips by Husnu Sensoy: you can get his presentation from his blog or here as a local mirror
Sometimes administrators want to know how much memory is being used by a program (and not per process). This is especially useful for calculating the memory consumption of an Oracle instance. Today I found a nice script for doing so here (local mirror).
When called on an Oracle database server it produces the following output:
host:~ # python mem.py
 Private  +   Shared  =  RAM used    Program
308.0 KiB +   0.0 KiB = 308.0 KiB    init
 84.0 KiB + 264.0 KiB = 348.0 KiB    zmd-bin
120.0 KiB + 284.0 KiB = 404.0 KiB    irqbalance
116.0 KiB + 404.0 KiB = 520.0 KiB    acpid
140.0 KiB + 400.0 KiB = 540.0 KiB    portmap
216.0 KiB + 428.0 KiB = 644.0 KiB    auditd
104.0 KiB + 544.0 KiB = 648.0 KiB    hald-addon-storage
364.0 KiB + 308.0 KiB = 672.0 KiB    klogd
232.0 KiB + 444.0 KiB = 676.0 KiB    wrapper
108.0 KiB + 572.0 KiB = 680.0 KiB    hald-addon-acpi
380.0 KiB + 304.0 KiB = 684.0 KiB    udevd
212.0 KiB + 508.0 KiB = 720.0 KiB    resmgrd
164.0 KiB + 592.0 KiB = 756.0 KiB    dd
424.0 KiB + 644.0 KiB =   1.0 MiB    slpd
576.0 KiB + 496.0 KiB =   1.0 MiB    dbus-daemon
332.0 KiB + 792.0 KiB =   1.1 MiB    cron (2)
584.0 KiB + 608.0 KiB =   1.2 MiB    mingetty (6)
508.0 KiB + 712.0 KiB =   1.2 MiB    nscd
384.0 KiB + 880.0 KiB =   1.2 MiB    vsftpd
276.0 KiB +   1.0 MiB =   1.3 MiB    sh
344.0 KiB +   1.1 MiB =   1.4 MiB    mysqld_safe
872.0 KiB + 660.0 KiB =   1.5 MiB    syslog-ng
856.0 KiB + 712.0 KiB =   1.5 MiB    ndo2db-3x (2)
820.0 KiB +   1.1 MiB =   1.9 MiB    powersaved
468.0 KiB +   1.6 MiB =   2.1 MiB    pickup
496.0 KiB +   1.7 MiB =   2.1 MiB    master
712.0 KiB +   1.6 MiB =   2.3 MiB    qmgr
  1.0 MiB +   1.3 MiB =   2.4 MiB    bash
720.0 KiB +   1.9 MiB =   2.6 MiB    smtpd
  2.0 MiB +   1.1 MiB =   3.0 MiB    mount.smbfs (2)
  2.3 MiB + 932.0 KiB =   3.2 MiB    hald
  1.4 MiB +   1.9 MiB =   3.3 MiB    sshd (2)
  1.4 MiB +   2.1 MiB =   3.5 MiB    smbd (2)
996.0 KiB +   2.6 MiB =   3.5 MiB    nagios
  3.1 MiB +   1.2 MiB =   4.3 MiB    cupsd
  2.9 MiB +   1.8 MiB =   4.7 MiB    ntpd
  5.7 MiB + 716.0 KiB =   6.4 MiB    perl
  7.7 MiB + 776.0 KiB =   8.5 MiB    named
  6.6 MiB +   5.9 MiB =  12.5 MiB    httpd2-prefork (11)
 11.9 MiB +   1.2 MiB =  13.1 MiB    slapd
  9.7 MiB +   4.4 MiB =  14.1 MiB    emagent
 10.1 MiB +   5.3 MiB =  15.5 MiB    tnslsnr (2)
 13.3 MiB +   4.9 MiB =  18.2 MiB    exp
 22.1 MiB + 516.0 KiB =  22.6 MiB    nmbd
103.8 MiB +   1.2 MiB = 105.0 MiB    mysqld-max
569.7 MiB +   9.4 MiB = 579.1 MiB    java (4)
  1.5 GiB +   3.4 GiB =   4.9 GiB    oracle (169)

Warning: Shared memory is slightly over-estimated by this system
for each program, so totals are not reported.
You can see the running oracle programs sum up to 4.9 GiB of memory usage. Keep in mind that several locally running Oracle instances are counted together here because summarization is done by program name!
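For the curious, the core of such a script is small. The following is only a rough sketch (not the original mem.py, which apportions shared pages more cleverly via Pss) of how per-program memory can be summed up on Linux by parsing /proc/&lt;pid&gt;/smaps; function and variable names are my own:

```python
#!/usr/bin/env python
"""Sketch: sum private and shared RSS per program name on Linux."""
import os
from collections import defaultdict

def human(kib):
    """Format a size given in KiB the way the listing above shows it."""
    for unit in ("KiB", "MiB", "GiB"):
        if kib < 1024.0:
            return "%.1f %s" % (kib, unit)
        kib /= 1024.0
    return "%.1f TiB" % kib

def smaps_private_shared(pid):
    """Return (private_kib, shared_kib) for one process, or None."""
    private = shared = 0
    try:
        with open("/proc/%s/smaps" % pid) as f:
            for line in f:
                if line.startswith("Private_"):
                    private += int(line.split()[1])
                elif line.startswith("Shared_"):
                    shared += int(line.split()[1])
    except (IOError, OSError):
        return None          # process vanished or not readable
    return private, shared

def per_program():
    """Aggregate by program name: name -> [private, shared, count]."""
    totals = defaultdict(lambda: [0, 0, 0])
    for pid in filter(str.isdigit, os.listdir("/proc")):
        mem = smaps_private_shared(pid)
        if mem is None:
            continue
        try:
            with open("/proc/%s/comm" % pid) as f:
                name = f.read().strip()
        except (IOError, OSError):
            continue
        totals[name][0] += mem[0]
        totals[name][1] += mem[1]
        totals[name][2] += 1
    return totals

if __name__ == "__main__" and os.path.exists("/proc/self/smaps"):
    for name, (priv, shr, n) in sorted(per_program().items(),
                                       key=lambda kv: kv[1][0] + kv[1][1]):
        label = "%s (%d)" % (name, n) if n > 1 else name
        print("%10s + %10s = %10s  %s" % (human(priv), human(shr),
                                          human(priv + shr), label))
```

Note that summing Shared_* lines over all processes counts shared pages several times, which is exactly why the script above prints the over-estimation warning and no grand total.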
Installing Oracle Clusterware has become easier from release to release. When installing 10g R1, struggling with SSH equivalence, wrong directory permissions and script bugs was a pain. 10g R2 was better but still quite painful (especially the step from 10.2.0.3.0 to 10.2.0.4.0). With 11g R1 the clusterware installation became fairly robust, and it was simplified even further with 11g R2.
One important part of installing the Oracle Clusterware stack is to ensure that the device names for the OCR and voting disks do not change (called "persistent naming" or "persistent binding"). Starting with 11g R2, OCR and voting disks can also be stored in ASM, which eliminates the need for configuring persistent device names because ASM detects the disks automatically and does not rely on fixed device names.
This post is a small guide on how to configure persistent device names for the OCR and voting disks when installing Oracle Clusterware 10g R2 and above.
The experiences outlined here are based on the white paper "Configuring udev and device mapper for Oracle RAC 10g Release 2 and 11g" from Oracle, available here.
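To give a taste of what the white paper describes: a udev rule for persistent naming typically matches a disk by its SCSI WWID and creates a fixed device name with the right ownership. The rule below is only an illustrative sketch in the older udev syntax of that era; the WWID, device name, owner and group are placeholders, and the exact match keys depend on your udev version, so consult the white paper for your platform:

```
# /etc/udev/rules.d/99-oracle.rules (illustrative only)
# First get the WWID of a candidate partition, e.g.:
#   /sbin/scsi_id -g -u -s /block/sdb
KERNEL=="sd?1", BUS=="scsi", \
  PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", \
  RESULT=="360a98000686f6959684a453333524174", \
  NAME="ocr1", OWNER="root", GROUP="oinstall", MODE="0640"
```

Because the rule keys on the WWID instead of the discovery order, the OCR device gets the same name and permissions on every boot and on every node.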
It has been a while since my last post because I was busy with some projects. One of these projects involved installing a RAC cluster with ASM in normal redundancy mode. My experiences installing this configuration are covered in this article.
The customer requested the installation of a 2-node RAC cluster running a 10g Release 2 database. Storage was attached via Fibre Channel from two EMC AX25 storage systems directly attached to both nodes. The cluster was installed with 11.1.0.7.0 Clusterware and 11.1.0.7.0 ASM; the database to be run was 10.2.0.4.2. By using ASM in normal redundancy mode, both storage arrays were mirrored against each other. Everything worked well – too well. So after installing the whole cluster and setting up the database instance we performed some tests:
The first test was to interrupt the connection (pull the cable!) between storage array A and node A. From my experience with 11g R2 and ASM in normal redundancy mode I expected the database to stay up and running. To my surprise, the database instance on node A crashed. In addition, I was unable to start the instance again. I cannot give the exact error messages because the project has ended and I am not allowed to disclose them. Among several ORA-00600 messages I also saw messages saying the database was unable to write to the control file and to open the redo logs. That was strange because ASM had already started dropping the missing disks from the disk group. Dropping the missing disks was not that easy either. After re-adding the disks to the disk group I tried to delete a single LUN from the operating system, keeping communication, links and so on intact. The result did not change: the database instance on the affected node crashed and I was unable to restart it. After a search on Metalink, which yielded nothing, we decided to give 11.1.0.7.1 a try:
After installing and creating an 11.1.0.7.1 database and increasing the disk group compatibility from 10.2 to 11.1, we tried again to interrupt the connection between one storage array and one node. This time the instance crashed with an I/O error and was terminated by the log writer, but it was restarted fine a few seconds later – a great improvement over 10.2. After some discussion the customer decided to go with 11.1.0.7.1 instead of 10.2.0.4.2, and we continued our tests, which were completed successfully. These tests involved, for instance, interrupting the communication from one storage array to both nodes and re-establishing the connection again (after waiting 15 minutes).
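For reference, this is roughly what such a setup looks like in SQL; all disk group names and device paths here are made up, but the pattern – one failure group per storage array – is what makes normal redundancy mirror every extent across the two arrays, and the ALTER DISKGROUP statements show how the compatibility attributes mentioned above are raised (note that raising them cannot be undone):

```
-- Hypothetical disk group: one failure group per EMC array so ASM
-- keeps a mirror copy of each extent on the other array.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP array_a DISK '/dev/mapper/axa_lun1', '/dev/mapper/axa_lun2'
  FAILGROUP array_b DISK '/dev/mapper/axb_lun1', '/dev/mapper/axb_lun2';

-- Raising the disk group compatibility from 10.2 to 11.1 (irreversible)
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'   = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';
```

With 'compatible.rdbms' still at 10.2 the disk group keeps the 10g behaviour, which is exactly why the tests above only improved after the compatibility was raised.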
My conclusion: ASM in a mirrored configuration can be used from 11g Release 1 onwards and works well with 11g R2 (see my tests with 11g R2 and ASM in normal redundancy configuration). With 10g, ASM should rely on external redundancy.
As always: please comment! I am looking forward to hearing about your experiences with ASM in normal or high redundancy configurations and 10g R2 or 11g R1 databases.
I found the following question in my blog statistics:
“can we change kernel parameter in linux when oracle is running”
The answer is: Yes, of course!
You can alter any kernel parameter as long as it does not require a reboot. If you set a kernel parameter to a lower value than your running database requires, there will be no immediate impact – but there might be problems while your database continues to run (e.g. being unable to start new sessions, or degraded performance). And once your database has been shut down, if you have lowered for instance the SHMMAX parameter, you will not be able to start the instance again.
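On Linux this looks as follows; the SHMMAX value below is just an example (4 GiB), size it to your SGA:

```
# Show the current values
sysctl kernel.shmmax kernel.shmall

# Raise SHMMAX at runtime -- takes effect immediately, no reboot,
# and the running Oracle instance is not disturbed
sysctl -w kernel.shmmax=4294967296

# Make the change survive a reboot as well
echo "kernel.shmmax = 4294967296" >> /etc/sysctl.conf
```

The runtime change via sysctl -w and the entry in /etc/sysctl.conf are independent, which is a classic trap: forget the second step and the parameter silently reverts at the next reboot.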
Therefore you should make sure to increase the values and check twice!
I developed this topic last year while taking the OCP course. The instructor told us about Oracle authentication and, because I was a little bit bored, I played around with it. My goal was to use Oracle external authentication to authenticate against a RADIUS server which in turn authenticates against an Active Directory (or any other LDAP server).
With this technology you can implement centralized password and account management quite easily, although users still have to be created in the database, and roles and permissions still have to be granted to them there. Active Directory (I will use this term throughout the article; as said, you can use any other LDAP server as well) is solely used to:
To use this authentication mechanism the "Oracle Advanced Security Option" is needed.
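In a nutshell, the database-side configuration lives in sqlnet.ora. The parameter names below are the documented RADIUS parameters of Oracle Advanced Security, but all values are placeholders, so check them against your release:

```
# sqlnet.ora on the database server (placeholder values)
SQLNET.AUTHENTICATION_SERVICES   = (RADIUS)
SQLNET.RADIUS_AUTHENTICATION     = radius.example.com
SQLNET.RADIUS_AUTHENTICATION_PORT = 1812
SQLNET.RADIUS_SECRET             = /u01/app/oracle/network/admin/radius.key
```

On the database side the user is then created with CREATE USER jdoe IDENTIFIED EXTERNALLY and granted privileges as usual – which is exactly the point made above: authentication is external, but the account and its grants remain in the database.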
Classic Metalink retired last weekend. The new flash-based Metalink currently faces a lot of problems (to be honest: at the moment [12th November 2009] it is completely unusable). Besides these major errors there are other problems, such as incredibly high CPU demand (on my system the flash player takes 100% CPU when accessing Metalink).
If you are looking for a non-flash alternative, there is an HTML Metalink interface available at https://supporthtml.oracle.com. At the moment the HTML portal is extremely slow and unstable… but for companies which do not allow flash to be installed at all, it is the only solution.
As of today I have successfully passed the exam to become an OCE. The next certification I am working on is "Oracle Database 10g: Real Application Clusters Administrator Certified Expert".
A few months ago I tested yet another syslog implementation: rsyslog. Compared with other available syslog implementations such as syslogd or syslog-ng, rsyslog offers some nice features such as:
So I took the Oracle module for rsyslog and tried to get it working. Documentation did not really exist, so I wrote some, which is now part of the module. Recently I took a look at rsyslogd again and did some tests with rsyslog – especially on how to store syslog messages and log Oracle audit messages in a queryable way in an Oracle database. My experiences are covered in this article.
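To give an idea of the configuration style, rsyslog wires database outputs up as a loadable module plus an action selector. Shown here with the well-documented ommysql module as a stand-in for the general pattern – the Oracle module has its own directives, which are described in its README, and all connection values below are placeholders:

```
# rsyslog.conf fragment (legacy format), ommysql as a stand-in
$ModLoad ommysql
*.* :ommysql:dbserver,syslogdb,sysloguser,syslogpassword
```

The "*.*" selector sends every message to the database action; in practice you would restrict it to the facilities you actually want to query later.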
Disclaimer: All scripts, packages and procedures are released under the GPL. You can use them freely, but at your own risk. However, I would like you to send me the changes you make so I can perhaps improve the components.
As of today (9th November 2009) Oracle 11g Release 2 is available for Solaris SPARC.
You can download it from OTN.
The release includes the database, the client and the grid infrastructure (aka "clusterware") as well.
According to the release schedule, Oracle 11g Release 2 for Solaris INTEL is most likely to be released next. Windows is scheduled for the second quarter of 2010. See my previous post for more information about possible release dates.