New Server

As you might have noticed, the blog has moved to a new server.

This is a dedicated server, so I am able to install plugins, modify the layout, and do much more (like hosting files other than images and documents).

The work required to set up and configure the new server and move the blog kept me from posting. Now that it is finished, I will start posting again.

If you experience problems please write me an email or leave a comment.

Posted in Uncategorized | Leave a comment

Backing up your system configuration

When backing up servers with any kind of backup software, you will most certainly back up your data.

But how about the system configuration?

When restoring a system from scratch, do you know how big the disks were, what the partition sizes were, which disks were used (in case you have more than one), and whether you used LVM?

Think about it!

In this article I want to summarize what system information should be collected to enable a fast and error-free restore.
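As a starting point, the kind of inventory I have in mind could be collected with a small script like the following. This is a minimal sketch, not a complete solution: command availability varies by distribution, the script name and output file are my own invention, and the LVM commands need root to show anything useful.

```shell
#!/bin/sh
# sysconfig_backup.sh -- record disk, partition and filesystem layout
# so that a restore from scratch knows what to rebuild (minimal sketch)
OUT="sysconfig_$(uname -n).txt"
{
  echo "== partitions (/proc/partitions) =="
  cat /proc/partitions
  echo "== mounted filesystems =="
  mount
  echo "== filesystem usage =="
  df -P
  echo "== /etc/fstab =="
  [ -r /etc/fstab ] && cat /etc/fstab
  echo "== LVM layout (needs root; empty if no LVM) =="
  command -v vgdisplay >/dev/null 2>&1 && vgdisplay -v 2>/dev/null
} > "$OUT"
echo "wrote $OUT"
```

Run something like this regularly from cron and include the output file in your normal backup set, so the layout information is restored together with the data.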

Update: There is an article regarding this topic here.

Feel free to add your own comments and suggestions.

Continue reading

Posted in Oracle in general | Leave a comment

Oracle on Linux – yes, of course – but which Linux?

There is a discussion from December 2008 by Yann Neuhaus about which Linux (SLES vs. Red Hat vs. Oracle Enterprise Linux) to use for running Oracle on Linux. You can read his post here.

After nearly one year, I wanted to revisit the article and check whether the pros and cons are still valid or whether anything has changed.

Continue reading

Posted in Oracle in general | 2 Comments

ORA-01555 (snapshot too old) error although undo_retention set to a high value

The past week I had an argument with a colleague of mine about the behavior of the undo_retention parameter in 10g.

He stated that in 10g, undo_retention determines the minimum number of seconds Oracle keeps undo records. If there is enough space available, an ORA-01555 should not be raised.

I knew there are cases in which undo_retention is set to a ridiculously high value (for instance, a week) and the undo tablespace has plenty of free space, but an ORA-01555 is raised nonetheless. So we took a look at the documentation:

For Oracle 10g Release 1 the documentation states:

UNDO_RETENTION specifies (in seconds) the low threshold value of undo retention. The system retains undo for at least the time specified in this parameter and automatically tunes the undo retention period to satisfy the undo requirements of the queries.

But the Oracle 10g Release 2 documentation says something different:

UNDO_RETENTION specifies (in seconds) the low threshold value of undo retention. For AUTOEXTEND undo tablespaces, the system retains undo for at least the time specified in this parameter, and automatically tunes the undo retention period to satisfy the undo requirements of the queries. For fixed-size undo tablespaces, the system automatically tunes for the maximum possible undo retention period, based on undo tablespace size and usage history, and ignores UNDO_RETENTION unless retention guarantee is enabled.

In Oracle 11g Release 1 and Oracle 11g Release 2 the behavior is the same as for Oracle 10g Release 2.
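In practice this means that on a fixed-size undo tablespace, the only way to make Oracle honor UNDO_RETENTION is to enable the retention guarantee (at the price of possible ORA-30036 errors for DML when undo space runs out). A sketch, where the tablespace name is just an example:

```sql
-- force the undo tablespace to honor UNDO_RETENTION
-- (tablespace name is an example)
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- check the current setting
SELECT tablespace_name, retention
  FROM dba_tablespaces
 WHERE contents = 'UNDO';
```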

Posted in Oracle in general | Leave a comment

11g Release 2 release rumors

There are rumors about the release dates of 11.2.0.1.0 for different platforms. According to my information, the release dates are:

  • Windows: second quarter 2010
  • Linux: already released (32-bit and 64-bit Intel)
  • HP-UX: 4th quarter 2009
  • Solaris: 4th quarter 2009
  • AIX 5L based: 4th quarter 2009

I personally expect the release of 11g Release 2 for the Solaris operating system pretty soon.

Posted in Oracle in general | 1 Comment

Data Pump falsely reports the number of imported rows as 1 (one)

Yesterday I did a large import using Data Pump and the network mode feature. I noticed that a table containing a large number of rows was reported by Data Pump as imported, but with only one (1) row.

First I checked my import options, checked for errors in the source and target database, and even cleaned everything up and restarted the import. Surprise, surprise: the same error again.

The environment was:

  • Oracle Enterprise Edition 10.2.0.4.0
  • Solaris (SPARC) 10
  • Data Pump import over a network link (NETWORK_LINK)
  • excluding STATISTICS, GRANTS and ROLE_GRANTS

Extract from the Data Pump log file:

Processing object type SCHEMA_EXPORT/CLUSTER/CLUSTER
Processing object type SCHEMA_EXPORT/CLUSTER/INDEX
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "USER"."TABLE_A"                 55082451 rows
. . imported "USER"."SOME_LARGE_TABLE"       307126916 rows
. . imported "USER"."ANOTHER_TABLE"          176936257 rows
. . imported "USER"."JUST_ANOTHER_TABLE"     215682029 rows
. . imported "USER"."HUGE_TABLE"                     1 rows        <====
. . imported "USER"."AND_SO_ON"              133356302 rows

So I started to dig a little deeper and noticed the source table has approximately 62 million rows with a size of 28 GB. To my surprise, the target database also reported the table as 28 GB in DBA_SEGMENTS.

So i did a simple:

SQL> select count(1) from user.huge_table;

  COUNT(1)
----------
  62485290

Guess what? The table was imported correctly with all rows, while Data Pump reported only one imported row.

I checked Metalink for related errors but found nothing.

I can only guess whether this is a bug or whether there is some limit when counting the rows of just-imported tables.

Update: I did some research and created a really large table with 1 billion rows. Data Pump showed:

. . imported "USER"."TABLE_WITH_1_BILLION_ROWS"             -1416726547 rows

There might be some kind of counter overflow… I will investigate this further.
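The negative value is at least consistent with a signed 32-bit counter wrapping around: interpreted as an unsigned 32-bit number, -1416726547 corresponds to 2878240749. That does not match the 1 billion rows directly, so if the theory holds, the overflowing counter must track something other than plain rows; this is pure speculation on my part. The reinterpretation itself is easy to check:

```shell
# reinterpret the reported (negative) value as an unsigned 32-bit counter
reported=-1416726547
unsigned=$(( reported + 4294967296 ))   # add 2^32
echo "$unsigned"
```

This prints 2878240749, the unsigned 32-bit equivalent of the value shown in the log.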

Posted in Oracle in general | 1 Comment

New public oracle yum server

I just found a new public yum server hosted by Oracle. It can be used to install missing packages over the internet. It does not contain any security updates or bug fixes.

The public yum server can be found here.
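Using it boils down to dropping a repo definition into /etc/yum.repos.d; the exact file for your release is linked on the server's start page. The following is only an illustrative sketch (repo id and baseurl are examples, not copied from the real file):

```shell
# /etc/yum.repos.d/public-yum-el5.repo (illustrative sketch;
# download the real file from the public yum server page)
[el5_latest]
name=Enterprise Linux 5 - latest packages
baseurl=http://public-yum.oracle.com/repo/EnterpriseLinux/EL5/latest/$basearch/
enabled=1
gpgcheck=0
```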

For larger installations you can run your own yum server.

Posted in Oracle in general | 3 Comments

Configuring a small DNS server for SCAN

A few users asked what to do if no DNS server is available for configuring the SCAN name when installing Oracle Grid Infrastructure.

Most asked if they can use hosts file entries. The short answer is: no. The Grid Infrastructure will install fine, but the Cluster Verification Utility will fail.

So my suggestion is as follows: create your own small DNS server on your RAC nodes. This is quick and easy and is described in this article.
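To give the general idea right away: one lightweight option is dnsmasq, which answers DNS queries straight from /etc/hosts entries and returns all addresses registered for a name, which is exactly what the SCAN needs. A rough sketch, where all names and addresses are examples (the article describes its own setup):

```shell
# on the node acting as DNS server: register three addresses
# for the SCAN name (dnsmasq answers with all of them)
cat >> /etc/hosts <<'EOF'
192.168.1.51  rac-scan.example.com
192.168.1.52  rac-scan.example.com
192.168.1.53  rac-scan.example.com
EOF
service dnsmasq restart

# on every cluster node: resolve against that server first
echo "nameserver 192.168.1.10" > /etc/resolv.conf
```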

Continue reading

Posted in Oracle in general | 28 Comments

ASM resilvering – or – how to recover your crashed cluster – Test no 4

Test #4: Corrupting the ASM disk with ASM disk group being online and active

After overwriting the ASM disk header while the disk group was offline, we will now put some load on the fully running cluster and corrupt the ASM disk slightly.

Continue reading

Posted in Oracle 11g Release 2, Oracle ASM | 3 Comments

The Oracle Database File System (DBFS)

The following post is a summary of my tests with DBFS on an 11g Release 2 RAC.

DBFS – An Overview

Since the introduction of LOBs, the database has been used to store all kinds of files, for instance images, audio files, video files, or even text files. Oracle 11g Release 1 introduced SecureFiles, which added compression and de-duplication.

Oracle Database File System (DBFS) creates a mountable file system which can be used to access files stored in the database as SecureFiles LOBs. DBFS is a shared file system like NFS and consists of a server (the database) and a client. Just as in a traditional database connection, server and client can be on different systems and communicate over SQL*Net; the same applies to DBFS.
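To give an impression of how this looks in practice, here is a rough sketch of creating and mounting a DBFS store. The user, password, tablespace, store name, and mount point are examples of my own; the creation script ships with the database in $ORACLE_HOME/rdbms/admin.

```shell
# create a DBFS filesystem inside the database
# (user, tablespace and store name are examples)
sqlplus dbfs_user/dbfs_pwd@ORCL \
  @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql dbfs_ts staging_area

# mount it on the client through FUSE (prompts for the database password)
mkdir -p /mnt/dbfs
dbfs_client dbfs_user@ORCL /mnt/dbfs
```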

The following figure illustrates DBFS:

[Figure: DBFS architecture]

(Source: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10645/adlob_fs.htm)

Continue reading

Posted in Oracle 11g Release 2, Oracle in general | 6 Comments