
Archive for October, 2009

How to copy schema stats from one schema to another schema

October 30th, 2009 7 comments

The following post is a short description of how to copy schema statistics from one schema to another. Basically, there are two methods for doing this:

  • using DBMS_STATS or
  • using Data Pump

This is especially useful when doing large data transfers between schemas or databases and statistics are needed fast. Keep in mind, though, that data density, row chaining and the index clustering factor may have changed due to the data import and index rebuilds.
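For the DBMS_STATS variant, a minimal sketch could look like the following (SOURCE_USER, TARGET_USER and STATS_TAB are placeholder names; updating column C5, which holds the schema owner inside the statistics table, is the usual way to retarget exported statistics to another schema):

-- 1) Create a statistics table and export the source schema statistics into it.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SOURCE_USER', stattab => 'STATS_TAB');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SOURCE_USER',
                                 stattab => 'STATS_TAB',
                                 statown => 'SOURCE_USER');
END;
/

-- 2) Point the exported statistics at the target schema
--    (column C5 of the statistics table holds the owner).
UPDATE source_user.stats_tab SET c5 = 'TARGET_USER' WHERE c5 = 'SOURCE_USER';
COMMIT;

-- 3) Import the statistics into the target schema.
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'TARGET_USER',
                                 stattab => 'STATS_TAB',
                                 statown => 'SOURCE_USER');
END;
/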

Read more…

Categories: Oracle in general

Calculate required kernel parameters for running Oracle 11g Release 2 on Linux

October 28th, 2009 3 comments

The following is a short guide on how to calculate the required kernel parameters for running Oracle 11g Release 2 database on Linux.

In addition to that, I add some parameters that I recommend myself.

Note that for running Oracle Grid Infrastructure (aka “Clusterware”) some additional parameters might be required.

Read more…

ASM resilvering – or – how to recover your crashed cluster

October 27th, 2009 1 comment

In this and the following posts I will perform some crash and recovery scenarios and show how to recover the cluster successfully.

At the moment several tests are planned and will be published over the next few days.

The environment used for these posts is explained in detail here.

Useful scripts can be found here.

ASM resilvering – or – how to recover your crashed cluster – Test no 5

October 27th, 2009 1 comment

In this post we will reproduce a more common scenario: we will forcefully remove the ASM disk device from the operating system. This simulates common errors like a crashed storage system or some kind of interrupted connectivity (cable unplugged, power failure, …).
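Once the device has been removed, a quick way to see how ASM perceives the failure is to query V$ASM_DISK from the ASM instance; the following is a generic check, not specific to this test environment:

-- Show disk group, path and status of every disk known to ASM.
SELECT dg.name AS diskgroup,
       d.path,
       d.mount_status,
       d.header_status,
       d.mode_status,
       d.state
FROM   v$asm_disk d, v$asm_diskgroup dg
WHERE  d.group_number = dg.group_number (+)
ORDER  BY dg.name, d.path;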

All tests are available here.

Read more…

Categories: Oracle in general

New Server

October 26th, 2009 No comments

As you might have noticed, the blog has moved to a new server.

This is a dedicated server, so I am able to install plugins, modify the layout and do much more (like hosting files other than images and documents).

The work required to set up and configure the new server and move the blog kept me from posting. Now that it is finished, I will start posting again.

If you experience problems, please write me an email or leave a comment.

Categories: Uncategorized

Backing up your system configuration

October 21st, 2009 No comments

When backing up servers with any kind of backup software, you will most certainly back up your data.

But how about the system configuration?

When restoring a system from scratch, do you know how big the disks were, what the partition sizes were, which disks were used (in case you have more than one disk), whether you used LVM, and so on?

Think about it!

In this article I want to summarize what system information should be collected to enable a fast and error-free restore.

Update: There is an article regarding this topic here.

Feel free to add your own comments and suggestions.

Read more…

Categories: Oracle in general

Oracle on Linux – yes of course – but which Linux?

October 19th, 2009 No comments

There is a discussion from December 2008 by Yann Neuhaus about which Linux distribution (SLES vs. Red Hat vs. Oracle Enterprise Linux) to use for running Oracle on Linux. You can read his post here.

After nearly one year I wanted to revisit the article and check whether the pros and cons are still valid or whether anything has changed.

Read more…

Categories: Oracle in general

ORA-01555 (snapshot too old) error although undo_retention set to a high value

October 17th, 2009 No comments

Last week I had a discussion with a colleague of mine about the behavior of the parameter UNDO_RETENTION in 10g.

He stated that in 10g UNDO_RETENTION determines the minimum number of seconds Oracle keeps undo records, and that if there is enough space available, an ORA-01555 should not be raised.

I knew there are cases in which UNDO_RETENTION is set to a ridiculously high value (for instance a week) and the undo tablespace has plenty of free space, but an ORA-01555 is raised nonetheless. So we took a look at the documentation:

For Oracle 10g Release 1 the documentation states:

UNDO_RETENTION specifies (in seconds) the low threshold value of undo retention. The system retains undo for at least the time specified in this parameter and automatically tunes the undo retention period to satisfy the undo requirements of the queries.

But for Oracle 10g Release 2 the documentation says something different:

UNDO_RETENTION specifies (in seconds) the low threshold value of undo retention. For AUTOEXTEND undo tablespaces, the system retains undo for at least the time specified in this parameter, and automatically tunes the undo retention period to satisfy the undo requirements of the queries. For fixed-size undo tablespaces, the system automatically tunes for the maximum possible undo retention period, based on undo tablespace size and usage history, and ignores UNDO_RETENTION unless retention guarantee is enabled.

In Oracle 11g Release 1 and Oracle 11g Release 2 the behavior is the same as for Oracle 10g Release 2.
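To see which of these cases applies on a given system, one can check whether the undo tablespace is autoextensible, whether retention guarantee is enabled and what retention the instance actually tunes, for example:

-- Is retention guarantee enabled for the undo tablespace?
SELECT tablespace_name, retention
FROM   dba_tablespaces
WHERE  contents = 'UNDO';

-- Are the undo data files autoextensible (AUTOEXTEND vs. fixed-size case)?
SELECT f.tablespace_name, f.file_name, f.autoextensible
FROM   dba_data_files f, dba_tablespaces t
WHERE  t.tablespace_name = f.tablespace_name
AND    t.contents = 'UNDO';

-- What undo retention (in seconds) has the instance actually tuned?
SELECT MAX(tuned_undoretention) AS max_tuned_undoretention
FROM   v$undostat;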

Categories: Oracle in general

11g Release 2 release rumors

October 16th, 2009 No comments

There are rumors about the release dates of 11.2.0.1.0 for different platforms. According to my information, the release dates are:

  • Windows: second quarter 2010
  • Linux: already released (32-bit and 64-bit Intel)
  • HP-UX: 4th quarter 2009
  • Solaris: 4th quarter 2009
  • AIX 5L based: 4th quarter 2009

I personally expect the release of 11g Release 2 for the Solaris operating system pretty soon.

Categories: Oracle in general

Data Pump falsely reports the number of imported rows as 1 (one)

October 16th, 2009 1 comment

Yesterday I did a large import using Data Pump and the network mode feature. I noticed that a table containing a large number of rows was reported by Data Pump as imported, but with only one (1) row.

First I checked my import options, checked for errors in the source and target databases, and even cleaned up everything and restarted the import. Surprise, surprise: the same error again.

The environment was:

  • Oracle Enterprise Edition 10.2.0.4.0
  • Solaris (SPARC) 10
  • Data Pump Import with NETWORK_LINK (network mode)
  • excluding STATISTICS, GRANTS and ROLE_GRANTS

Extract from the Data Pump log file:

Processing object type SCHEMA_EXPORT/CLUSTER/CLUSTER
 Processing object type SCHEMA_EXPORT/CLUSTER/INDEX
 Processing object type SCHEMA_EXPORT/TABLE/TABLE
 . . imported "USER"."TABLE_A"                 55082451 rows
 . . imported "USER"."SOME_LARGE_TABLE"       307126916 rows
 . . imported "USER"."ANOTHER_TABLE"          176936257 rows
 . . imported "USER"."JUST_ANOTHER_TABLE"     215682029 rows
 . . imported "USER"."HUGE_TABLE"                     1 rows        <====
 . . imported "USER"."AND_SO_ON"              133356302 rows

So I started to dig a little deeper and noticed that the source table has approximately 62 million rows with a size of 28 GB. To my surprise, the target database also reported the table as 28 GB in DBA_SEGMENTS.

So I ran a simple query:

SQL> select count(1) from user.huge_table;

  COUNT(1)
----------
  62485290

Guess what? The table was imported correctly with all rows, while Data Pump reported only one imported row.

I checked Metalink for related errors but found nothing.

I can only guess whether this is a bug or whether there is some time limit when querying the rows of the just-imported tables.
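In any case, the imported data itself can be cross-checked against the source over the same database link the network import used (schema, table and link names below are placeholders):

-- Compare row counts in target and source; source_db_link stands for the
-- database link that was used for the NETWORK_LINK import.
SELECT (SELECT COUNT(*) FROM "USER"."HUGE_TABLE")                AS target_rows,
       (SELECT COUNT(*) FROM "USER"."HUGE_TABLE"@source_db_link) AS source_rows
FROM   dual;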

Update: I did some research and created a really large table with 1 billion rows. For this table Data Pump showed:

. . imported "USER"."TABLE_WITH_1_BILLION_ROWS"             -1416726547 rows

There might be some kind of counter overflow… I will investigate this further.

Categories: Oracle in general