Using Data Pump's network mode feature to export and import data

Data Pump is a great improvement over the old-fashioned export and import utilities. It has been available since Oracle 10g Release 1 and offers some nice new features, for instance an API that can be used to start exports and imports from within the database. Besides these features there is one major reason to use Data Pump: SPEED. Data Pump is faster than traditional export/import by a large factor.

A more comprehensive description of Data Pump can be found here or here. The documentation for version 11g Release 2 is available here.

One of the downsides of Data Pump, especially for Unix DBAs, is that Data Pump in Oracle 10g is unable to compress the data (it does compress metadata, but the savings are negligible) and is also unable to use pipes (which can be used to compress data when the software itself does not support it). Starting with 11g Release 1 the ability to compress data was added, but it requires the expensive "Advanced Compression" license.

Another useful new Data Pump feature is the so-called "network mode", which transfers the data to be imported over a database link instead of exporting the data into a file, transferring the file(s) to the destination server and importing the export file(s) there.

This article is a short documentation of how to use Data Pump's network mode feature for importing data into remote (source and destination database are different) or local (source and destination database are the same) database instances. For general information on how to use Data Pump, or for an introduction, refer to the Oracle documentation.

Preparations

Database connection

Before starting any work you need a working database connection from the DESTINATION to the SOURCE database.
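
A quick way to verify the connection is an SQL*Plus login from the destination server against the source database (the TNS alias "ora11p" and the credentials follow the examples used later in this article and have to be adapted to your setup):

bash# sqlplus system/manager@ora11p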

Creating a database directory to work with

Unlike the old export/import, which has a client/server architecture, Data Pump runs solely in the database. The client being used is just a frontend that issues calls to the Data Pump API and displays the status. In order to write files on the database server (in network mode this is only the log file), Data Pump requires at least one so-called "directory" object to be created.

You can query the current directories with:

select * from dba_directories;

To create an additional directory:

create directory <name> as '<path>';

for instance:

create directory dpump_dir as '/u01/export';
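
If a non-privileged user is going to run Data Pump jobs, that user additionally needs read and write privileges on the directory; for example, using the TEST user from the examples below:

grant read, write on directory dpump_dir to test;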

Creating the database link (in destination system schema)

Before importing the data over a network link you need to create a database link in the DESTINATION database first. The link can be created either in the destination user's schema (the user needs to be created beforehand) or in the SYSTEM schema. If the link is created in the SYSTEM schema, the destination user will be created automatically with the same grants during the import.

For creating the link in the SYSTEM schema:

create database link source_link connect to system identified by manager using 'ora11p';

The database connection string for the database “ora11p” is defined in the local TNSNAMES.ORA.
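
For reference, a minimal TNSNAMES.ORA entry for "ora11p" might look like the following sketch; host, port and service name are placeholders that have to match your environment:

ora11p =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sourcehost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ora11p))
  )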

Creating the database link (in destination user schema)

To create the database link in the destination user's schema you need to create the user first. Once that is done, create the link with:

create database link source_link_test connect to test identified by test using 'ora11p';
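
Regardless of which schema holds the link, a simple query over it verifies that the link works before you start the import:

select * from dual@source_link_test;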

Caveats when mixing privileged and non-privileged users

There is one caveat when mixing privileged users (e.g. DBA users) with non-privileged users: Data Pump will complain about it:

ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

You need to look out for database links created in the SYS or SYSTEM schema pointing to a non-DBA user in the remote database, or vice versa.
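
To spot such problematic links, list all database links on the destination database together with the remote user they connect as:

select owner, db_link, username, host from dba_db_links;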

For example, the following link created in the SYSTEM schema won't work because SYSTEM is a DBA user and TEST is not:

create database link source_link_test connect to test identified by test using 'ora11p';

Importing the data

Basic example

The following example imports schema "TEST" into "TEST2" using the database link named "SOURCE_LINK" created above, and creates the user TEST2 if it does not exist:

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test

Import: Release 11.1.0.7.0 - 64bit Production on Friday, 30 October, 2009 14:33:24

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "TEST2"."TAB1"                                  12 rows
. . imported "TEST2"."TAB2"                                  12 rows
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 14:33:45

Switches explained

The following switches enhance Data Pump's abilities and can be used in conjunction with each other:

remap_schema

“REMAP_SCHEMA=TEST:TEST2” is identical to the former “fromuser=test touser=test2” in the old export/import utility.

remap_tablespace

The switch “REMAP_TABLESPACE=<OLD>:<NEW>” can be used to create the objects in an alternative tablespace at import time. With traditional export/import this was not directly possible.
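
For example, to additionally move all objects from tablespace USERS to tablespace DATA during the import from the basic example (the tablespace names are placeholders):

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test remap_tablespace=users:data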

parallel

The switch “PARALLEL=<number>” defines the degree of parallelism Data Pump will work with. The higher the value, the more parallel processes Data Pump will use.
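
For instance, to run the import from the basic example with four parallel processes:

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test parallel=4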

flashback_scn

“FLASHBACK_SCN” takes an SCN of the source database instance (“select current_scn from v$database”) to which the imported data is kept consistent during the import. Make sure the UNDO tablespace of the source database is sized appropriately.
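
A short sketch; the SCN value shown is an example and has to be obtained from the source database right before the import:

-- run on the SOURCE database
select current_scn from v$database;

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test flashback_scn=1234567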

flashback_time

Instead of an SCN, this parameter takes a time string specifying the point in time to which the imported data is kept consistent.
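
Because the time string contains quotes and spaces it is usually easiest to put this parameter into a parameter file (impdp ... parfile=<file>); an example entry with a placeholder timestamp:

FLASHBACK_TIME="TO_TIMESTAMP('30-10-2009 14:00:00', 'DD-MM-YYYY HH24:MI:SS')"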

content

This switch determines whether data and metadata (content=all; the default), only the metadata (content=metadata_only) or only the data (content=data_only) should be imported.
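
For example, to transfer only the object definitions without any table data:

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test content=metadata_only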

include

Explicitly includes objects to be imported. For instance, suppose you need to transfer only the triggers from schema TEST to schema TEST2; you could use:

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test include=trigger

The list of available objects to be included or excluded can be obtained from the following views:

  • DATABASE_EXPORT_OBJECTS for full mode,
  • SCHEMA_EXPORT_OBJECTS for schema mode, and
  • TABLE_EXPORT_OBJECTS for table and tablespace mode.
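
For example, to look up the exact object path for triggers in schema mode:

select object_path, comments from schema_export_objects where object_path like '%TRIGGER%';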

exclude

This switch explicitly excludes objects from the import. Taking the example above, suppose you want to transfer all the content from schema TEST to schema TEST2 but exclude all triggers:

bash# impdp system/manager logfile=data_pump_dir:import_test2.log network_link=source_link remap_schema=test:test2 schemas=test exclude=trigger
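
EXCLUDE (and INCLUDE) also accept an optional name clause to filter individual objects. Because of the shell quoting involved this is best placed in a parameter file; for instance, to exclude only the table TAB1 from the example schema:

EXCLUDE=TABLE:"IN ('TAB1')"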