
How to Import an Oracle Schema with Data Pump

Oracle Data Pump (impdp) imports logical exports created by expdp. Beyond a straight restore, it supports remapping schemas and tablespaces — making it the primary tool for database migrations, cloning environments, and selective data recovery. This guide covers all common import scenarios.

You need a dump file created by expdp and a directory object pointing to the location containing that file.

-- Verify the directory object exists and points to the dump location
SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name = 'DP_DIR';
-- Grant access if importing as a non-DBA user (imp_user is a placeholder)
GRANT READ, WRITE ON DIRECTORY dp_dir TO imp_user;
-- Check the target tablespace exists and has space
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024/1024, 2) AS total_gb
FROM dba_data_files
GROUP BY tablespace_name
ORDER BY tablespace_name;

The importing user needs the IMP_FULL_DATABASE role for full database imports or for importing into a schema other than their own; importing into their own schema requires only the usual object-creation privileges plus read/write access on the directory object.
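As a concrete sketch (the user name imp_user and the password are placeholders, not from the original examples), the grants for a dedicated import user might look like this:

```sql
-- Placeholder user; substitute your own name and a real password
CREATE USER imp_user IDENTIFIED BY "ChangeMe_123";
GRANT CREATE SESSION TO imp_user;

-- Needed for full imports or importing into other schemas
GRANT IMP_FULL_DATABASE TO imp_user;

-- Read the dump file and write the import log
GRANT READ, WRITE ON DIRECTORY dp_dir TO imp_user;
```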

Step 1: Basic Schema Import

# Import a schema directly (same schema name, same tablespace names)
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
PARALLEL=4

Step 2: Remap Schema (Import to Different Schema Name)


This is the most common migration pattern — import source schema as a different name on the target.

# Remap source schema to a different target schema
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
REMAP_SCHEMA=app_schema:app_schema_dev \
PARALLEL=4
# Multiple schema remaps in one import
impdp system/manager@ORCL \
FULL=Y \
DIRECTORY=dp_dir \
DUMPFILE=full_db_%U.dmp \
LOGFILE=full_db_imp.log \
REMAP_SCHEMA=prod_schema1:dev_schema1 \
REMAP_SCHEMA=prod_schema2:dev_schema2 \
PARALLEL=8

Step 3: Remap Tablespaces

When the target database has different tablespace names, use REMAP_TABLESPACE.

# Remap all objects from source tablespace to target tablespace
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
REMAP_TABLESPACE=prod_data:dev_data \
REMAP_TABLESPACE=prod_idx:dev_idx \
PARALLEL=4

Step 4: Handle Existing Objects with TABLE_EXISTS_ACTION

When objects already exist in the target, control how conflicts are resolved with TABLE_EXISTS_ACTION.

# SKIP — leave existing objects unchanged (default)
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
TABLE_EXISTS_ACTION=SKIP
# REPLACE — drop and recreate existing tables with imported data
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
TABLE_EXISTS_ACTION=REPLACE
# APPEND — keep existing rows, add imported rows
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
TABLE_EXISTS_ACTION=APPEND
# TRUNCATE — delete existing rows, then import
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_imp.log \
TABLE_EXISTS_ACTION=TRUNCATE
Step 5: Import Specific Tables

To restore just a few tables from a schema or full dump, use table mode with the TABLES parameter.

# Import only specific tables from a full or schema dump
impdp system/manager@ORCL \
TABLES=app_schema.orders,app_schema.customers \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=tables_imp.log \
TABLE_EXISTS_ACTION=REPLACE

Step 6: Extract DDL Without Importing Data


Use SQLFILE to generate the DDL statements without loading any data — ideal for reviewing what will be imported.

# Extract DDL only to a SQL file
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
SQLFILE=dp_dir:app_schema_ddl.sql
# The SQLFILE output can be reviewed and run manually
# cat /u01/datapump/exports/app_schema_ddl.sql
Step 7: Full Database Import

For a full clone onto a different server, combine FULL=Y with tablespace remaps, and exclude statistics so you can gather fresh ones on the target.

# Full database import to a different server/database
impdp system/manager@TARGET_DB \
FULL=Y \
DIRECTORY=dp_dir \
DUMPFILE=full_db_%U.dmp \
LOGFILE=full_db_imp.log \
REMAP_TABLESPACE=prod_data:dev_data \
REMAP_TABLESPACE=prod_idx:dev_idx \
EXCLUDE=STATISTICS \
PARALLEL=8
Step 8: Metadata-Only Import

To copy object definitions without any rows, use CONTENT=METADATA_ONLY.

# Import structure only — tables, indexes, procedures, views, no rows
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_meta_imp.log \
CONTENT=METADATA_ONLY \
PARALLEL=4
Step 9: Data-Only Import

To load rows into tables that already exist, use CONTENT=DATA_ONLY together with TABLE_EXISTS_ACTION=APPEND.

# Import rows into existing tables, skip all DDL
impdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_data_imp.log \
CONTENT=DATA_ONLY \
TABLE_EXISTS_ACTION=APPEND \
PARALLEL=4
Step 10: Filter Rows with QUERY

QUERY applies a WHERE clause during import so only matching rows are loaded. Note the shell escaping needed to pass the quoted clause on the command line.

# Import only rows matching a condition
impdp system/manager@ORCL \
TABLES=app_schema.orders \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=orders_imp.log \
QUERY=app_schema.orders:'"WHERE status = '"'"'COMPLETED'"'"'"'
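The shell quoting above works but is hard to read and easy to break. A parameter file sidesteps the escaping entirely; a minimal sketch (the parfile path is illustrative):

```shell
# Write the Data Pump parameters to a parfile -- no shell escaping needed,
# because impdp reads the file directly rather than the shell.
cat > /tmp/orders_imp.par <<'EOF'
TABLES=app_schema.orders
DIRECTORY=dp_dir
DUMPFILE=app_schema_%U.dmp
LOGFILE=orders_imp.log
QUERY=app_schema.orders:"WHERE status = 'COMPLETED'"
EOF

# Run the import with the parfile (requires an Oracle client):
# impdp system/manager@ORCL PARFILE=/tmp/orders_imp.par
cat /tmp/orders_imp.par
```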

Step 11: Network-Mode Import

Network mode imports directly from a source database over a database link — no dump file needed.

-- Create a database link to the source (on target database)
CREATE DATABASE LINK source_db
CONNECT TO system IDENTIFIED BY password
USING 'SOURCE_TNSNAME';
# Import directly from source database via database link
impdp system/manager@TARGET_DB \
SCHEMAS=app_schema \
NETWORK_LINK=source_db \
REMAP_SCHEMA=app_schema:app_schema_copy \
LOGFILE=dp_dir:network_import.log \
PARALLEL=4

Common Pitfalls

Importing into the wrong container — In a CDB, always specify the PDB in the connect string: impdp system@pdb_dev, not impdp system@cdb_root.
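If you are unsure which containers exist, you can check from a privileged session in the root before choosing a connect string (a quick sketch):

```sql
-- List pluggable databases and whether they are open
SELECT name, open_mode FROM v$pdbs;

-- In SQL*Plus, confirm which container the current session is in
SHOW CON_NAME
```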

Missing user or tablespace on target — If the schema or tablespace in the dump does not exist on target, import fails. Create the user and tablespace first, or use REMAP_SCHEMA and REMAP_TABLESPACE.
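A sketch of pre-creating the target objects (the names, datafile path, and sizes are illustrative; a privileged import can often create the user from the dump itself):

```sql
-- Pre-create the tablespace the dump expects (or use REMAP_TABLESPACE)
CREATE TABLESPACE dev_data
  DATAFILE '/u01/oradata/ORCL/dev_data01.dbf' SIZE 1G AUTOEXTEND ON;

-- Pre-create the schema owner so ownership and grants resolve
CREATE USER app_schema_dev IDENTIFIED BY "ChangeMe_123"
  DEFAULT TABLESPACE dev_data QUOTA UNLIMITED ON dev_data;
GRANT CREATE SESSION, CREATE TABLE TO app_schema_dev;
```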

Not excluding statistics — Importing stale production statistics into a development database causes the optimizer to make poor choices. Use EXCLUDE=STATISTICS and gather fresh stats.
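After an import with EXCLUDE=STATISTICS, gather fresh statistics on the target, for example:

```sql
-- Gather schema-level optimizer statistics (schema name is illustrative)
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_SCHEMA', degree => 4);
```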

Forgetting TABLE_EXISTS_ACTION — The default is SKIP, which silently ignores tables that already exist. If you expect to replace data, you must explicitly set REPLACE, TRUNCATE, or APPEND.

Import fails on constraint violations with APPEND — When appending rows, foreign key constraints may fail if parent rows do not exist. Import in dependency order or defer constraints temporarily.
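One way to work around ordering problems is to disable the schema's foreign keys before an APPEND import and re-enable them afterwards. A sketch, assuming the APP_SCHEMA owner from the earlier examples:

```sql
-- Disable all foreign key (type 'R') constraints in the schema
BEGIN
  FOR c IN (SELECT table_name, constraint_name
              FROM dba_constraints
             WHERE owner = 'APP_SCHEMA'
               AND constraint_type = 'R') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE app_schema."' || c.table_name ||
                      '" DISABLE CONSTRAINT "' || c.constraint_name || '"';
  END LOOP;
END;
/

-- Run impdp with TABLE_EXISTS_ACTION=APPEND, then repeat with ENABLE
```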

Not checking the log file — Data Pump considers an import successful even if some objects failed. Always review the log file for ORA- errors.

Verify the Import

-- Check the import job completed (look for "successfully completed")
-- grep -i 'completed\|error\|ORA-' /u01/datapump/exports/app_schema_imp.log
-- Verify tables were imported
SELECT table_name, num_rows, last_analyzed, status
FROM dba_tables
WHERE owner = 'APP_SCHEMA'
ORDER BY table_name;
-- Verify row counts match expectations
SELECT table_name,
       TO_NUMBER(EXTRACTVALUE(DBMS_XMLGEN.GETXMLTYPE(
           'SELECT COUNT(*) c FROM app_schema.' || table_name), '//c')) AS row_count
FROM dba_tables
WHERE owner = 'APP_SCHEMA'
  AND num_rows > 0
ORDER BY table_name;
-- Check for invalid objects after import
SELECT object_name, object_type, status
FROM dba_objects
WHERE owner = 'APP_SCHEMA'
AND status = 'INVALID'
ORDER BY object_type, object_name;
-- Recompile any invalid objects
EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'APP_SCHEMA', compile_all => FALSE);