
How to Export an Oracle Schema with Data Pump


Oracle Data Pump (expdp) is the standard tool for logical backups and migrations. It runs server-side, is significantly faster than the legacy exp utility, and supports parallelism, compression, and fine-grained filtering. This guide covers all common export scenarios.

Data Pump requires a directory object — an Oracle object pointing to a filesystem path the database server can write to. You cannot point expdp directly at a path without one.

-- Check existing directory objects
SELECT directory_name, directory_path
FROM dba_directories
ORDER BY directory_name;
-- Create a directory object (requires the CREATE ANY DIRECTORY privilege or a SYSDBA connection)
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/datapump/exports';
-- Grant access to the exporting user
GRANT READ, WRITE ON DIRECTORY dp_dir TO app_schema;
GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
-- Verify the OS path exists and is writable by the oracle OS user
-- Run from OS: ls -ld /u01/datapump/exports
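If the OS path does not exist yet, it must be created before the export runs. A minimal sketch, assuming a Linux host (the path is illustrative; on a real database server you would also chown the directory to the oracle OS user):

```shell
# Create the export directory the DIRECTORY object points at.
# DP_DIR defaults to a sample path here; substitute your server's real location,
# e.g. /u01/datapump/exports.
DP_DIR="${DP_DIR:-/tmp/datapump/exports}"
mkdir -p "$DP_DIR"
chmod 750 "$DP_DIR"   # on a real server, also: chown oracle:oinstall "$DP_DIR"
ls -ld "$DP_DIR"
```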

A user can always export its own schema. Exporting other users' schemas or the full database requires the DATAPUMP_EXP_FULL_DATABASE role (which includes the legacy EXP_FULL_DATABASE role).
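For a dedicated export account, the grants look roughly like this. This is a sketch: the user name, password, and quota are assumptions, not part of any standard setup.

```sql
-- Hypothetical dedicated export account; names and sizes are illustrative
CREATE USER dp_operator IDENTIFIED BY "StrongPassword#1";
GRANT CREATE SESSION TO dp_operator;
-- Allows exporting schemas other than dp_operator's own
GRANT DATAPUMP_EXP_FULL_DATABASE TO dp_operator;
GRANT READ, WRITE ON DIRECTORY dp_dir TO dp_operator;
-- The job's master table is created in the invoking schema, so a quota is needed
ALTER USER dp_operator QUOTA 100M ON users;
```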

Run expdp from the OS command line as the oracle user (or any user with EXP_FULL_DATABASE).

# Export a single schema
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_%U.dmp \
LOGFILE=app_schema_exp.log \
PARALLEL=4
# The %U in DUMPFILE creates numbered files: app_schema_01.dmp, app_schema_02.dmp...
# PARALLEL=4 writes to 4 files simultaneously — faster for large schemas
# Export multiple schemas in one job
expdp system/manager@ORCL \
SCHEMAS=app_schema,ref_schema,audit_schema \
DIRECTORY=dp_dir \
DUMPFILE=multi_schema_%U.dmp \
LOGFILE=multi_schema_exp.log \
PARALLEL=4 \
COMPRESSION=ALL
# Full database export (requires EXP_FULL_DATABASE role)
expdp system/manager@ORCL \
FULL=Y \
DIRECTORY=dp_dir \
DUMPFILE=full_db_%U.dmp \
LOGFILE=full_db_exp.log \
PARALLEL=8 \
COMPRESSION=ALL \
METRICS=YES
# Export specific tables only
expdp system/manager@ORCL \
TABLES=app_schema.orders,app_schema.customers,app_schema.products \
DIRECTORY=dp_dir \
DUMPFILE=tables_exp_%U.dmp \
LOGFILE=tables_exp.log \
PARALLEL=2

Compression typically reduces dump file size by 60-80%, but COMPRESSION=ALL requires the Advanced Compression Option license.

# Compress data and metadata
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_comp_%U.dmp \
LOGFILE=app_schema_comp_exp.log \
COMPRESSION=ALL \
COMPRESSION_ALGORITHM=MEDIUM \
PARALLEL=4

Without the Advanced Compression license, use COMPRESSION=METADATA_ONLY, which does not require a separate license.
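If you are unsure whether Advanced Compression features have already been recorded against your database (which matters for license audits), the feature-usage view gives a hint. A sketch; interpreting these rows for licensing purposes is beyond this guide:

```sql
-- Check whether compression-related features have been detected in use
SELECT name, detected_usages, last_usage_date
FROM dba_feature_usage_statistics
WHERE name LIKE '%Compression%'
ORDER BY name;
```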

Step 6: Export with Filtering and Exclusions

# Exclude specific object types (e.g., statistics, grants)
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_nostat_%U.dmp \
LOGFILE=app_schema_nostat_exp.log \
EXCLUDE=STATISTICS \
EXCLUDE=GRANT \
PARALLEL=4
# Export only rows matching a condition (table-level filter)
expdp system/manager@ORCL \
TABLES=app_schema.orders \
DIRECTORY=dp_dir \
DUMPFILE=orders_2024_%U.dmp \
LOGFILE=orders_2024_exp.log \
QUERY=app_schema.orders:'"WHERE order_date >= DATE '"'"'2024-01-01'"'"'"'
# Export only specific object types (e.g., just procedures and packages)
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_code.dmp \
LOGFILE=app_schema_code_exp.log \
INCLUDE=PROCEDURE,PACKAGE,FUNCTION,TRIGGER,VIEW
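The QUERY quoting above is fragile because the shell, not expdp, interprets most of the characters. A parameter file sidesteps the escaping entirely; the file name below is illustrative:

```shell
# Write the parameters to a parfile; no shell escaping is needed inside it
cat > orders_2024.par <<'EOF'
TABLES=app_schema.orders
DIRECTORY=dp_dir
DUMPFILE=orders_2024_%U.dmp
LOGFILE=orders_2024_exp.log
QUERY=app_schema.orders:"WHERE order_date >= DATE '2024-01-01'"
EOF

# Then run the export with the parfile:
# expdp system/manager@ORCL PARFILE=orders_2024.par
```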

For a consistent snapshot across a long-running export, use FLASHBACK_SCN or FLASHBACK_TIME.

-- Get current SCN before starting
SELECT current_scn FROM v$database;
-- Output: 12345678
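If the exporting user cannot read v$database, the same number is available through DBMS_FLASHBACK (requires EXECUTE on the package):

```sql
-- Alternative way to capture the current SCN
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;
```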
# Export consistent to a specific SCN
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_consistent.dmp \
LOGFILE=app_schema_consistent_exp.log \
FLASHBACK_SCN=12345678
# Or use FLASHBACK_TIME for time-based consistency
# (FLASHBACK_TIME=SYSTIMESTAMP is a simpler option when "consistent as of job start" is enough)
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
DUMPFILE=app_schema_time.dmp \
FLASHBACK_TIME='"TO_TIMESTAMP('"'"'2024-03-15 02:00:00'"'"', '"'"'YYYY-MM-DD HH24:MI:SS'"'"')"'
# Estimate size without actually exporting
expdp system/manager@ORCL \
SCHEMAS=app_schema \
DIRECTORY=dp_dir \
ESTIMATE_ONLY=YES \
PARALLEL=4
-- From another session, monitor the export job
-- (v$session_longops rows for Data Pump carry the job name in OPNAME)
SELECT j.owner_name, j.job_name, j.state, j.degree,
lo.sofar, lo.totalwork, lo.units,
ROUND(100 * lo.sofar / NULLIF(lo.totalwork, 0), 1) AS pct_done
FROM dba_datapump_jobs j
JOIN v$session_longops lo ON lo.opname = j.job_name
WHERE j.state = 'EXECUTING';
-- Attach to a running job to check/control it
-- From OS: expdp system/manager@ORCL ATTACH=SYS_EXPORT_SCHEMA_01
-- Then at the Export> prompt: STATUS, STOP_JOB, or KILL_JOB

Missing directory object — expdp will fail immediately if the DIRECTORY object does not exist or the user has no READ/WRITE grant on it.

Insufficient disk space — Uncompressed Data Pump dumps are often larger than the logical data size. Estimate with ESTIMATE_ONLY=YES first, and ensure 2x space is available.

Exporting from wrong container — In a CDB, connect to the correct PDB: expdp system@pdb_prod not expdp system@cdb.

Using exp instead of expdp — The legacy exp utility has been deprecated since Oracle 11g and does not support features introduced after Oracle 10g (SecureFiles LOBs, for example). Always use expdp.

Not testing the dump file — A dump file that was interrupted or partially written may appear complete. Always run impdp SQLFILE=test.sql to verify the dump can be read.

Exporting without PARALLEL — Single-threaded exports of multi-gigabyte schemas are slow. A common starting point is PARALLEL equal to half the available CPU cores, combined with a %U DUMPFILE template so each worker can write its own file (a PARALLEL degree higher than the number of dump files leaves workers idle).
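The half-the-cores guideline can be computed on the server itself. A minimal sketch, assuming a Linux host with nproc available:

```shell
# Suggest a PARALLEL degree of half the CPU cores, but at least 1
CORES=$(nproc)
PARALLEL=$(( CORES / 2 ))
if [ "$PARALLEL" -lt 1 ]; then
  PARALLEL=1
fi
echo "PARALLEL=$PARALLEL"
```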

-- List Data Pump jobs (a job that completed cleanly drops out of this view;
-- lingering NOT RUNNING rows usually indicate a stopped or failed job)
SELECT job_name, operation, job_mode, state,
degree, attached_sessions
FROM dba_datapump_jobs
ORDER BY job_name DESC
FETCH FIRST 10 ROWS ONLY;
-- Check the log file for errors (run from OS)
-- grep -i error /u01/datapump/exports/app_schema_exp.log
-- grep -i ORA- /u01/datapump/exports/app_schema_exp.log
-- Verify dump file exists and has reasonable size (run from OS)
-- ls -lh /u01/datapump/exports/app_schema_*.dmp
-- Quick dump file validation (writes DDL to a script, loads no data;
-- a multi-file dump set needs the same %U DUMPFILE template used at export time)
-- impdp system/manager@ORCL \
-- DIRECTORY=dp_dir \
-- DUMPFILE=app_schema_%U.dmp \
-- SQLFILE=dp_dir:app_schema_ddl.sql
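The log checks above can be combined into a single pass/fail script. A sketch: the log path is an assumption, and the success test relies on Data Pump's usual closing line, which ends a clean run with "successfully completed".

```shell
# Pass/fail check on a Data Pump log file (path is illustrative)
LOG="${LOG:-/u01/datapump/exports/app_schema_exp.log}"
if grep -q "successfully completed" "$LOG" 2>/dev/null \
   && ! grep -q "ORA-" "$LOG" 2>/dev/null; then
  STATUS="OK"
else
  STATUS="FAILED"
fi
echo "export $STATUS"
```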