Transportable Tablespaces – Step-by-Step Guide
Transportable Tablespaces were first introduced in Oracle 8i to allow entire tablespaces to be copied between databases in the time it takes to copy the datafiles.
In Oracle 8i, the major restriction was that both source and target databases had to use the same block size. With the introduction of multiple block sizes in Oracle 9i, that restriction was removed.
This guide demonstrates a simple, practical example of how to transport a tablespace between two Oracle databases.
1. Setup
For this demonstration, we’ll create a new tablespace, user, and table in the source database.
CONN / AS SYSDBA
CREATE TABLESPACE test_data
DATAFILE '/u01/app/oracle/oradata/DB11G/test_data01.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE USER test_user IDENTIFIED BY test_user
DEFAULT TABLESPACE test_data
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON test_data;
GRANT CREATE SESSION, CREATE TABLE TO test_user;
CONN test_user/test_user
CREATE TABLE test_tab (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT test_tab_pk PRIMARY KEY (id)
);
INSERT /*+ APPEND */ INTO test_tab (id, description)
SELECT level,
'Description for ' || level
FROM dual
CONNECT BY level <= 10000;
COMMIT;
2. Source Database Preparation
Before transporting, the tablespace must be self-contained. Oracle provides the procedure DBMS_TTS.TRANSPORT_SET_CHECK to validate this.
CONN / AS SYSDBA
EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA', incl_constraints => TRUE);
Once the procedure completes, query the TRANSPORT_SET_VIOLATIONS view to check for violations:
SELECT * FROM transport_set_violations;
Expected output:
no rows selected
If no violations appear, proceed to make the tablespace read-only.
ALTER TABLESPACE test_data READ ONLY;
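Before running the export, it is worth confirming the tablespace really is read-only. A minimal check against DBA_TABLESPACES:

```sql
-- Confirm the tablespace status before exporting its metadata.
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name = 'TEST_DATA';
```

The STATUS column should report READ ONLY before you proceed.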
3. Export the Tablespace Metadata
Export the tablespace metadata using Data Pump (expdp). Data Pump is available from Oracle 10g onward; for earlier releases, use the original export utility (exp).
Create a directory object and grant permissions:
CONN / AS SYSDBA
CREATE OR REPLACE DIRECTORY temp_dir AS '/tmp/';
GRANT READ, WRITE ON DIRECTORY temp_dir TO system;
Run Data Pump Export:
$ expdp userid=system/password directory=temp_dir transport_tablespaces=test_data dumpfile=test_data.dmp logfile=test_data_exp.log
If using a version prior to 10g:
$ exp userid='system/password as sysdba' transport_tablespace=y tablespaces=test_data file=test_data.dmp log=test_data_exp.log
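From 10g onward, tablespaces can also be transported between platforms, provided the endian formats match (otherwise the datafiles must first be converted with RMAN). As a sketch, you can compare the two servers by querying V$TRANSPORTABLE_PLATFORM on each:

```sql
-- Run on both source and destination; the ENDIAN_FORMAT values must match
-- for a straight datafile copy (otherwise an RMAN CONVERT is required).
SELECT d.platform_name, tp.endian_format
  FROM v$transportable_platform tp
  JOIN v$database d ON tp.platform_id = d.platform_id;
```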
Copy Files to Destination
Use SCP or binary FTP to copy the following files to the destination server:
- The datafile: /u01/app/oracle/oradata/DB11G/test_data01.dbf
- The dump file: /tmp/test_data.dmp
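For example, using scp (the destination hostname and target paths here are placeholders; adjust them to your own environment):

```shell
# Copy the datafile and the metadata dump file to the destination server.
# "dest-server" and the target paths are assumptions; adjust to your setup.
scp /u01/app/oracle/oradata/DB11G/test_data01.dbf oracle@dest-server:/u01/app/oracle/oradata/DB11GB/
scp /tmp/test_data.dmp oracle@dest-server:/tmp/
```

Datafiles are binary, so if you use FTP instead, make sure the transfer mode is binary, not ASCII, or the copied datafile will be corrupt.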
4. Return Source Tablespace to Read/Write Mode
Once export and copy are complete:
ALTER TABLESPACE test_data READ WRITE;
5. Destination Database Setup
At the destination, create the same user that owns objects within the transported tablespace (if it doesn’t already exist).
CONN / AS SYSDBA
CREATE USER test_user IDENTIFIED BY test_user;
GRANT CREATE SESSION, CREATE TABLE TO test_user;
6. Import the Tablespace Metadata
Just like with export, if you’re using Oracle 10g or above, use Data Pump Import (impdp).
Create the Directory Object:
CONN / AS SYSDBA
CREATE OR REPLACE DIRECTORY temp_dir AS '/tmp/';
GRANT READ, WRITE ON DIRECTORY temp_dir TO system;
Run Data Pump Import:
$ impdp userid=system/password directory=temp_dir dumpfile=test_data.dmp logfile=test_data_imp.log transport_datafiles='/u01/app/oracle/oradata/DB11GB/test_data01.dbf'
For pre-10g versions:
$ imp userid='system/password as sysdba' transport_tablespace=y datafiles='/u01/app/oracle/oradata/DB11GB/test_data01.dbf' tablespaces=test_data file=test_data.dmp log=test_data_imp.log
7. Switch Tablespace to Read/Write Mode
After import completes:
ALTER TABLESPACE test_data READ WRITE;
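One point worth noting: the user created at the destination was given no quota, so it cannot yet create or extend segments in the transported tablespace. Assuming test_user should own writable data there, grant a quota once the tablespace is read/write:

```sql
-- Without a quota, DML that allocates space in TEST_DATA would fail
-- with ORA-01536 (space quota exceeded).
ALTER USER test_user QUOTA UNLIMITED ON test_data;
```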
8. Verification
Verify that the tablespace has been successfully plugged in and is available.
SELECT tablespace_name, plugged_in, status
FROM dba_tablespaces
WHERE tablespace_name = 'TEST_DATA';
Expected output:
TABLESPACE_NAME PLU STATUS
------------------------------ --- ---------
TEST_DATA YES ONLINE
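As a final sanity check, you can also verify that the data itself arrived; the source table was loaded with 10,000 rows in step 1:

```sql
-- Row count should match the source (10,000 rows were inserted in step 1).
SELECT COUNT(*) FROM test_user.test_tab;
```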
Conclusion
You have now successfully transported a tablespace from one database to another.
This method is significantly faster than traditional export/import since it only involves copying the datafiles and transporting metadata, rather than physically unloading and reloading table data.
Key points to remember:
- Always verify self-containment using DBMS_TTS.TRANSPORT_SET_CHECK.
- Ensure tablespaces are set to READ ONLY before export.
- Copy both datafiles and dump files securely.
- Restore READ WRITE mode once the transport process is complete.