Transportable Tablespaces in Oracle
Transportable Tablespaces (TTS) is a powerful Oracle feature that allows you to move entire tablespaces (or sets of tablespaces) between databases by physically copying datafiles plus minimal metadata, instead of unloading and reloading all the data. Because you’re moving data files rather than rows, TTS operations can be vastly faster than conventional export/import.
In this article, we will:
- Walk through a full working example
- Explain each step and what the Oracle internals do
- Show cross-platform (endian) conversion cases
- Cover limitations, requirements, and gotchas
- Offer tips and best practices
Let’s begin.
Setup & Example Environment (Source Database)
First, we prepare a source database with a tablespace, a user, and a sample table that will live in that tablespace.
-- as SYS or privileged user
CREATE TABLESPACE test_data
DATAFILE '/u01/app/oracle/oradata/DB11G/test_data01.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE USER test_user IDENTIFIED BY test_user
DEFAULT TABLESPACE test_data
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON test_data;
GRANT CREATE SESSION, CREATE TABLE TO test_user;
CONNECT test_user/test_user;
CREATE TABLE test_tab (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT test_tab_pk PRIMARY KEY (id)
);
INSERT /*+ APPEND */ INTO test_tab (id, description)
SELECT level,
'Description for ' || level
FROM dual
CONNECT BY level <= 10000;
COMMIT;
- We created the TEST_DATA tablespace.
- Created user TEST_USER with that tablespace as its default.
- Created a table TEST_TAB in that tablespace and loaded 10,000 rows.
At this point, the objects are in the TEST_DATA tablespace, and we are ready to transport it.
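As a quick sanity check, you can confirm (still connected as TEST_USER) that the table and its primary-key index actually landed in TEST_DATA:

```sql
-- Both TEST_TAB (TABLE) and TEST_TAB_PK (INDEX) should be listed.
SELECT segment_name, segment_type, tablespace_name
FROM   user_segments
WHERE  tablespace_name = 'TEST_DATA';
```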
Step 1: Check Self-Containment (Transport Set Validation)
Before transporting any tablespace, Oracle requires that the set of tablespaces you pick is self-contained — meaning there are no dependencies crossing outside the set. For example, you should not have:
- Constraints referencing tables in other tablespaces
- Partitioned tables with some partitions in tablespaces outside the set
- LOB segments stored in tablespaces outside the set
- Foreign keys referencing tables outside the set
To perform the check:
EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA', incl_constraints => TRUE);
If that succeeds (no errors), your set is valid for transport.
If there are violations, Oracle will report them (for example, constraints or referential integrity issues) and you must resolve them (disable, drop, or include the dependent tablespaces) before proceeding.
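The procedure itself produces no output on success; any problems are recorded in a per-session view that you can query right after the call (as a privileged user):

```sql
-- Run in the same session as DBMS_TTS.TRANSPORT_SET_CHECK.
-- Zero rows returned means the transport set is self-contained.
SELECT violations FROM sys.transport_set_violations;
```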
Step 2: Put Tablespace(s) in Read-Only Mode
Transportable tablespaces (in the normal mode) require the source tablespaces to be read-only during the transport process. This ensures stability of data during metadata export, and prevents changes while you copy the files.
ALTER TABLESPACE test_data READ ONLY;
If you need to transport multiple tablespaces, repeat for each.
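Before exporting metadata, it is worth confirming the tablespace really is read-only:

```sql
-- STATUS should report READ ONLY for every tablespace being transported.
SELECT tablespace_name, status
FROM   dba_tablespaces
WHERE  tablespace_name = 'TEST_DATA';
```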
Step 3: Export Metadata (Data Pump)
Because we are not exporting the data rows (we’re copying datafiles), what remains is transporting the metadata (DDLs, object definitions, constraints, indexes) required to plug those tablespaces into the target database.
Example:
expdp system/password DIRECTORY=dpump_dir \
DUMPFILE=tts_testdata_meta.dmp \
TRANSPORT_TABLESPACES=test_data
This Data Pump export will include only metadata of the objects in TEST_DATA, not the actual rows.
In the export log you will see:
- The list of datafiles that must be transported.
- The Data Pump dump file that contains the metadata.
- A sample import script (impscript.sql), if your chosen method generates one (the RMAN TRANSPORT TABLESPACE method described later produces such a script; a plain expdp export does not).
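You can also list the datafiles that belong to the transport set yourself, rather than relying on the export log:

```sql
-- Every file returned here must be copied to the target.
SELECT file_name
FROM   dba_data_files
WHERE  tablespace_name = 'TEST_DATA';
```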
Step 4: Copy Datafiles + Dump File to Target
At the OS / storage level, copy:
- The datafiles of the tablespace(s) (e.g. test_data01.dbf)
- The Data Pump dump file
- The sample import script (if generated)
Make sure they land in a location the target database can access.
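However you copy the files (scp, NFS, storage snapshot), verify the copy byte-for-byte before attempting the import. A minimal, runnable sketch of the pattern, using temporary local directories to stand in for the source and target locations:

```shell
# Stand-in directories for the source datafile location and the
# target-accessible staging area (substitute your real paths/hosts).
SRC_DIR=$(mktemp -d)
TGT_DIR=$(mktemp -d)

# Create a stand-in "datafile" so the sketch runs end to end.
printf 'datafile contents' > "$SRC_DIR/test_data01.dbf"

# Copy, then compare byte-for-byte before importing.
cp "$SRC_DIR/test_data01.dbf" "$TGT_DIR/test_data01.dbf"
cmp "$SRC_DIR/test_data01.dbf" "$TGT_DIR/test_data01.dbf" && echo "copy verified"
```

Across hosts the same idea applies with scp plus a checksum (e.g. md5sum) computed on each side.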
If source and target are on different platforms (different endian format), you must also perform datafile conversion. You can do this either:
- At the source side (before copying)
- At the target side (after copying)
You can use RMAN’s CONVERT command to convert the datafile endian format.
For example, converting on the target host (the platform name must match an entry in V$TRANSPORTABLE_PLATFORM exactly):
RMAN> CONVERT DATAFILE '/path/test_data01.dbf'
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FORMAT '/conv/test_data01_conv.dbf';
If no endian difference is present, you can skip conversion.
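To decide whether conversion is needed, compare the endian format of the two platforms (run on either database):

```sql
-- The current database's platform name:
SELECT platform_name FROM v$database;

-- Endian format of every supported platform; if the two databases'
-- platforms show the same ENDIAN_FORMAT, no conversion is required.
SELECT platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_name;
```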
Step 5: (Optional) Switch Source Tablespace Back to Read/Write
If you want, you may switch the source tablespace back to read/write mode (if the downtime window allows it) so that operations can resume in source.
ALTER TABLESPACE test_data READ WRITE;
This is optional but often recommended to minimize source downtime.
Step 6: Import Metadata in Target Database
Now move to the target database and plug in the transported tablespace.
First, make sure the target database does not already contain a tablespace with the same name (the import creates the tablespace when it plugs in the datafiles), and that the schemas owning the transported objects already exist at the target; otherwise create them first or use the REMAP_SCHEMA import parameter.
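One thing Data Pump will not do in transportable mode is create the owning schemas; they must exist at the target before the import (or be remapped with REMAP_SCHEMA). A minimal sketch for our example:

```sql
-- On the target: pre-create the schema that owns the transported objects.
CREATE USER test_user IDENTIFIED BY test_user;
GRANT CREATE SESSION TO test_user;
```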
Then run:
impdp system/password DIRECTORY=dpump_dir \
DUMPFILE=tts_testdata_meta.dmp \
TRANSPORT_DATAFILES='/path/to/test_data01.dbf'
Here TRANSPORT_DATAFILES points to the physical datafiles you copied (or converted). The import utility reads the Data Pump metadata and attaches those datafiles to the target database.
If the sample script impscript.sql was generated, you can review or run that instead of crafting your own import commands.
After a successful import, the tablespace and its objects (tables, indexes, constraints) become part of the target database.
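A transported tablespace is plugged in read-only, so a typical post-import sequence on the target is to open it for writes and spot-check the objects:

```sql
ALTER TABLESPACE test_data READ WRITE;

-- Verify the transported segments arrived.
SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  tablespace_name = 'TEST_DATA';

-- And the data itself (10,000 rows in our example).
SELECT COUNT(*) FROM test_user.test_tab;
```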
Cross-Platform / Endian & Compatibility Considerations
When transporting tablespaces across platforms (e.g. Solaris → Linux, or Big-Endian → Little-Endian), Oracle imposes certain rules:
- Use V$TRANSPORTABLE_PLATFORM to list supported platforms and their endian formats.
- If the endianness differs between source and target, use RMAN CONVERT to adjust the datafile byte ordering.
- Encrypted tablespaces or tables with encrypted columns cannot always be transported. If the target database does not have the same keystore or encryption key setup, you may not be able to plug in the tablespace.
- The SYSTEM and SYSAUX tablespaces cannot be transported. Objects such as PL/SQL packages, users, roles, and sequences that reside in those tablespaces need manual handling or a separate Data Pump export/import.
- Character set compatibility: the source and target databases should ideally share the same character set, or the target's must be a superset. In some situations, limited conversions are allowed.
- Some features (XMLType, certain object types) might have extra constraints. For instance, when XMLType objects exist, you must use Data Pump metadata export and import.
- The minimum compatibility level of the target database must satisfy the transportable set’s computed requirement.
RMAN Transportable Tablespace (From Backup) — Alternative Method
In some cases, you may not want to make tablespaces read-only. Instead, you can use RMAN’s TRANSPORT TABLESPACE command to create a transportable set from backups. This allows the source database to remain open for writes while you prepare the transport.
Key features:
- RMAN starts an auxiliary instance to recover the datafiles to the correct SCN
- The auxiliary instance opens the tablespace in read-only mode and runs Data Pump export of metadata
- Supports specifying a point in time or SCN (you can transport as of a prior time)
- After operation, RMAN cleans up auxiliary instance files, leaving only the transportable set datafiles and dump file
- If endian conversion is needed, you still require RMAN CONVERT either before or after copying
Example snippet (in RMAN):
TRANSPORT TABLESPACE test_data
TABLESPACE DESTINATION '/tmp/transport'
AUXILIARY DESTINATION '/tmp/aux';
This is a powerful option when you want minimal source downtime and want to use backups rather than taking live datafiles offline.
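Because the transportable set is built from backups, you can also add an UNTIL clause to produce the tablespace as of an earlier SCN or time (assuming backups and archived logs covering that point exist). A sketch:

```sql
RMAN> TRANSPORT TABLESPACE test_data
        TABLESPACE DESTINATION '/tmp/transport'
        AUXILIARY DESTINATION '/tmp/aux'
        UNTIL TIME 'SYSDATE - 1/24';
```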
Common Errors, Pitfalls & Troubleshooting Tips
- Transport set check fails
  - You will get errors if constraints or dependencies cross outside your tablespace set. Resolve them (disable or include) before proceeding.
- Datafile names / paths mismatch
  - When copying datafiles, ensure correct paths and naming on target. Use DB_FILE_NAME_CONVERT or remap if necessary.
- Endianness conversion errors
  - If you transport between different endian platforms but miss conversion, your import will fail. Be sure to use RMAN CONVERT where needed.
- Keystore / encryption mismatch
  - If a tablespace is encrypted or contains encrypted columns, you must ensure the target database has a compatible keystore, or skip transport and use conventional Data Pump.
- Dump file or import script already exists
  - If you re-run the export or transport, existing dump or script files may conflict. Delete or rename prior files.
- Invalid compatibility levels
  - If the target database's compatibility is lower than required by the transport set, import will fail. Always check compatibility.
- Objects in SYSTEM / SYSAUX
  - These cannot be transported. Manually script and import these objects (users, packages, etc.).
- XMLType complications
  - With XMLType objects, you may need a full Data Pump export/import, or metadata-only Data Pump, depending on version and restrictions.
- Tablespace name collisions
  - If the target already has a tablespace with the same name, take care to avoid clashes or use remapping.
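For the name-collision case in particular, one common workaround is to rename the existing tablespace on the target before the import (a sketch; pick any non-clashing name):

```sql
-- On the target, before running impdp with TRANSPORT_DATAFILES:
ALTER TABLESPACE test_data RENAME TO test_data_old;
```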
Tips & Best Practices for Real Environments
- Always test in a staging environment before applying transport in production.
- Use incremental or partial transport when only part of data changes.
- When doing cross-platform migrations, plan endian conversion ahead.
- Keep an eye on storage paths, file permissions, and file ownership on target.
- Use parfiles or script templates to manage large numbers of files or tablespaces.
- After import, verify consistency: check constraints, indexes, object counts.
- Monitor performance—transporting very large tablespaces may take significant time depending on I/O and network.
- Document all steps, file copies, and scripts for auditability and repeatability.