Exadata Hybrid Columnar Compression (EHCC) FAQ

What is Exadata Hybrid Columnar Compression (EHCC) ?

Exadata Hybrid Columnar Compression (EHCC), also known as Hybrid Columnar Compression (HCC), is a compression method in which data is organized in a hybrid of columnar and row formats and compressed, rather than stored in the basic row format.

What are the types of compression available with EHCC ?

EHCC is available in the following types:

1. Warehouse Compression
Within warehouse compression there are two subtypes:
a. Query HIGH
b. Query LOW

2. Archive Compression
Within archive compression there are two subtypes:
a. Archive HIGH
b. Archive LOW

Can CTAS be used to create EHCC tables ?

Yes, CTAS can be used to create EHCC tables.
Example :

SQL> create table example compress for query high as select * from example2;

In this example, the "compress for query high" clause is the EHCC-specific part that creates the table with compression enabled.

How to convert existing / non-EHCC table to use EHCC ?

Existing / non-EHCC tables can be converted to EHCC using alter table commands as below.

SQL > alter table example move compress for query high; 
SQL > alter table example compress for query high;

In the first case, as MOVE is specified, the existing data is rewritten into the new compression format.
In the second case, only newly loaded data gets EHCC compression; the existing data is kept as it is.

You can also specify a different compression type for each partition of an existing table. For example: ALTER TABLE <TABLENAME> MODIFY PARTITION <PARTITION NAME> compress for query high;

How to disable EHCC compression on a table ?

To disable or uncompress an EHCC table, use the alter table command as below. The first form stops compressing newly loaded data only; the second form (with MOVE) also rewrites the existing data uncompressed:

SQL> alter table example nocompress; 
SQL> alter table example move nocompress;

Can we use DBMS_REDEFINITION online redefinition package for EHCC tables ?

Yes. DBMS_REDEFINITION does support EHCC tables and can be used to move data in online mode.

How do updates work for an EHCC table ?

When a row in an EHCC table is updated, the database internally performs a delete followed by an insert of the row.
While doing so, two major changes happen at the row level:
1. The ROWID of the updated row changes, because of the delete followed by insert.
2. The updated row comes out of EHCC compression and is stored in OLTP compression format.
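Both effects can be observed with a simple experiment; the table and column names below are illustrative, and the constant value noted is the one defined for 11.2:

```sql
-- Assumes an EHCC table EXAMPLE with columns ID and PRICE (illustrative names).
SELECT rowid,
       dbms_compression.get_compression_type(user, 'EXAMPLE', rowid) AS comp_type
FROM   example
WHERE  id = 42;

UPDATE example SET price = price * 1.1 WHERE id = 42;
COMMIT;

-- After the update the ROWID has typically changed, and comp_type now reports
-- OLTP compression (COMP_FOR_OLTP = 2 in 11.2) rather than an EHCC type.
SELECT rowid,
       dbms_compression.get_compression_type(user, 'EXAMPLE', rowid) AS comp_type
FROM   example
WHERE  id = 42;
```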

What is Compression Unit?

Hybrid Columnar Compression uses a logical construct called a compression unit (CU) to store a set of rows. When you load data into a table, the database stores groups of rows in columnar format, with the values for each column stored and compressed together. After the database has compressed the column data for a set of rows, the database fits the data into the compression unit.

How to check compression format for a particular row ?

Use dbms_compression.get_compression_type() to get the compression type at the row level. It takes the table owner, the table name, and a ROWID, and returns the compression type for that row as a number. The return value maps to constants defined in the dbms_compression package, such as COMP_NOCOMPRESS, COMP_FOR_OLTP, COMP_FOR_QUERY_LOW/HIGH, and COMP_FOR_ARCHIVE_LOW/HIGH (as named in 11.2).
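A minimal sketch of using it in a query (SCOTT.EXAMPLE is an illustrative table; the numeric values decoded are the 11.2 dbms_compression constants and may differ across versions):

```sql
-- Map the numeric compression type to a readable label for the first 10 rows.
SELECT DECODE(dbms_compression.get_compression_type('SCOTT', 'EXAMPLE', rowid),
              1,  'NOCOMPRESS',
              2,  'OLTP',
              4,  'QUERY HIGH',
              8,  'QUERY LOW',
              16, 'ARCHIVE HIGH',
              32, 'ARCHIVE LOW',
                  'OTHER') AS compression_type
FROM   scott.example
WHERE  rownum <= 10;
```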
How to get estimate of compression ratios for EHCC compression types ?

Use dbms_compression.get_compression_ratio().

desc dbms_compression
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
SCRATCHTBSNAME                 VARCHAR2                IN
OWNNAME                        VARCHAR2                IN
TABNAME                        VARCHAR2                IN
PARTNAME                       VARCHAR2                IN
COMPTYPE                       NUMBER                  IN
BLKCNT_CMP                     BINARY_INTEGER          OUT
BLKCNT_UNCMP                   BINARY_INTEGER          OUT
ROW_CMP                        BINARY_INTEGER          OUT
ROW_UNCMP                      BINARY_INTEGER          OUT
CMP_RATIO                      NUMBER                  OUT
COMPTYPE_STR                   VARCHAR2                OUT
SUBSET_NUMROWS                 NUMBER                  IN     DEFAULT
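A sketch of calling it from an anonymous PL/SQL block, using the parameters shown above (the scratch tablespace, owner, and table names are illustrative):

```sql
-- Estimate the "query high" compression ratio for SCOTT.EXAMPLE.
-- SCRATCH_TBS, SCOTT and EXAMPLE are illustrative names.
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    BINARY_INTEGER;
  l_blkcnt_uncmp  BINARY_INTEGER;
  l_row_cmp       BINARY_INTEGER;
  l_row_uncmp     BINARY_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'SCRATCH_TBS',
    ownname        => 'SCOTT',
    tabname        => 'EXAMPLE',
    partname       => NULL,
    comptype       => dbms_compression.comp_for_query_high,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line(l_comptype_str || ' estimated ratio: ' || l_cmp_ratio);
END;
/
```

The procedure loads a sample of the table into the scratch tablespace, compresses it with the requested type, and reports the resulting ratio, so it needs enough free space there for the sample.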

How to enable tracing for EHCC ?

Following events can be used to enable tracing for EHCC.

event="trace[ADVCMP_COMP] disk <level>"   (tracing for EHCC loading)
event="trace[ADVCMP_DECOMP] disk <level>" (tracing for EHCC queries)
event="trace[ADVCMP_MAIN] disk <level>"   (tracing for everything)

Where <level> can be one of the following:
lowest, low, medium, high, highest
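For example, EHCC load tracing could be enabled for the current session as below (a sketch using the standard event syntax; verify the exact form on your version before relying on it):

```sql
-- Trace EHCC compression (loading) at level "low" for this session only.
ALTER SESSION SET EVENTS 'trace[ADVCMP_COMP] disk low';

-- ... run the direct-path load to be traced ...

-- Turn the trace off again.
ALTER SESSION SET EVENTS 'trace[ADVCMP_COMP] off';
```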

What storage types are supported for EHCC or HCC ?

Apart from Exadata, HCC is supported on Pillar Axiom and Sun ZFS Storage Appliance (ZFSSA).

Can we specify different compression type for each partition in a table ?

Yes. Compression can be specified at the level of a segment and hence EHCC can be specified at partition level.
For Example:

SQL> create table orders (cid, pid, sid, price, discount, odate)
partition by range (cid) (
partition p1 values less than (100000) nocompress,
partition p2 values less than (200000) compress for archive low,
partition p3 values less than (300000) compress for query high,
partition p4 values less than (maxvalue) compress for query low)
as select * from prev_orders;

Does EHCC support function based indexes ?

Yes, all index types are supported including function based indexes.

Is EHCC suitable for single row access methods ?

Yes. The Query compression types (Query HIGH and Query LOW) are optimized for single-row queries.
Single-row access can be expensive with Archive HIGH and Archive LOW compared with the Query types, but it can still be used.

Does EHCC uncompress CUs to access the data? Will this result in multiple I/Os for a single row access ?

Yes, to access a single row the entire CU is read, and a CU can be comprised of multiple blocks.
The decompression happens on the cell side, so the CPU and memory available on the Exadata cells are used.
Having said this, the EHCC query architecture is optimized to increase scan query performance by taking advantage of reading fewer blocks.

Can we use EHCC on normal server with SUN ZFS filesystem ?

Yes, provided the ZFS storage is of a supported type.
While performing the compression, the database checks certain libraries, and if the storage falls into a supported storage type (Exadata, Sun ZFSSA, or Pillar Axiom), EHCC compression is achieved.

What are supported RDBMS version for EHCC ?

All 11.2.X.X and 12c RDBMS versions support EHCC.

If a table already has OLTP compression enabled, will I be able to enable EHCC on the same table ?

Yes, it can be converted to EHCC compression.
Use "alter table [table name] move compress for [compression type]" to move all existing rows to EHCC.
But at any time you can have only one compression type enabled at the table level.

Does EHCC support the BLOB data type ?

No, EHCC does not support BLOB. As BLOB holds unstructured data, SecureFiles compression has to be used instead.

Does EHCC support the LONG data type ?

No, LONG datatype is not supported for EHCC.

Can both EHCC and OLTP compression be on for a single table ?

Only one compression type can be enabled at the table level. If EHCC is enabled at the table level and single-row updates are performed, the updated rows go into OLTP-compressed blocks. As a result, a table can contain a mix of EHCC-compressed and OLTP-compressed rows.

In a warehouse where rows are added to an EHCC table via ETL, can the new rows be converted to EHCC, or does the whole table have to be periodically recompressed ?

If the ETL loads with direct-path load, the new rows are added with EHCC compression; otherwise they go to OLTP compression.
In case the new rows go to OLTP compression, you will have to run "alter table ... move" periodically to move that data back to EHCC compression.
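For instance, a direct-path load via the APPEND hint keeps new rows in EHCC format, while a conventional insert does not (table names are illustrative):

```sql
-- Direct-path load: new rows are EHCC-compressed.
INSERT /*+ APPEND */ INTO example SELECT * FROM staging_orders;
COMMIT;

-- Conventional load: new rows land in OLTP-compressed blocks instead.
INSERT INTO example SELECT * FROM staging_orders;
COMMIT;

-- Periodically recompress everything back into EHCC format.
ALTER TABLE example MOVE COMPRESS FOR QUERY HIGH;
```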

Are there any application code changes if we convert tables to EHCC ?

No, there are no application-level code changes required to read or update data in EHCC tables.

Can we join a regular table with EHCC table in a query ?

Yes, we can join between EHCC and non-EHCC tables.

Which view to check the compression type ?

Use dba_tables to check the compression type.
Example :

SQL> Select owner, table_name, compress_for from sys.dba_tables where compression = 'ENABLED' order by 1,2;

OWNER                          TABLE_NAME                     COMPRESS_FOR
------------------------------ ------------------------------ ------------
INT_AGIIE                      I_POSITIONS                    BASIC
INT_AGIIE                      X_INT_AGIIE_STATUS             BASIC
SCOTT                          AK                             QUERY HIGH
SCOTT                          EHCC                           QUERY HIGH
SCOTT                          EHCC_W                         QUERY LOW
SCOTT                          EXAMPLE                        ARCHIVE LOW
SCOTT                          HCCTEST                        QUERY LOW
SCOTT                          OAVAL                          QUERY HIGH
SCOTT                          TAB1                           QUERY LOW
SYS                            AJAY                           QUERY LOW
SYS                            EHCC1                          QUERY HIGH

For partitioned tables, use DBA_TAB_PARTITIONS as well, since compression can be different at the partition level.
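For example, per-partition compression settings can be listed like this (the table name is illustrative):

```sql
-- Per-partition compression settings for one table.
SELECT table_owner, table_name, partition_name, compression, compress_for
FROM   dba_tab_partitions
WHERE  table_name = 'ORDERS'
ORDER  BY partition_position;
```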

What’s the difference between query low and query high ?

Query High gives around a 10X compression ratio, whereas Query Low gives around a 6X compression ratio.
Query performance is better with Query High as compared to Query Low.
Load time is better with Query Low as compared to Query High.

If one table is marked as query high, can it be moved to archive high or any other EHCC method directly ?

Yes, this can be done using the "alter table" command. If MOVE is specified with the alter table command, existing data is moved to the new compression format, whereas if MOVE is not specified, the new compression takes effect only for subsequent loads.

What is the impact on performance if taking a table out of EHCC? specially if the table is large ?

To take a table out of EHCC you need to use "alter table .. move nocompress". When you do this, the table is
locked and offline for DML. To do this online, one can use the online redefinition package (DBMS_REDEFINITION). Apart from this there is no other performance impact.
In order to improve the performance of this activity, one should run the alter table command in parallel.
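For example, the uncompress of a large table could be parallelized as below (the degree and index name are illustrative):

```sql
-- Rewrite the table without compression, using 8 parallel slaves.
ALTER TABLE example MOVE NOCOMPRESS PARALLEL 8;

-- A move marks the table's indexes UNUSABLE, so rebuild them afterwards.
ALTER INDEX example_pk REBUILD PARALLEL 8;
```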

In EHCC we may have a table to have mix of HCC and OLTP compression. Does this create extra overhead with fetching data when range scan is used ?

Yes, it can cause extra overhead if the data is spread across blocks with different compression types.
To avoid this, one should move the table to its EHCC compression type regularly so the OLTP-compressed rows are recompressed into EHCC.

Is the row migrated out of CU when update occurs causing row migration ?

Yes, the row goes out of the CU and is stored in an OLTP-compressed block.

Harshad Vengurlekar

Experienced OCM-certified Oracle Database Administrator with over 18 years of expertise in designing, implementing, and managing complex database solutions. My expertise spans performance optimization, security, and high-stakes solution implementation. Adept at managing complex environments with precision.
