
Teradata Columnar? What is this!

Teradata 14.0 introduces Teradata Columnar – a new option to organize the data of a user-defined table or join index on disk.

Teradata Columnar offers the ability to partition a table or join index by column. It introduces column-storage as an alternative to row-storage for a column partition, along with autocompression. Column partitioning can be used alone in a single-level partitioning definition or with row partitioning in a multilevel partitioning definition.

  • Teradata Columnar is a new paradigm for partitioning, storing data, and compression that changes the cost-benefit tradeoffs of the available physical database design choices and their combinations. It benefits the user by reducing I/O for certain classes of queries while at the same time decreasing space usage.

A column-partitioned (CP) table or join index has several key characteristics:

  • It does not have a primary index (a future blog entry will discuss why).
  • Each of its column partitions can be composed of a single column or multiple columns.
  • Each column partition usually contains multiple physical rows. Physical rows are the disk-based structures that the Teradata file system uses to store data in order based on the rowid associated with each physical row. The first part of a rowid indicates the partition number so that physical rows are ordered by partition number first and within partition by the rest of the rowid.
  • A new physical row format COLUMN may be utilized for a column partition; such a physical row is called a container. This is used to implement column-storage, row header compression, and autocompression for a column partition. This provides a compact way to store a series of column partition values.
  • Alternatively, a column partition may have physical rows with ROW format that are used to implement row-storage; such a physical row is called a subrow. Each column partition value is in its own physical row. Usually a subrow is wide (multicolumn, large character strings, etc.), where the row header overhead for each column partition value is insignificant and having each column partition value in its own physical row provides more direct access to the value.
  • A CP table is just another type of table that can be accessed by a query. A single query can access multiple kinds of tables.
  • PPI stands for partitioned primary index which means the table has a primary index and the rows are partitioned on the AMPs (and within a partition, the rows are ordered by a hash of the primary index columns).
  • A CP table is not a PPI table, since a CP table doesn't have a primary index. A CP table can still have RANGE_N and CASE_N row partitioning (the kind of partitioning of rows that is used in PPI), but since there is no primary index, the rows within a row partition are not ordered by a hash of some columns of each row; they are just in insert order.
  • A CP table could have a join index on it where the join index does have a primary index (but not column partitioning).
  • A PI or PPI table could have a join index on it where the join index has column partitioning (but not a primary index) plus optionally 1 or more levels of row partitioning.
  • A NoPI table can't have row partitioning unless it also has column partitioning.
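
The container vs. subrow tradeoff above can be made concrete with a small Python sketch. The sizes below are purely illustrative assumptions, not Teradata's actual on-disk layout: a COLUMN-format container pays one row header for a whole series of column partition values (row header compression), while ROW-format subrows pay the header once per value.

```python
# Illustrative sizes only -- not Teradata's real on-disk format.
ROW_HEADER_BYTES = 14   # assumed per-physical-row overhead
VALUE_BYTES = 4         # e.g., one INTEGER column partition value

def container_bytes(n_values):
    """COLUMN format: one physical row (a container) holds many values,
    so the row header cost is paid once (row header compression)."""
    return ROW_HEADER_BYTES + n_values * VALUE_BYTES

def subrow_bytes(n_values):
    """ROW format: each column partition value sits in its own physical
    row (a subrow), so the row header cost is paid per value."""
    return n_values * (ROW_HEADER_BYTES + VALUE_BYTES)

print(container_bytes(1000))  # 4014
print(subrow_bytes(1000))     # 18000
```

With narrow values the container wins by a wide margin; for wide subrows (large character strings, many columns grouped together) the per-value header becomes insignificant, which is exactly when ROW format is the better choice.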

For example:

CREATE TABLE SalesTxns      -- table name and TxnNo/ItemNo types are assumed
  ( TxnNo INTEGER,
    TxnDate DATE,
    ItemNo INTEGER,
    Quantity INTEGER )
  PARTITION BY COLUMN;

This creates a column-partitioned (CP) table that partitions the data of the table vertically. Each column is in its own column partition that is stored using column-storage with row header compression and autocompression. All the data for TxnNo comes first, followed by the data for TxnDate, followed by the data for ItemNo, and then the data for Quantity. Note that a primary index is not specified, so this is a NoPI table. Moreover, a primary index must not be specified if the table is column partitioned.
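
The storage order just described can be sketched with a toy Python model (row values are invented for illustration; this is not Teradata's internal representation): column partitioning stores all of one column's values contiguously before any of the next column's values.

```python
# Toy rows for the CP table above (invented sample data).
rows = [
    (1, "2011-01-01", 10, 5),
    (2, "2011-01-01", 20, 1),
    (3, "2011-01-02", 10, 2),
]
columns = ["TxnNo", "TxnDate", "ItemNo", "Quantity"]

# Column-storage: one list (think "column partition") per column, so all
# TxnNo values come first, then all TxnDate values, and so on.
cp = {name: [row[i] for row in rows] for i, name in enumerate(columns)}

print(cp["TxnNo"])     # [1, 2, 3]
print(cp["Quantity"])  # [5, 1, 2]
```

A query that references only Quantity can then read just that one partition's data instead of scanning whole rows, which is where the I/O savings come from.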

The following adds a level of row partitioning (so the table has multilevel partitioning). All the data for TxnNo for the first day comes first, followed by the next day of data for TxnNo, etc., then all the data for TxnDate for the first day, the second day, etc., ending with the last day of data for Quantity.

CREATE TABLE SalesTxns      -- table name and TxnNo/ItemNo types are assumed
  ( TxnNo INTEGER,
    TxnDate DATE,
    ItemNo INTEGER,
    Quantity INTEGER )
  PARTITION BY ( COLUMN,
    RANGE_N(TxnDate BETWEEN
      DATE '2011-01-01' AND DATE '2011-12-31' EACH INTERVAL '1' DAY) );
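The resulting two-level layout can be sketched in Python (a toy model with invented sample data, not Teradata internals): the column level is outermost, and within each column partition the values are subdivided by day.

```python
from collections import defaultdict

# Toy rows: (TxnNo, TxnDate, Quantity) -- invented sample data.
rows = [
    (1, "2011-01-01", 5),
    (2, "2011-01-01", 1),
    (3, "2011-01-02", 2),
]
columns = ["TxnNo", "TxnDate", "Quantity"]

# Multilevel partitioning: (column, day) keys, so all TxnNo values for
# day 1 come first, then TxnNo for day 2, before any TxnDate data.
parts = defaultdict(list)
for row in rows:
    day = row[1]
    for i, name in enumerate(columns):
        parts[(name, day)].append(row[i])

print(parts[("TxnNo", "2011-01-01")])  # [1, 2]
print(parts[("TxnNo", "2011-01-02")])  # [3]
```

A query restricted to one day and one column then needs to touch only a single small (column, day) partition, combining row-partition elimination with column-partition elimination.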

Insights and real-time scenarios

  • In a NoPI table you can generally update rows in place and Teradata will do normal updates to the rows. But single-row updates have a significant cost because we have to scan to find the row. Thus you probably need an index on the NoPI table if you are going to update single rows (and it will still be more expensive than a PI update, of course).
  • In Teradata Columnar, updated rows are appended to the end of the table and the old copy of the row is logically deleted; deleted rows are likewise only logically deleted. This means that if you do a lot of updating or deleting, over time there will be cause to "garbage collect" the table (in Teradata, via INSERT SELECT). If it is an INSERT-only table, that will not be true. The logical-delete-and-append model is chosen as a tradeoff versus touching all the columns and making updates/deletes very expensive from a write perspective; a similar tradeoff to what other columnar vendors make.
  • On decompression: Teradata block compression decompresses on read, materializing the uncompressed rows in memory. None of the other Teradata compression forms do so, including the compression forms for Teradata Columnar. The optimizer builds a query plan that touches only the containers needed by the query, and those containers are not decompressed to process the query. Thus we get maximum leverage from the compression in terms of saving I/O and memory in cache. The only exception is when block-level compression is used on top of the automatic columnar compression; in that case, the block decompression is performed but the columnar compression remains on the in-memory copy.
  • As mentioned, Teradata Columnar can store some of the columns in row form and the rest in columnar form. The example used is storing the parts of an address together as a row. It is also possible to group less-related columns into row form; we expect a common use case to be grouping high-frequency-of-use columns into row form and using columnar for lower-frequency-of-access columns. This allows the user to trade some amount of compression for more efficient access to the subset of columns that are accessed most frequently.
  • One feature that was missed in the above discussion is that Teradata Columnar can be combined with PPI/MLPPI to have horizontal partitioning and column partitioning simultaneously. This allows Teradata to touch only a small portion of the column containers for a referenced column when an appropriate partitioning expression is available.
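
The append-and-logically-delete update model described in the notes above can be sketched as a toy in-memory Python class (names and structure are invented for illustration; Teradata's actual mechanism is internal to the file system):

```python
class ToyCPTable:
    """Toy model of the CP update/delete behavior: an update appends a
    new copy of the row and logically deletes the old one."""

    def __init__(self):
        self.rows = []                 # [payload, deleted_flag] in insert order

    def insert(self, payload):
        self.rows.append([payload, False])

    def update(self, idx, payload):
        self.rows[idx][1] = True       # logical delete of the old copy
        self.insert(payload)           # new copy appended at the end

    def delete(self, idx):
        self.rows[idx][1] = True       # logical delete only

    def compact(self):
        """Analog of garbage collection via INSERT SELECT: rewrite the
        table keeping only rows that are not logically deleted."""
        self.rows = [[p, False] for p, d in self.rows if not d]

t = ToyCPTable()
t.insert("a"); t.insert("b")
t.update(0, "a2")              # "a" logically deleted, "a2" appended
print(len(t.rows))             # 3 physical slots before garbage collection
t.compact()
print([p for p, _ in t.rows])  # ['b', 'a2']
```

The sketch shows why heavy update/delete workloads accumulate dead space until the table is rewritten, while an INSERT-only table never needs the compaction step.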