NOTE: The Trifacta® data types listed on this page reflect the raw data type of the converted column. Depending on the contents of the column, the Transformer page may re-infer a different data type when a dataset using this type of source is loaded.
When a Hive data type is imported, its JDBC data type is remapped according to the following table.
Tip: Data precision may be lost during conversion. To verify, you can compute min and max values and significant digits for numeric columns in your Hive tables, and then compute the same statistics in the Trifacta application for comparison.
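For example, the following HiveQL sketch collects min, max, and digit-count statistics for numeric columns on the Hive side. The table and column names (sales, amount, qty) are placeholders for your own schema.

```sql
-- Sketch: capture value ranges and precision hints for numeric columns
-- before conversion. Replace `sales`, `amount`, and `qty` with your own
-- table and column names.
SELECT
  MIN(amount)                          AS amount_min,
  MAX(amount)                          AS amount_max,
  MAX(LENGTH(CAST(amount AS STRING)))  AS amount_max_digits,
  MIN(qty)                             AS qty_min,
  MAX(qty)                             AS qty_max
FROM sales;
```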
|Source Data Type|Supported?|Trifacta Data Type|
NOTE: The Trifacta platform may infer bigint columns containing very large or very small values as String data type. If needed, you can disable type inference for individual schematized sources. For more information, see Import Data Page.
NOTE: On import, some float columns may be interpreted as Integer data type in the Trifacta platform. To fix, you can explicitly set the column's data type to Decimal in the Transformer page.
- After a dataset has been imported using custom SQL from Hive, disabling type inference may not revert some columns to their source data types. The workaround is to create a new imported dataset using the same custom SQL, with type inference disabled before import. After the dataset is created, use it as a replacement for the affected instances of the previous Hive dataset.
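If you want the source types to be unambiguous regardless of inference, one option is to make them explicit in the custom SQL itself. The following is a sketch only; the table name, column names, and chosen types are hypothetical.

```sql
-- Sketch: custom SQL for the new imported dataset, with explicit casts so
-- the source types are unambiguous. Table and column names are placeholders.
SELECT
  CAST(order_id   AS BIGINT)          AS order_id,
  CAST(amount     AS DECIMAL(18, 4))  AS amount,
  CAST(created_at AS TIMESTAMP)       AS created_at
FROM orders;
```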
Create new table
NOTE: By default, the maximum length of values published to VARCHAR columns is 256 characters. As needed, this limit can be changed for multiple publication targets. For more information, see Configure Application Limits.
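Before publishing, you may want to check whether any string values would exceed that limit. If your data originates in Hive, a length check such as the following sketch can flag columns that may be truncated; the table and column names are placeholders.

```sql
-- Sketch: find the longest value in a string column to see whether it
-- would exceed a 256-character VARCHAR limit on publication.
-- `customers` and `notes` are placeholder names.
SELECT MAX(LENGTH(notes)) AS longest_notes_value
FROM customers;
```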
|Trifacta Data Type|Hive Data Type|Notes|
NOTE: The Trifacta platform may infer Integer columns containing very large or very small values as String data type. Before you publish, you should verify that your columns containing extreme values are interpreted as Integer type. You can import a target schema to assist in lining up your columns with the expected target. For more information, see Overview of RapidTarget.
|Datetime|Timestamp/string (see Notes on Datetime columns below)|Target data type is based on the underlying data. Time zone information is retained.|
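To illustrate the Datetime row above, a newly created target table might define the corresponding column with Hive's timestamp type, roughly as in this sketch. The table name, column names, and the non-Datetime column mappings are hypothetical; actual mappings depend on your data.

```sql
-- Sketch of a table that publication to Hive might create, with the
-- Datetime column mapped to Hive TIMESTAMP as described above.
-- Table and column names are placeholders.
CREATE TABLE sales_output (
  order_id   BIGINT,        -- hypothetical non-Datetime column
  notes      VARCHAR(256),  -- default 256-character VARCHAR limit (see above)
  created_at TIMESTAMP      -- Datetime column written as timestamp
);
```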
Append to existing table
If you are publishing to a pre-existing table, the following data type conversions apply:
- Columns: Trifacta data types
- Rows: Target table data types
In any table cell, a Y indicates that the append operation for that data type mapping is supported.
NOTE: You cannot append to Hive map and array column types from Trifacta columns of Map and Array type, even if you imported data from this source.
|String|Integer|Datetime|Bool|Float|Map|Array|Out of Range error|
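Before appending, it can help to confirm the target table's column types directly in Hive, so you can compare them against the matrix above. The table name below is a placeholder.

```sql
-- Sketch: inspect the target table's schema before appending, and compare
-- its column types against the append matrix above.
DESCRIBE sales_output;
```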
Notes on Datetime columns
In new tables created for output, Datetime columns are written with the Hive timestamp data type. These columns can be appended.
- Before release 4.2.1, Datetime columns were written to Hive as type String. Jobs that were created in those releases and that write to pre-existing tables continue to behave this way.
- A single job cannot write Datetime values to one table as String type and to another table as Timestamp type. This type of job should be split into multiple jobs. The table schemas may require modification (see the sketch after this list).
The above issue may appear as an error when executing the job.
- When you export pre-generated results to Hive, all new tables created for Datetime column values continue to use the String data type in Hive in Release 4.2.1. These columns can be appended with new String data.
- When you publish results from a job through the Publishing dialog to Hive, all Datetime column values are written as String type.
- If you are appending to a Timestamp column, the exported Datetime column must be in the following format: yyyy-MM-dd HH:mm:ss.xxxx
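For the schema modification mentioned above, one possible approach on the Hive side is to alter the pre-existing table so that the Datetime column is stored as a timestamp. This is a sketch only; the table and column names are placeholders, and whether an in-place type change is appropriate depends on the existing data.

```sql
-- Sketch: change a string column that holds Datetime values to Hive's
-- TIMESTAMP type before appending Timestamp output to it.
-- `legacy_output` and `created_at` are placeholder names.
ALTER TABLE legacy_output CHANGE created_at created_at TIMESTAMP;
```

Note that this ALTER TABLE statement changes only the table metadata; the stored values must already be parseable as timestamps (for example, in the format noted above).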