NOTE: The data types listed on this page reflect the raw data type of the converted column. Depending on the contents of the column, the Transformer page may re-infer a different data type when a dataset using this type of source is loaded.
When a Hive data type is imported, its JDBC data type is remapped according to the following table.
Tip: Data precision may be lost during conversion. You may want to generate min and max values and compute significant digits for values in your Hive tables and then compute the same in the application after import.
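For example, assuming a hypothetical Hive table named orders with numeric columns order_total and item_count, a profiling query like the following captures the ranges to compare against the imported data:

```sql
-- Hypothetical names; substitute your own table and columns.
SELECT
  MIN(order_total) AS min_order_total,
  MAX(order_total) AS max_order_total,
  MIN(item_count)  AS min_item_count,
  MAX(item_count)  AS max_item_count,
  -- Rough indicator of how many characters (digits, sign, decimal point) the widest value uses.
  MAX(LENGTH(CAST(order_total AS STRING))) AS max_chars_order_total
FROM orders;
```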
| Source Data Type | Supported? | Notes |
|---|---|---|
| uniontype | N | |

NOTE: The application may infer bigint columns containing very large or very small values as String data type.

NOTE: On import, some float columns may be interpreted as Integer data type in the application. To fix this, you can explicitly set the column's data type to Decimal in the Transformer page.
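To illustrate the notes above, the following is a minimal sketch of a Hive source table whose columns are most likely to be re-inferred on import. The table and column names are invented for this example.

```sql
-- Hypothetical source table used only for illustration.
CREATE TABLE sensor_readings (
  reading_id   BIGINT,     -- very large or very small values here may be inferred as String on import
  temperature  FLOAT,      -- may be interpreted as Integer; set the column to Decimal in the Transformer page
  recorded_at  TIMESTAMP
);
```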
Create new table
NOTE: By default, the maximum length of values published to VARCHAR columns is 256 characters. As needed, this limit can be changed for multiple publication targets. For more information, see Configure Application Limits.
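If you are unsure whether your data fits within that limit, a simple length check in Hive can flag oversized values before you publish. The table and column names below are hypothetical.

```sql
-- Hypothetical check for values that exceed the default 256-character VARCHAR limit.
SELECT COUNT(*) AS too_long
FROM customer_notes
WHERE LENGTH(note_text) > 256;
```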
NOTE: The application may infer Integer columns containing very large or very small values as String data type. Before you publish, you should verify that your columns containing extreme values are interpreted as Integer type. You can import a target schema to assist in lining up your columns with the expected target. For more information, see Overview of RapidTarget.

| Column Data Type | Hive Data Type | Notes |
|---|---|---|
| Datetime | Timestamp/string (see Notes on Datetime columns below) | Target data type is based on the underlying data. Time zone information is retained. |
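As a rough sketch of the result, a newly created target table that receives a Datetime column might look like the following; the names are hypothetical, and the actual DDL is generated during publication.

```sql
-- Hypothetical shape of a new target table: the Datetime output column is stored
-- as a Hive TIMESTAMP (see Notes on Datetime columns below).
CREATE TABLE published_events (
  event_name STRING,
  event_time TIMESTAMP
);
```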
Append to existing table
If you are publishing to a pre-existing table, the following data type conversions apply:
- Columns: Data types of the columns being appended
- Rows: Target table data types
In any table cell, a Y indicates that the append operation for that data type mapping is supported.
NOTE: You cannot append to Hive map and array column types from columns of Map and Array type, even if you imported data from this source.
| Target Data Type | String | Integer | Datetime | Bool | Decimal | Map | Array | Out of Range error |
|---|---|---|---|---|---|---|---|---|
| INT | | Y | | | | | | NULL |
| BIGINT | | Y | | | | | | n/a |
| TINYINT | | | | | | | | NULL |
| SMALLINT | | | | | | | | NULL |
| DECIMAL | | Y | | | Y | | | NULL |
| DOUBLE | | Y | | | Y | | | n/a |
| FLOAT | | | | | Y | | | NULL |
| TIMESTAMP | | | Y | | | | | |
| BOOLEAN | | | | Y | | | | |
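Where the table lists NULL under Out of Range error, values that do not fit the target type are written as NULL during the append. A quick post-append check, using hypothetical table and column names:

```sql
-- Hypothetical check after appending to an INT column: rows where the value became
-- NULL may have been out of range for the target type.
SELECT COUNT(*) AS null_after_append
FROM target_inventory
WHERE quantity IS NULL;
```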
Notes on Datetime columns
In new tables created for output, Datetime columns are written with the Hive timestamp data type. These columns can be appended.
- Before release 4.2.1, Datetime columns were written to Hive as type String. Jobs that were created in these releases and that write to pre-existing tables continue to behave this way.
- A single job cannot write Datetime values to one table as String type and to another table as Timestamp type. This type of job should be split into multiple jobs. The table schemas may require modification.
- When you export pre-generated results to Hive in Release 4.2.1, all new tables created for Datetime column values continue to store String data type in Hive. These columns can be appended with new String data.
- When you publish results from a job through the Publishing dialog to Hive, all Datetime column values are written as String type.
- If you are appending to a Timestamp column, the exported Datetime column must be in the following format: yyyy-MM-dd HH:mm:ss.xxxx
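As a quick sanity check, a value in this format should cast cleanly to a Hive timestamp; the literal below is only an example.

```sql
-- Example literal in the required yyyy-MM-dd HH:mm:ss.xxxx format.
SELECT CAST('2017-06-01 14:30:00.0000' AS TIMESTAMP) AS ts;
```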