This section covers key known limitations of the product.
NOTE: This list of limitations should not be considered complete.
Sample sizes are defined by a configuration parameter for each available running environment. See Sample Size Limits below.
Some transform steps, such as valuestocols, are configured according to the sample data at the time the step was added, rather than at execution time. For example, all of the values detected in the selected sample when a valuestocols transform step was added are used to determine the columns that the step creates.
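A minimal sketch of why sample-time configuration matters (this is illustrative Python, not product code; the function name values_to_cols and the sample data are hypothetical):

```python
# Illustration: deriving output columns from a sample at design time can
# drop values that appear only in the full dataset at execution time.

def values_to_cols(rows, key, value, columns):
    """Pivot `value` into one column per entry in `columns` (a simple
    values-to-columns transform). Keys absent from `columns` are dropped."""
    out = []
    for row in rows:
        pivoted = {c: None for c in columns}
        if row[key] in columns:
            pivoted[row[key]] = row[value]
        out.append(pivoted)
    return out

sample = [{"status": "new", "n": 1}, {"status": "open", "n": 2}]
full = sample + [{"status": "closed", "n": 3}]  # "closed" is not in the sample

# Columns are fixed from the sample when the step is added...
columns = sorted({r["status"] for r in sample})  # ['new', 'open']

# ...so at execution time the unseen "closed" value yields no column.
result = values_to_cols(full, "status", "n", columns)
```

Here the third row's "closed" status is silently discarded because the column set was frozen from the sample, mirroring the limitation described above.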
The product supports a variety of global file encoding types for import. For more information, see Configure Global File Encoding Type.
States and Zip Code Column Types and the corresponding maps in visual profiling apply only to the United States.
UTF-32 encoding is not supported.
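If source files are stored in UTF-32, they can be converted to a supported encoding such as UTF-8 before import. A minimal sketch (illustrative Python, not product code):

```python
# Illustration: re-encode UTF-32 content as UTF-8 before import.
text = "zip code 90210"

utf8_bytes = text.encode("utf-8")
utf32_bytes = text.encode("utf-32")

# UTF-32 uses 4 bytes per character plus a 4-byte byte-order mark,
# so this ASCII-only string is 4 * 14 + 4 = 60 bytes in UTF-32.
converted = utf32_bytes.decode("utf-32").encode("utf-8")
```

The round trip through decode/encode produces bytes identical to encoding the text as UTF-8 directly.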
NOTE: Some functions do not correctly account for multi-byte characters. Multi-byte metadata values may not be consistently managed.
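A minimal sketch of the kind of inconsistency multi-byte characters can cause (illustrative Python, not product code): functions that count or slice by bytes disagree with functions that work on characters.

```python
# Illustration: byte-based length and truncation mishandle multi-byte UTF-8.
s = "café"                          # 4 characters; "é" is 2 bytes in UTF-8

char_len = len(s)                   # character count: 4
byte_len = len(s.encode("utf-8"))   # byte count: 5

# Truncating by bytes can split a multi-byte character in half:
truncated = s.encode("utf-8")[:4]   # ends mid-way through "é"
# truncated.decode("utf-8") would raise UnicodeDecodeError here.
recovered = truncated.decode("utf-8", errors="replace")  # "caf\ufffd"
```

Any function that assumes one byte per character exhibits this class of error, which is why multi-byte metadata values may not be consistently managed.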
Default sample sizes for each running environment are listed under Sample Size Limits below.
Execution on a Hadoop running environment is recommended for any files over 5GB in net data size, including join keys.
The product requires definition of a base storage layer, which can be HDFS or S3 for this version. This base storage layer must be defined during install and cannot be changed after installation. See Set Base Storage Layer.
High availability for Hive is not supported.
NOTE: High Availability for Hive is automatically supported when the product is installed on Azure and integrated with an HDI cluster. See Configure for Azure.