Tip: You may import multiple worksheets from a single workbook at one time. See Import Excel Data in the User Guide.
- JSON, including nested
requires that JSON files be submitted with one valid JSON object per line. Malformed JSON objects, or objects that span multiple lines, may cause the import to fail. See Initial Parsing Steps in the User Guide.
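The one-object-per-line requirement above can be checked before upload. The following sketch (a hypothetical helper, not part of the product) reports lines that are not a single complete JSON object:

```python
import json

def find_invalid_ndjson_lines(path):
    """Return line numbers that are not one complete JSON object.

    Each non-empty line must parse as valid JSON on its own, and each
    record is expected to be a JSON object (a dict), not a bare array
    or scalar.
    """
    bad_lines = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # blank lines are simply skipped here
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                bad_lines.append(lineno)  # malformed or split across lines
                continue
            if not isinstance(record, dict):
                bad_lines.append(lineno)  # valid JSON but not an object
    return bad_lines
```

An empty result means every line holds exactly one valid JSON object, which is the shape the importer expects.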
- Plain Text
NOTE: When working with datasets sourced from Parquet files, lineage information and the $sourcerownumber reference are not supported.
NOTE: XML files can be ingested as unstructured text. XML support is not enabled by default. For more information, please contact Professional Services.
For more information on how data is handled initially, see Initial Parsing Steps in the User Guide.
NOTE: The Photon and Spark running environments apply Snappy compression to this format.
NOTE: Importing a compressed file with a high compression ratio can overload the available memory for the application. In such cases, you can decompress the file before uploading. If decompression fails, you should contact your administrator about increasing the Java Heap Size memory.
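As a sketch of the workaround above, a compressed file can be decompressed locally before upload. This example uses only the Python standard library and streams the data so the whole file is never held in memory; the function name and paths are illustrative:

```python
import gzip
import shutil

def decompress_gzip(src_path, dest_path):
    """Stream-decompress a .gz file so the uncompressed file can be
    uploaded instead of the compressed one.

    shutil.copyfileobj copies in fixed-size chunks, so memory use stays
    constant regardless of the compression ratio or file size.
    """
    with gzip.open(src_path, "rb") as fin, open(dest_path, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```

The same pattern works for BZIP2 input by swapping `gzip.open` for `bz2.open`.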
NOTE: Publication of results in Snappy format may require additional configuration. See below.
NOTE: GZIP files on Hadoop are not split across multiple nodes. As a result, jobs can crash when such a file is processed through a single Hadoop task. This is a known issue with GZIP on Hadoop.
Where possible, limit the size of your GZIP files to 100 MB or less, or use BZIP2 as an alternative compression method. As a workaround, you can try to run the job on the unzipped file. You may also disable profiling for the job. See Run Job Page in the User Guide.
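One way to apply the BZIP2 alternative suggested above is to recompress an existing GZIP file as BZIP2, which Hadoop can split across tasks. This is a standard-library sketch with an illustrative function name, again streaming to keep memory use flat:

```python
import bz2
import gzip
import shutil

def gzip_to_bzip2(src_path, dest_path):
    """Recompress a .gz file as .bz2.

    BZIP2 is a splittable format on Hadoop, so a large recompressed file
    can be processed by multiple tasks instead of a single one.
    """
    with gzip.open(src_path, "rb") as fin, bz2.open(dest_path, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```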
Read Native File Formats
Write Native File Formats
|Format|Support|
|---|---|
|Avro|Supported; always on|
|Hive|Supported; always on|
Additional Configuration for File Format Support