This section contains information on the file formats and compression schemes that are supported for input to and output of the product.

NOTE: To work with formats that are proprietary to a desktop application, such as Microsoft Excel, you do not need the supporting application installed on your desktop.
Native Input File Formats

- Excel (XLS/XLSX)
  Tip: You may import multiple worksheets from a single workbook at one time. See Import Excel Data.
- CSV
- JSON, including nested
  NOTE: The product requires that JSON files be submitted with one valid JSON object per line. Consistently malformed JSON objects or objects that overlap linebreaks might cause import to fail. See Initial Parsing Steps.
- Plain Text
- LOG
- TSV
- Parquet
  NOTE: When working with datasets sourced from Parquet files, lineage information and the $sourcerownumber reference are not supported.
- XML
  NOTE: XML files can be ingested as unstructured text. XML support is not enabled by default. For more information, please contact professional services.
- Avro

For more information on how data is handled initially, see Initial Parsing Steps.
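The one-valid-JSON-object-per-line requirement noted above can be checked before import. The following is a minimal pre-check sketch, not part of the product itself; the function name and file path are illustrative:

```python
import json

def validate_ndjson(path):
    """Check that each non-blank line of a file is one valid JSON object.

    Returns a list of (line_number, error) pairs; an empty list means
    the file meets the one-object-per-line requirement.
    """
    errors = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # blank lines carry no object
            try:
                obj = json.loads(line)
            except json.JSONDecodeError as exc:
                errors.append((lineno, str(exc)))
                continue
            if not isinstance(obj, dict):
                errors.append((lineno, "line is valid JSON but not an object"))
    return errors
```

An object that overlaps a linebreak shows up here as a decode error on each of its fragment lines, which is the same condition that can cause import to fail.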
Native Output File Formats

- CSV
- JSON
- Tableau (TDE)
  NOTE: Publication of results in TDE format may require additional configuration. See below.
- Avro
  NOTE: The Photon and Spark running environments apply Snappy compression to this format.
- Parquet
  NOTE: The Photon and Spark running environments apply Snappy compression to this format.
Compression Algorithms

NOTE: Importing a compressed file with a high compression ratio can overload the available memory for the application. In such cases, you can decompress the file before uploading. If decompression fails, you should contact your administrator about increasing the Java heap size.

NOTE: Publication of results in Snappy format may require additional configuration. See below.

NOTE: GZIP files on Hadoop are not split across multiple nodes. As a result, jobs can crash when a GZIP file is processed through a single Hadoop task. This is a known issue with GZIP on Hadoop. Where possible, limit the size of your GZIP files to 100 MB or less, or use BZIP2 as an alternative compression method. As a workaround, you can try to run the job on the unzipped file. You may also disable profiling for the job. See Run Job Page.
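The 100 MB guidance above can be applied at upload time by splitting a large file into parts and compressing each part as its own GZIP file, so that a Hadoop job can read one part per task. The following is a sketch using only the Python standard library; the function name, paths, and chunking strategy are illustrative, not a product feature:

```python
import gzip
import os

CHUNK_BYTES = 100 * 1024 * 1024  # keep each GZIP part at or under ~100 MB

def gzip_in_parts(src_path, out_dir, chunk_bytes=CHUNK_BYTES):
    """Split src_path into chunks and gzip each chunk as a separate file.

    Each part decompresses independently, so downstream processing can
    assign one part per task instead of forcing one task to read all of it.
    Returns the list of part file paths.
    """
    os.makedirs(out_dir, exist_ok=True)
    parts = []
    base = os.path.basename(src_path)
    with open(src_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_bytes)
            if not chunk:
                break
            part_path = os.path.join(out_dir, f"{base}.part{index:04d}.gz")
            with gzip.open(part_path, "wb") as out:
                out.write(chunk)
            parts.append(part_path)
            index += 1
    return parts
```

Note that splitting on raw byte boundaries can cut a record in half at a part boundary; for line-oriented formats such as CSV, split on line boundaries instead (for example, buffer whole lines until the chunk size is reached).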
Read Native File Formats

| | GZIP | BZIP | Snappy |
|---|---|---|---|
| CSV | Supported | Supported | Supported |
| JSON | Supported | Supported | Supported |
| Avro | | | Supported |
| Hive | | | Supported |
Write Native File Formats

| | GZIP | BZIP | Snappy |
|---|---|---|---|
| CSV | Supported | Supported | Supported |
| JSON | Supported | Supported | Supported |
| Avro | | | Supported; always on |
| Hive | | | Supported; always on |
Additional Configuration for File Format Support

Publication of some formats requires execute permissions

When job results are generated and published in the following formats, the platform must execute a binary that is extracted to the /tmp directory on the node. In many environments, execute permissions are disabled on /tmp for security reasons. Use the steps below to specify the temporary directory to which this binary can be moved and executed.
Steps:

- Log in to the application as an administrator.
- From the menu, select Settings menu > Settings > Admin Settings.
- For each of the following file formats, locate the listed parameter and add the setting that specifies where the related binary code can be executed:

  | File Format | Parameter | Setting to Add |
  |---|---|---|
  | Snappy | "data-service.jvmOptions" | -Dorg.xerial.snappy.tempdir=&lt;some executable directory&gt; |
  | TDE | "batch-job-runner.jvmOptions" | -Djna.tmpdir=&lt;some executable directory&gt; |

- Save your changes and restart the platform.
- Run a job configured for direct publication of the modified file format.
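For illustration, if execution is permitted in a directory such as /opt/app/tmp (a hypothetical path; substitute a directory that exists on your node and is writable by the platform), the two settings would carry JVM flags of this form. The exact syntax of the value field depends on your deployment:

```
data-service.jvmOptions:    -Dorg.xerial.snappy.tempdir=/opt/app/tmp
batch-job-runner.jvmOptions: -Djna.tmpdir=/opt/app/tmp
```

Both flags point the respective service at the same executable directory; using one shared directory simplifies the permission change, but separate directories also work.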