This section contains information on the file formats and compression schemes that are supported for input to and output of the product.
NOTE: To work with formats that are proprietary to a desktop application, such as Microsoft Excel, you do not need the supporting application installed on your desktop.
NOTE: Filenames that include special characters can cause problems during import or when publishing to a file-based datastore.
Forbidden characters in import filenames:
The following characters cause issues in the listed areas of the product. If you encounter a problem, these listings may help you identify where it occurred.
Tip: You should avoid using any of these characters in your import filenames. This list may not be complete for all available running environments.
General:
"/"
Web browser:
"\"
Excel filenames:
"#"
Spark-based running environment:
"{", "*", "\"
The product can read and import these file formats directly:
JSON v1, including nested
NOTE: JSON files can be read natively but often require additional work to properly structure them into tabular format. See the notes on JSON below.
Parquet
NOTE: When working with datasets sourced from Parquet files, lineage information and the
Avro
XML
Tip: XML files can be ingested as unstructured text.
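Since XML is ingested as unstructured text, one option is to flatten simple record-style XML into delimited rows before import. A sketch, assuming a hypothetical `<orders>`/`<order>` structure (the helper name is ours):

```python
# Sketch: flatten simple record-style XML into CSV text before import.
# Assumes one level of child elements per record; real XML often needs
# more careful handling of attributes and nesting.
import csv
import io
import xml.etree.ElementTree as ET

def xml_records_to_csv(xml_text, record_tag):
    root = ET.fromstring(xml_text)
    records = [
        {child.tag: (child.text or "") for child in rec}
        for rec in root.iter(record_tag)
    ]
    fieldnames = sorted({key for rec in records for key in rec})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()
```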
For more information on how data is handled initially, see Initial Parsing Steps in the User Guide.
Files of the following type are not read into the product in their native format. Instead, these file types are converted using the Conversion Service into a file format that is natively supported, stored in the base storage layer, and then ingested for use in the product.
NOTE: Compressed files that require conversion of the underlying file format are not supported for use in the product.
Converted file formats:
Excel (XLS/XLSX)
NOTE: Other Excel-related formats, such as XLSM format, are not supported.
Tip: You may import multiple worksheets from a single workbook at one time. See Import Excel Data in the User Guide.
PDF
NOTE: PDF support may need to be enabled in your environment. See Import PDF Data.
Notes on JSON:
There are two methods of ingesting JSON files for use in the product.
JSON v2 - This newer version reads the JSON source file through the Conversion Service, which stores a restructured version of the data in tabular format on the base storage layer for quick and simple use within the application.
Tip: This method is enabled by default and is recommended. For more information, see Working with JSON v2.
JSON v1 - This older version reads JSON files directly into the platform as text files. However, this method often requires additional work to restructure the data into tabular format. For more information, see Working with JSON v1.
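The kind of restructuring that JSON v1 leaves to the user can be sketched as follows. The dotted column-name scheme and the `jsonl_to_rows` helper are illustrative, not the product's actual behavior:

```python
# Sketch: flatten one nested JSON record per line into flat
# column/value pairs suitable for a tabular import.
import json

def flatten(record, prefix=""):
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, joining keys with dots.
            flat.update(flatten(value, f"{name}."))
        else:
            flat[name] = value
    return flat

def jsonl_to_rows(text):
    return [flatten(json.loads(line)) for line in text.splitlines() if line.strip()]
```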
The product can write to these file formats:
Tableau Hyper
NOTE: Publication of results in Hyper format may require additional configuration. See below.
Avro
Parquet
When a file is imported, the product attempts to infer the compression algorithm in use based on the filename extension. For example, .gz files are assumed to be compressed with GZIP.
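That inference can be sketched as a simple extension-to-codec lookup. The map below is illustrative, mirroring the formats discussed in this section, and is not the product's exact internal table:

```python
# Sketch: infer the compression codec from the filename extension.
# The extension map is illustrative, not exhaustive.
EXTENSION_TO_CODEC = {
    ".gz": "GZIP",
    ".bz2": "BZIP2",
    ".sz": "Snappy (framing2)",
    ".snappy": "Snappy (Hadoop)",
}

def infer_codec(filename):
    for ext, codec in EXTENSION_TO_CODEC.items():
        if filename.lower().endswith(ext):
            return codec
    return None  # no known extension: treated as uncompressed
```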
NOTE: Import of a compressed file whose underlying format requires conversion through the Conversion Service is not supported.
NOTE: Importing a compressed file with a high compression ratio can overload the available memory for the application. In such cases, you can decompress the file before uploading. If decompression fails, you should contact your administrator about increasing the Java Heap Size memory.
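Decompressing locally before upload can be done with standard tools. A Python sketch that streams the file so the whole archive is never held in memory (the function name is ours):

```python
# Sketch: decompress a GZIP file locally before upload, streaming in
# chunks so memory use stays flat regardless of the compression ratio.
import gzip
import shutil

def decompress_gzip(src, dst):
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```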
NOTE: Publication of results in Snappy format may require additional configuration. See below.
NOTE: GZIP files on Hadoop are not split across multiple nodes. As a result, jobs can crash when such a file is processed through a single Hadoop task. This is a known issue with GZIP on Hadoop. Where possible, limit the size of your GZIP files to 100 MB or less, or use BZIP2 as an alternative compression method. As a workaround, you can try to run the job on the unzipped file. You may also disable profiling for the job. See Run Job Page in the User Guide.
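The size-limiting workaround can be sketched as follows: split a large file into chunks and recompress each with BZIP2, which (unlike GZIP) Hadoop can split across tasks. The `split_and_bzip2` helper is illustrative; note that splitting at raw byte boundaries can cut records mid-line, so a production version should split on record boundaries:

```python
# Sketch: split a large file into chunks and recompress each with BZIP2.
# The 100 MB default comes from the note above. Splitting on raw byte
# boundaries can cut records in half; adjust to split on newlines for CSV.
import bz2

def split_and_bzip2(src, chunk_bytes=100 * 1024 * 1024):
    parts = []
    with open(src, "rb") as fin:
        index = 0
        while True:
            chunk = fin.read(chunk_bytes)
            if not chunk:
                break
            part = f"{src}.part{index:04d}.bz2"
            with bz2.open(part, "wb") as fout:
                fout.write(chunk)
            parts.append(part)
            index += 1
    return parts
```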
Read compression support:

| Format | GZIP | BZIP | Snappy | Notes |
|---|---|---|---|---|
| CSV | Supported | Supported | Supported | |
| JSON v2 | Not supported | Not supported | Not supported | A converted file format. See above. |
| JSON v1 | Supported | Supported | Supported | Not a converted file format. See above. |
| Avro | Supported | | | |
| Hive | Supported | | | |
Write compression support:

| Format | GZIP | BZIP | Snappy |
|---|---|---|---|
| CSV | Supported | Supported | Supported |
| JSON | Supported | Supported | Supported |
| Avro | Supported; always on | | |
| Hive | Supported; always on | | |
The product supports the following variants of the Snappy compression format:
| File extension | Format name | Notes |
|---|---|---|
| .sz | Framing2 format | See: https://github.com/google/snappy/blob/master/framing_format.txt |
| .snappy | Hadoop-snappy format | See: https://code.google.com/p/hadoop-snappy/ |
When job results are generated and published in the following formats, the product includes a JAR, from which a binary executable is extracted into a temporary directory. The binary is then executed from this directory to generate the results in the proper format. By default, this directory is set to /tmp. In many environments, execute permissions are disabled on /tmp for security reasons. Use the steps below to specify the temporary directory where this binary can be moved and executed.
Steps:
For each of the following file formats, locate the listed parameter and add the setting that specifies a directory where the related binary can be executed:
| File Format | Parameter | Setting to Add |
|---|---|---|
| Snappy | "data-service.jvmOptions" | -Dorg.xerial.snappy.tempdir=&lt;some executable directory&gt; |
| Hyper | See previous. | See previous. |
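For example, if your deployment stores platform settings in a JSON configuration file (the file name and layout vary by deployment, and the directory path below is hypothetical), the Snappy entry might look like:

```json
"data-service.jvmOptions": [
  "-Dorg.xerial.snappy.tempdir=/opt/executable-tmp"
]
```

The directory you choose must exist, be writable by the platform service account, and be mounted with execute permissions.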
Save your changes and restart the platform.
Run a job configured for direct publication of the modified file format.