When a job has successfully completed, you can export the results through one of the following methods:

NOTE: You cannot publish ad-hoc results for a job while another publishing job for the same job is in progress, whether initiated through the application or the command line interface. Wait until the previous publishing job has completed before retrying the failed publication. This is a known issue.

NOTE: If you run a job and then export the results to a relational target through this window, Datetime columns are written to the relational table as String values. Direct publication of Datetime columns as part of the job definition writes the output in the designated target data type. For more information, see Type Conversions.
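
If a downstream consumer needs true datetime values, you can parse the published String column back into a datetime type after the fact. The following is a minimal sketch in Python using pandas and SQLAlchemy; the connection string, table name (results_table), and column name (order_date) are placeholders for your own environment, not names produced by the product.

    # Minimal sketch: re-parse a Datetime column that was written as a
    # String during export to a relational target. The connection string,
    # table name, and column name are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@host:5432/analytics")

    # The exported Datetime column arrives as a String in the relational table.
    df = pd.read_sql_table("results_table", engine)

    # Convert the String column back to a proper datetime type downstream.
    df["order_date"] = pd.to_datetime(df["order_date"])
    print(df.dtypes)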

NOTE: If you run a job with a single relational target and it fails at the publication step, you cannot publish the transformation job through the Export Results window.

Export Results window

There are multiple ways to export the results.

NOTE: You can only publish to destinations that are applicable to your base storage layer. For example, results written to HDFS can be published to Hive but cannot be published to Redshift. The base storage layer cannot be modified after installation. See Set Base Storage Layer.

Direct file download

NOTE: If none of these options is available, data download may have been disabled by an administrator. See Admin Settings Page.

Publish to HDFS

When a job is run on Hadoop, results are published to the specified locations on HDFS.

Tip: These locations are listed with the job results on the Jobs page. If another analytics tool consumes these results, you may want to copy the locations from this window.
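
If another tool needs to pick up these files, you can confirm the published location directly on HDFS. Below is a minimal sketch in Python that shells out to the standard hdfs CLI; the output path is a placeholder that you would replace with the location copied from the job results.

    # Minimal sketch: confirm that job results exist at the HDFS location
    # reported on the Jobs page. The path below is a placeholder; copy the
    # actual location from your job results.
    import subprocess

    output_path = "/user/results/job_output"

    # List the published files using the standard hdfs CLI.
    listing = subprocess.run(
        ["hdfs", "dfs", "-ls", output_path],
        capture_output=True, text=True, check=True,
    )
    print(listing.stdout)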

Create Dataset

Optionally, you can turn your generated results into new datasets for immediate use in the application. Select the format of the new dataset and click Create.

NOTE: If you generated results in Parquet format only, you cannot create a dataset from them, even if the Create button is present. This is a known issue.

NOTE: When you create a new dataset as part of your job results, the file or files are written to the designated output location for your user account. Depending on how your backend datastore permissions are configured, this location may not be accessible to other users.

After the new output has been written, you can create new recipes from it. See Build Sequence of Datasets.

Publish to Hive

NOTE: If you created a publishing action to deliver results to Hive as part of this job definition, the Hive tab identifies the database and table where the results were written. Any available options here are for ad-hoc publishing of results to Hive.

If you have enabled publishing to Hive, you can specify the database and table to which you would like to publish results.

NOTE: When launching the job, you must choose to generate results in Avro or Parquet format to publish to Hive. If you are publishing a wide dataset to Hive, you should generate results using Parquet.

NOTE: Some data types may be exported to Hive as different data types. For more information on how types are exported to Hive, see Hive Data Type Conversions.
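
Once publication completes, you can confirm how column types landed in Hive. Below is a minimal sketch using PySpark with Hive support; the database and table names (results_db.job_results) are placeholders for wherever you published.

    # Minimal sketch: inspect the schema of a published Hive table to see
    # how types were converted. The database and table names are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("verify-hive-publish")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Print the column names and Hive data types of the published table.
    spark.table("results_db.job_results").printSchema()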

Administrators can connect the application to an available instance of Hive. For more information, see Configure for Hive.

Hive Publishing Options

Options:

Data Option:

If you are publishing to a pre-existing table, schema validation is automatically performed.

To export the job results to the designated Hive table, click Publish. Publication happens in the background as a job. You can track status in the Jobs page. See Jobs Page.

Publish to Redshift

If you have enabled publishing to Redshift, you can specify the database, schema, and table to which you would like to publish results.

Notes:

Administrators can connect the application to an available instance of Redshift. For more information, see Create Redshift Connections.
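
After publication, you can verify the results from the Redshift side. Below is a minimal sketch using psycopg2, which works against Redshift's PostgreSQL-compatible endpoint; all connection details, the schema, and the table name are placeholders for your environment.

    # Minimal sketch: confirm that ad-hoc publication to Redshift landed in
    # the expected schema and table. All connection details and names are
    # placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="example.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="publisher",
        password="secret",
    )
    with conn.cursor() as cur:
        # Count the rows written to the published table.
        cur.execute("SELECT COUNT(*) FROM public.job_results;")
        print(cur.fetchone()[0])
    conn.close()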

Publish to SQL DW

To publish to Microsoft SQL DW storage, specify the following information.

NOTE: Publishing to Microsoft SQL DW requires deployment of the application on Azure and a base storage layer of WASB. For more information, see Configure for Azure.

NOTE: Results must be in Parquet format to publish to SQL DW.
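
Because SQL DW publication requires Parquet, it can be useful to confirm that a generated results file is valid Parquet before publishing. Below is a minimal sketch using pyarrow; the file path is a placeholder for a file retrieved from your output location.

    # Minimal sketch: verify that a generated results file is readable
    # Parquet before publishing it to SQL DW. The path is a placeholder.
    import pyarrow.parquet as pq

    # read_schema raises an error if the file is not valid Parquet.
    schema = pq.read_schema("job_results.parquet")
    print(schema)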

Options:

Data Option:

If you are publishing to a pre-existing table, schema validation is automatically performed.

Publish to Tableau

If you have created a Tableau Server connection, you can export results that have been generated in TDE format to the connected server.

NOTE: Generated results must be in TDE format for export.

NOTE: If you encounter errors generating results in TDE format, additional configuration may be required. See Supported File Formats.
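
After exporting, you can confirm that the data source arrived on Tableau Server. Below is a minimal sketch using the tableauserverclient library; the server URL, credentials, and site are placeholders, and your authentication method may differ.

    # Minimal sketch: list data sources on Tableau Server to confirm that
    # an exported TDE arrived. Server URL, credentials, and site are
    # placeholders.
    import tableauserverclient as TSC

    tableau_auth = TSC.TableauAuth("publisher", "secret", site_id="")
    server = TSC.Server("https://tableau.example.com")

    with server.auth.sign_in(tableau_auth):
        datasources, _ = server.datasources.get()
        for ds in datasources:
            print(ds.name)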

Options:

Data Option:

If you are publishing to a pre-existing table, schema validation is automatically performed.