When a job has successfully completed, you can publish your job results to one of your connected datastores. In the Job Details page, click the Output Destinations tab. Then, click Publish.

NOTE: You cannot publish ad-hoc results for a job while another publishing operation for the same job is in progress through the application. Wait until the previous publishing operation completes before retrying. This is a known issue.


NOTE: If you run a job and then export the results to a relational target, Datetime columns are written to the relational table as String values. Direct publication of Datetime columns writes the output using the designated target data type. For more information, see Type Conversions.
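
If downstream tooling needs native datetime values after an export of this kind, the String column can be cast back. The following is a minimal sketch using pandas; the file name, column name, and CSV format are hypothetical and not part of the product.

    import pandas as pd

    # Minimal sketch: restore a Datetime column that was exported as String.
    # "exported_results.csv" and "order_date" are placeholder names.
    df = pd.read_csv("exported_results.csv")
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    print(df.dtypes)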


NOTE: If you run a job with a single relational target and it fails at the publication step, you cannot publish the transformation job through the Export Results window.


NOTE: JSON-formatted files generated by the platform are rendered in JSON Lines format, a single-line-per-record variant of JSON. For more information, see http://jsonlines.org.
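
Because each record occupies its own line, JSON Lines output can be parsed line by line instead of as one document. A minimal sketch; the file name results.json is a placeholder.

    import json

    # JSON Lines: one complete JSON object per line, parsed independently.
    with open("results.json") as f:
        records = [json.loads(line) for line in f if line.strip()]

    print(len(records), "records loaded")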




Figure: Publishing dialog


Publish to Cloudera Navigator

NOTE: This feature must be enabled in your environment. For more information, see Configure Publishing to Cloudera Navigator.

If you have enabled the platform to integrate with Cloudera Navigator, metadata information is automatically published to Cloudera Navigator.

NOTE: When Cloudera Navigator publishing is enabled, the platform automatically attempts to publish to Navigator when the job completes. If that attempt succeeds, no additional publishing is necessary. These links may not be immediately available in Cloudera Navigator, which refreshes on a predefined polling interval.

Locating your job metadata in Cloudera Navigator:

  1. In the Job Details page, acquire the job ID from the Job summary in the Overview tab.
  2. Log in to Navigator.
  3. Search for trifacta.<jobId>.
  4. If the job completed successfully and Navigator has been able to poll the platform for new job results, you should see individual entries for each sub-job of the job that completed:

    Sub-job identifier                          Description
    trifacta.<jobId>.wrangle.<subJobId1>        Link to metadata for the transformation job that was executed in the running environment.
    trifacta.<jobId>.filewriter.<subJobId2>     Link to metadata for the job that wrote the generated results to the targeted datastore.


  5. Click any of these links to review metadata details about the job.
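
If you prefer to locate these entries programmatically, Cloudera Navigator also exposes a REST search API over its entities. The sketch below is illustrative only: the host, port, API version, credentials, and job ID are all placeholders for your deployment's values.

    import requests

    # Hypothetical Navigator host, API version, and credentials.
    NAV_ENTITIES = "http://navigator-host:7187/api/v14/entities"
    job_id = "1234"  # job ID from the Job Details page

    resp = requests.get(
        NAV_ENTITIES,
        params={"query": "trifacta." + job_id + "*"},  # matches wrangle and filewriter sub-jobs
        auth=("nav_user", "nav_password"),
    )
    resp.raise_for_status()
    for entity in resp.json():
        print(entity.get("originalName"), "-", entity.get("type"))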

Publish to Hive

NOTE: If you created a publishing action to deliver results to Hive as part of this job definition, the Hive tab identifies the database and table where the results were written. Any available options are for ad-hoc publishing of results to Hive.

If you have enabled publishing to Hive, you can specify the database and table to which you would like to publish results.

NOTE: When launching the job, you must choose to generate results in Avro or Parquet format to publish to Hive. If you are publishing a wide dataset to Hive, you should generate results using Parquet.


NOTE: Some data types may be exported to Hive as different data types. For more information on how types are exported to Hive, see Hive Data Type Conversions.

Administrators can connect the platform to an available instance of Hive. For more information, see Configure for Hive.

Hive publishing options:

Data Option:

If you are publishing to a pre-existing table, schema validation is automatically performed.

To export the job results to the designated Hive table, click Publish. Publication happens in the background as a separate job. You can track its status in the Jobs page. See Jobs Page.
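
After the publishing job finishes, you can verify the new table from any Hive client. Below is a minimal sketch using the PyHive library; the host, database, and table names are placeholders, so substitute the values shown in the Hive tab.

    from pyhive import hive

    # Placeholder host, database, and table names.
    conn = hive.connect(host="hive-server-host", port=10000, database="default")
    cursor = conn.cursor()

    # Confirm the published schema.
    cursor.execute("DESCRIBE my_published_table")
    for column_name, column_type, _comment in cursor.fetchall():
        print(column_name, column_type)

    # Confirm the row count.
    cursor.execute("SELECT COUNT(*) FROM my_published_table")
    print("rows:", cursor.fetchone()[0])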

Publish to SQL DW

To publish to Microsoft SQL DW, specify the following information.

NOTE: Publishing to Microsoft SQL DW requires deployment of the platform on Azure and a base storage layer of WASB. For more information, see Configure for Azure.


NOTE: Results must be in Parquet format to publish to SQL DW.

Options:

Data Option:

If you are publishing to a pre-existing table, schema validation is automatically performed.
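
Schema validation compares the columns in your results against the target table. If you want to inspect the target table's schema yourself before publishing, something along these lines works against SQL DW via pyodbc; the connection string and table name are placeholders, not values from the product.

    import pyodbc

    # Placeholder connection string and table name.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=mydw;"
        "UID=publisher;PWD=secret"
    )
    cursor = conn.cursor()
    cursor.execute(
        "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = ?",
        "my_target_table",
    )
    for name, dtype in cursor.fetchall():
        print(name, dtype)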

Publish to Redshift

If you have enabled publishing to Redshift, you can specify the database, schema, and table to which you would like to publish results.

Notes:

Administrators can connect the platform to an available instance of Redshift. For more information, see Create Redshift Connections.
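
Once publishing completes, you can confirm the results from any Redshift client. A minimal sketch using psycopg2; the cluster endpoint, database, schema, and table names are placeholders.

    import psycopg2

    # Placeholder cluster endpoint, database, schema, and table names.
    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="publisher",
        password="secret",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM my_schema.my_published_table")
        print("rows:", cur.fetchone()[0])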

Publish to Tableau

If you have created a Tableau Server connection, you can export results that have been generated in TDE format to the connected server.

NOTE: Generated results must be in TDE format for export.


NOTE: If you encounter errors generating results in TDE format, additional configuration may be required. See Supported File Formats.

Options:

Data Option:

If you are publishing to a pre-existing table, schema validation is automatically performed.
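
If you need to push a generated TDE file to Tableau Server outside the application, the tableauserverclient library supports publishing data sources. The sketch below is an assumption-laden example, not the product's export path: the server URL, credentials, project selection, and file path are all placeholders.

    import tableauserverclient as TSC

    # Placeholder server URL, credentials, project, and file path.
    auth = TSC.TableauAuth("publisher", "secret", site_id="")
    server = TSC.Server("https://tableau.example.com", use_server_version=True)

    with server.auth.sign_in(auth):
        all_projects, _pagination = server.projects.get()
        datasource = TSC.DatasourceItem(all_projects[0].id)  # first project, for illustration
        item = server.datasources.publish(
            datasource, "results.tde", mode=TSC.Server.PublishMode.Overwrite
        )
        print("published:", item.name)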