When a job has successfully completed, you can publish your job results to one of your connected datastores. In the Job Details page, click the Output Destinations tab. Then, click Publish.

Limitations

  • You cannot publish ad-hoc results for a job while another publishing job for the same job is in progress through the application. This is a known issue; wait until the previous publishing job completes before retrying the failed publish.

JSON-formatted files that are generated by Trifacta Wrangler Pro are rendered in JSON Lines format, a newline-delimited variant of JSON in which each line is one complete record. For more information, see http://jsonlines.org.
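For example, a three-record JSON Lines output can be read back one record per line (a minimal sketch; the field names and values are illustrative, not actual job output):

```python
import json

# Each line of a JSON Lines file is a complete, self-contained JSON record.
jsonl_output = (
    '{"id": 1, "city": "Austin"}\n'
    '{"id": 2, "city": "Boston"}\n'
    '{"id": 3, "city": "Chicago"}\n'
)

# Parse one record per line; blank lines are skipped.
records = [json.loads(line) for line in jsonl_output.splitlines() if line.strip()]

print(len(records))        # number of records
print(records[0]["city"])  # field access on the first record
```

Because records are delimited by newlines rather than wrapped in a top-level array, a consumer can stream a large output file line by line instead of loading it whole.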


Publish to Tableau Server

If you have created a Tableau Server connection, you can publish generated results to the connected server.

Supported Formats:

  • Hyper: Results are written to your Tableau Server in Hyper format.

Options:

  • Connection: If you have created multiple connections to Tableau Server, please select the connection to use from the list.
  • Project Name: Name of the Tableau Server project.
  • Datasource Name: Name of the Tableau Server datasource. This value is displayed for selection in Tableau Server.

Data Option:

If you are publishing to a pre-existing datasource, schema validation is automatically performed.

  • Create new datasource: The platform creates the datasource and then loads it with the results from this job. If you attempt to use this option on a source that already exists, the publishing job fails, and an error is generated in the log.
  • Append data to existing datasource: The results from this job are appended to the data that is already stored in Tableau Server. If you attempt to append to a source that does not exist, the publishing job fails, and an error is generated in the log. Append operations also fail if you publish to a target with a different schema.
  • Replace contents of existing datasource: The target datasource is dropped. A new datasource is created using the schema of the generated output and filled with the job results.
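The behavior of the three data options above can be sketched as follows. This is a conceptual illustration only: the in-memory `datastore` and the `publish` helper are hypothetical stand-ins, not a Trifacta or Tableau API.

```python
# Conceptual sketch of the three publishing modes. Each datasource is
# modeled as a (schema, rows) pair in a hypothetical in-memory store.
datastore = {}

def publish(name, schema, rows, mode):
    if mode == "create":
        # Create new datasource: fails if the target already exists.
        if name in datastore:
            raise RuntimeError("publish failed: datasource already exists")
        datastore[name] = (schema, list(rows))
    elif mode == "append":
        # Append: fails if the target is missing or the schemas differ.
        if name not in datastore:
            raise RuntimeError("publish failed: datasource does not exist")
        existing_schema, existing_rows = datastore[name]
        if existing_schema != schema:  # schema validation on append
            raise RuntimeError("publish failed: schema mismatch")
        existing_rows.extend(rows)
    elif mode == "replace":
        # Replace: the target is dropped and recreated with the new
        # schema, then filled with the job results.
        datastore[name] = (schema, list(rows))

schema = ("id", "city")
publish("sales", schema, [(1, "Austin")], mode="create")
publish("sales", schema, [(2, "Boston")], mode="append")
publish("sales", schema, [(3, "Chicago")], mode="replace")
print(datastore["sales"][1])  # only the replaced rows remain
```

Note that only the append path validates the schema against the existing target; a replace discards the old schema along with the old contents.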
