

For a specified jobGroup, this endpoint performs an ad-hoc publish of the results to the designated target.

  • Target information is based on the specified connection.
  • Job results to be published are based on the specified jobGroup.

You can specify:

  • Database and table to which to publish
  • Type of action to be applied to the target table. Details are below.

Supported targets:

  • Hive

  • Redshift
For more information on jobGroups, see API JobGroups Get v4.

For additional examples, see API Workflow - Publish Results.

Version:  v4

Required Permissions



Request Type: PUT


Code Block
/v4/jobGroups/<id>/publish

  • <id> - Internal identifier for the jobGroup.

Request URI - Example:

Code Block
https://www.example.com/v4/jobGroups/31/publish

Request Body - Hive:

Code Block
{
  "connection": {
    "id": 1
  },
  "path": ["default"],
  "table": "test_table",
  "action": "create",
  "inputFormat": "pqt",
  "flowNodeId": 27
}
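The request above can be assembled and sent with a short script. This is a minimal sketch, not part of the product documentation: the host and access token are placeholders, and the jobGroup id (31) is taken from the response example on this page.

```python
import json
import urllib.request

# Placeholder deployment details -- substitute your own host and API token.
HOST = "https://www.example.com"
TOKEN = "my-access-token"  # hypothetical token
JOB_GROUP_ID = 31

# Request body for a Hive publish, matching the example above.
body = {
    "connection": {"id": 1},
    "path": ["default"],      # database name, enclosed in square brackets
    "table": "test_table",
    "action": "create",
    "inputFormat": "pqt",
    "flowNodeId": 27,
}

req = urllib.request.Request(
    url=f"{HOST}/v4/jobGroups/{JOB_GROUP_ID}/publish",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="PUT",
)
# urllib.request.urlopen(req) would send the request; it is left out here
# so the sketch runs without a live server.
```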


Response Status Code - Success:  200 - OK

Response Body Example:

Code Block
{
  "jobgroupId": 31,
  "reason": "JobStarted",
  "sessionId": "f6c5f350-2102-11e9-bb80-9faf7b15f235"
}
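A sketch of handling the success response shown above. The field names come from the example; the inline JSON string stands in for an actual HTTP response body.

```python
import json

# Example success payload, as returned by the publish endpoint.
raw = """
{
    "jobgroupId": 31,
    "reason": "JobStarted",
    "sessionId": "f6c5f350-2102-11e9-bb80-9faf7b15f235"
}
"""

resp = json.loads(raw)
if resp["reason"] == "JobStarted":
    # The publish job was accepted; keep the id for later status checks.
    job_group_id = resp["jobgroupId"]
    print(f"Publish started for jobGroup {job_group_id}")
# prints: Publish started for jobGroup 31
```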


Request Reference:

connection - Internal identifier of the connection to use to write the results.
path - Name of the database to which to write the results. This value must be enclosed in square brackets.
table - Name of the table in the database to which to write the results.

action - Type of writing action to perform with the results. Supported actions:

  • create - Create a new table with each publication. This table is empty except for the schema, which is taken from the results. A new table receives a timestamp extension to its name.
  • load - Append a pre-existing table with the results of the data. The schema of the results and the table must match.
  • createAndLoad - Create a new table with each publication and load it with the results data. A new table receives a timestamp extension to its name.
  • truncateAndLoad - Truncate a pre-existing table and load it with fresh data from the results.
  • dropAndLoad - Drop the target table and load a new table with the schema and data from the results.
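The five actions differ in whether they create, append to, or replace the target table. A small client-side guard can reject unsupported values before the request is sent; this helper is illustrative only and is not part of the API (the server also validates the field).

```python
# Supported values for the "action" field, per the list above.
SUPPORTED_ACTIONS = {
    "create",           # new (empty) table per publication, timestamped name
    "load",             # append to an existing table; schemas must match
    "createAndLoad",    # new timestamped table, loaded with the results
    "truncateAndLoad",  # empty an existing table, then load fresh data
    "dropAndLoad",      # drop the target table, recreate it from the results
}


def check_action(action: str) -> str:
    """Illustrative client-side guard for the "action" field."""
    if action not in SUPPORTED_ACTIONS:
        raise ValueError(f"unsupported action: {action!r}")
    return action
```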

inputFormat - Source format of the results. Supported values:


  • avro
  • pqt



NOTE: For results to be written to Redshift, the source must be stored in S3 and accessed through an S3 connection.


NOTE: By default, data is published to Redshift using the public schema. To publish using a different schema, preface the table value with the name of the schema to use: MySchema.MyTable.
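For example, targeting a Redshift schema other than public only changes the table field of the request body. The connection id and other values in this sketch are illustrative, not taken from a real deployment.

```python
# Hypothetical request body targeting a non-default Redshift schema.
body = {
    "connection": {"id": 2},      # a Redshift connection (illustrative id)
    "path": ["dev"],
    "table": "MySchema.MyTable",  # schema name prefixed to the table name
    "action": "createAndLoad",
    "inputFormat": "avro",
    "flowNodeId": 27,
}

# The schema prefix is separated from the table name by a dot.
schema, table = body["table"].split(".", 1)
```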



flowNodeId - The internal identifier for the recipe (wrangledDataset) from which the job was executed.


For more information on the available status messages, see API JobGroups Put Publish v4.