Overview
Through the APIs, you can manage the outputs associated with an individual recipe separately from the recipe itself. This workflow describes how to create output objects associated with your recipe and how to publish those outputs to different datastores in varying formats. You can continue to modify the output objects and their related write settings and publications independently of the wrangling process. Whenever you need new results, you run a job against the wrangled dataset with which your outputs are associated; the job is executed, and the results are published to your targets in the specified formats.
Info |
---|
NOTE: If you need to make changes for purposes of a specific job run, you can add overrides to the request for the job. These overrides apply only for the current job. For more information, see D s api refdoclink |
---|
operation/runJobGroup |
|
Basic Workflow
Here's the basic workflow described in this section.
- Get the internal identifier for the recipe for which you are building outputs.
- Create the outputObject for the recipe.
- Create a writeSettings object and associate it with the outputObject.
- Run a test job, if desired.
- For any publication, get the internal identifier for the connection to use.
- Create a publication object and associate it with the outputObject.
- Run your job.
Variations
If you are generating exclusively file-based or relational outputs, you can vary this workflow in the following ways:
For file-based outputs:
- Get the internal identifier for the recipe for which you are building outputs.
- Create the outputObject for the recipe.
- Create a writeSettings object and associate it with the outputObject.
- Run your job.
For relational outputs:
- Get the internal identifier for the recipe for which you are building outputs.
- Create the outputObject for the recipe.
- For any publication, get the internal identifier for the connection to use.
- Create a publication object and associate it with the outputObject.
- Run your job.
Step - Get Recipe ID
To begin, you need the internal identifier for the recipe.
Info |
---|
NOTE: In the APIs, a recipe is identified through its containing object, the wrangled dataset. |
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/wrangledDatasets |
---|
Authentication | Required |
---|
Method | GET |
---|
Request Body | |
---|
Response:
Status Code | 200 - OK |
---|
Response Body | Code Block |
---|
{
"data": [
{
"id": 11,
"wrangled": true,
"createdAt": "2018-11-12T23:06:36.473Z",
"updatedAt": "2018-11-12T23:06:36.539Z",
"recipe": {
"id": 10
},
"name": "POS-r01",
"description": null,
"referenceInfo": null,
"activeSample": {
"id": 11
},
"creator": {
"id": 1
},
"updater": {
"id": 1
},
"flow": {
"id": 4
}
},
{
"id": 1,
"wrangled": true,
"createdAt": "2018-11-12T23:19:57.650Z",
"updatedAt": "2018-11-12T23:20:47.297Z",
"recipe": {
"id": 19
},
"name": "member_info",
"description": null,
"referenceInfo": null,
"activeSample": {
"id": 20
},
"creator": {
"id": 1
},
"updater": {
"id": 1
},
"flow": {
"id": 6
}
}
]
} |
|
---|
cURL example:
Code Block |
---|
curl -X GET \
http://www.wrangle-dev.example.com:3005/v4/wrangledDatasets \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' |
Tip |
---|
Checkpoint: In the above, let's assume that the recipe identifier of interest is wrangledDataset=11. This means that the flow in which it is hosted is flow.id=4. Retain this information for later. |
For more information, see
D s api refdoclink |
---|
operation/getWrangledDataset |
Step - Create outputObject
Create the outputObject and associate it with the recipe identifier. In the following request, the wrangledDataset identifier that you retrieved in the previous call is applied as the flowNodeId value.
The following example includes an embedded writeSettings object, which generates an Avro file output. You can remove this embedded object if desired, but you must create a writeSettings object before you can generate an output.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/outputObjects |
---|
Authentication | Required |
---|
Method | POST |
---|
Request Body | Code Block |
---|
{
"execution": "photon",
"profiler": true,
"isAdhoc": true,
"writeSettings": {
"data": [
{
"delim": ",",
"path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_01.avro",
"action": "create",
"format": "avro",
"compression": "none",
"header": false,
"asSingleFile": false,
"prefix": null,
"suffix": "_increment",
"includeMismatches": true,
"hasQuotes": false
}
]
},
"flowNode": {
"id": 11
}
} |
|
---|
Response:
Status Code | 201 - Created |
---|
Response Body | Code Block |
---|
{
"id": 4,
"execution": "photon",
"profiler": true,
"isAdhoc": true,
"updatedAt": "2018-11-13T00:20:49.258Z",
"createdAt": "2018-11-13T00:20:49.258Z",
"creator": {
"id": 1
},
"updater": {
"id": 1
},
"flowNode": {
"id": 11
}
} |
|
---|
cURL example:
Code Block |
---|
curl -X POST \
http://www.wrangle-dev.example.com/v4/outputObjects \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-d '{
"execution": "photon",
"profiler": true,
"isAdhoc": true,
"writeSettings": {
"data": [
{
"delim": ",",
"path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_01.avro",
"action": "create",
"format": "avro",
"compression": "none",
"header": false,
"asSingleFile": false,
"prefix": null,
"suffix": "_increment",
"includeMismatches": true,
"hasQuotes": false
}
]
},
"flowNode": {
"id": 11
}
}' |
Tip |
---|
Checkpoint: You've created an outputObject (id=4) and an embedded writeSettings object and have associated them with the appropriate recipe flowNodeId=11. You can now run a job for this recipe, generating the specified output. |
For more information, see
D s api refdoclink |
---|
operation/createOutputObject |
Step - Run a Test Job
Now that outputs have been defined for the recipe, you can execute a job on the specified recipe flowNodeId=11:
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/jobGroups |
---|
Authentication | Required |
---|
Method | POST |
---|
Request Body | Code Block |
---|
{
"wrangledDataset": {
"id": 11
}
} |
|
---|
Response:
Status Code | 201 - Created |
---|
Response Body | Code Block |
---|
{
"sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
"reason": "JobStarted",
"jobGraph": {
"vertices": [
21,
22
],
"edges": [
{
"source": 21,
"target": 22
}
]
},
"id": 2,
"jobs": {
"data": [
{
"id": 21
},
{
"id": 22
}
]
}
} |
|
---|
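cURL example (a sketch of the same request, reusing the <auth_token> placeholder from the earlier examples):
Code Block |
---|
curl -X POST \
http://www.wrangle-dev.example.com:3005/v4/jobGroups \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-d '{
"wrangledDataset": {
"id": 11
}
}' |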
Info |
---|
NOTE: To re-run the job against its currently specified outputs, writeSettings, and publications, you only need the recipe ID. If you need to make changes for purposes of a specific job run, you can add overrides to the request for the job. These overrides apply only for the current job. For more information, see D s api refdoclink |
---|
operation/runJobGroup |
|
To track the status of the job:
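For example, you can retrieve the jobGroup by its identifier and inspect its status. The following is a sketch, assuming the jobGroup id=2 returned in the response above:
Code Block |
---|
curl -X GET \
http://www.wrangle-dev.example.com:3005/v4/jobGroups/2 \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' |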
Tip |
---|
Checkpoint: You've run a job, generating one output in Avro format. |
Step - Create writeSettings Object
Suppose you want to create another file-based output for this outputObject. You can create a second writeSettings object, which publishes the results of the job run on the recipe to the specified location.
The following example creates settings for generating a Parquet-based output.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/writeSettings/ |
---|
Authentication | Required |
---|
Method | POST |
---|
Request Body | Code Block |
---|
{
"delim": ",",
"path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_r03.pqt",
"action": "create",
"format": "pqt",
"compression": "none",
"header": false,
"asSingleFile": false,
"prefix": null,
"suffix": "_increment",
"hasQuotes": false,
"outputObjectId": 4
} |
|
---|
Response:
Status Code | 201 - Created |
---|
Response Body | Code Block |
---|
{
"delim": ",",
"id": 2,
"path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_r03.pqt",
"action": "create",
"format": "pqt",
"compression": "none",
"header": false,
"asSingleFile": false,
"prefix": null,
"suffix": "_increment",
"hasQuotes": false,
"updatedAt": "2018-11-13T01:07:52.386Z",
"createdAt": "2018-11-13T01:07:52.386Z",
"creator": {
"id": 1
},
"updater": {
"id": 1
},
"outputObject": {
"id": 4
}
} |
|
---|
cURL example:
Code Block |
---|
curl -X POST \
http://www.wrangle-dev.example.com/v4/writeSettings \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-d '{ "delim": ",",
"path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_r03.pqt",
"action": "create",
"format": "pqt",
"compression": "none",
"header": false,
"asSingleFile": false,
"prefix": null,
"suffix": "_increment",
"hasQuotes": false,
"outputObject": {
"id": 4
}
}' |
Tip |
---|
Checkpoint: You've added a new writeSettings object and associated it with your outputObject (id=4). When you run the job again, the Parquet output is also generated. |
For more information, see
D s api refdoclink |
---|
operation/createWriteSetting |
Step - Get Connection ID for Publication
To generate a publication, you must identify the connection through which you are publishing the results.
Below, the request returns a single connection to Hive (id=1).
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/connections |
---|
Authentication | Required |
---|
Method | GET |
---|
Request Body | |
---|
Response:
Status Code | 200 - OK |
---|
Response Body | Code Block |
---|
{
"data": [
{
"id": 1,
"host": "hadoop",
"port": 10000,
"vendor": "hive",
"params": {
"jdbc": "hive2",
"connectStringOptions": "",
"defaultDatabase": "default"
},
"ssl": false,
"vendorName": "hive",
"name": "hive",
"description": null,
"type": "jdbc",
"isGlobal": true,
"credentialType": "conf",
"credentialsShared": true,
"uuid": "28415970-e6c4-11e8-82be-9947a31ecdd5",
"disableTypeInference": false,
"createdAt": "2018-11-12T21:44:39.816Z",
"updatedAt": "2018-11-12T21:44:39.842Z",
"credentials": [],
"creator": {
"id": 1
},
"updater": {
"id": 1
},
"workspace": {
"id": 1
}
}
],
"count": 1
} |
|
---|
cURL example:
Code Block |
---|
curl -X GET \
http://www.wrangle-dev.example.com/v4/connections \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' |
For more information, see
D s api refdoclink |
---|
operation/listConnections |
Step - Create a Publication
Example - Hive:
You can create publications that publish table-based outputs through specified connections. In the following, a Hive table is written out to the default database through connectionId=1. This publication is associated with outputObject id=4.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/publications |
---|
Authentication | Required |
---|
Method | POST |
---|
Request Body | Code Block |
---|
{
"path": [
"default"
],
"tableName": "myPublishedHiveTable",
"targetType": "hive",
"action": "create",
"outputObject": {
"id": 4
},
"connection": {
"id": 1
}
} |
|
---|
Response:
Status Code | 201 - Created |
---|
Response Body | Code Block |
---|
{
"path": [
"default"
],
"id": 3,
"tableName": "myPublishedHiveTable",
"targetType": "hive",
"action": "create",
"updatedAt": "2018-11-13T01:25:39.698Z",
"createdAt": "2018-11-13T01:25:39.698Z",
"creator": {
"id": 1
},
"updater": {
"id": 1
},
"outputObject": {
"id": 4
},
"connection": {
"id": 1
}
} |
|
---|
cURL example:
Code Block |
---|
curl -X POST \
http://www.wrangle-dev.example.com:3005/v4/publications \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-d '{
"path": [
"default"
],
"tableName": "myPublishedHiveTable",
"targetType": "hive",
"action": "create",
"outputObject": {
"id": 4
},
"connection": {
"id": 1
}
}' |
For more information, see
D s api refdoclink |
---|
operation/createPublication |
Tip |
---|
Checkpoint: You're done. |
You have done the following:
- Created an output object:
- Embedded a writeSettings object to define an Avro output.
- Associated the outputObject with a recipe.
- Added another writeSettings object to the outputObject.
- Added a table-based publication object to the outputObject.
You can now generate results for these three different outputs whenever you run a job (create a jobGroup) for the associated recipe.
Step - Apply Overrides
When you are publishing results to a relational target, you can optionally apply overrides to the job to redirect the output or change the action applied to the target table. For more information, see API Workflow - Run Job.
Step - Apply Spark Job Overrides
You can optionally submit override values for a predefined set of Spark properties on the output object. These overrides are applied each time the outputObject is used to generate a set of results.
Info |
---|
NOTE: This feature and the Spark properties available for override must be configured by a workspace administrator. For more information, see Enable Spark Job Overrides. |
Tip |
---|
Tip: You can apply Spark job overrides to the job itself, instead of applying overrides to the outputObject. For more information, see API Workflow - Run Job. |
In the following example, an existing outputObject (id=4) is modified to include override values for the default set of Spark overrides. Each Spark property and its value is specified as a key-value pair in the request:
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/outputObjects/4 |
---|
Authentication | Required |
---|
Method | PUT |
---|
Request Body | Code Block |
---|
{
"execution": "spark",
"outputObjectSparkOptions": [
{
"key": "spark.driver.memory",
"value": "10G"
},
{
"key": "spark.executor.memory",
"value": "10G"
},
{
"key": "spark.executor.cores",
"value": "5"
},
{
"key": "transformer.dataframe.checkpoint.threshold",
"value": "450"
}
]
} |
|
---|
Response:
Status Code | 200 - OK |
---|
Response Body | Code Block |
---|
{
"id": 4,
"updater": {
"id": 1
},
"updatedAt": "2020-03-21T00:27:00.937Z",
"createdAt": "2020-03-20T23:30:42.991Z"
} |
|
---|
cURL example:
Code Block |
---|
curl -X PUT \
http://www.wrangle-dev.example.com/v4/outputObjects/4 \
-H 'authorization: Basic <auth_token>' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-d '{
"execution": "spark",
"outputObjectSparkOptions": [
{
"key": "spark.driver.memory",
"value": "10G"
},
{
"key": "spark.executor.memory",
"value": "10G"
},
{
"key": "spark.executor.cores",
"value": "5"
},
{
"key": "transformer.dataframe.checkpoint.threshold",
"value": "450"
}
]
}' |