Overview
NOTE: If you need to make changes for purposes of a specific job run, you can add overrides to the request for the job. These overrides apply only for the current job. For more information, see API Workflow - Run Job.
Basic Workflow
Here's the basic workflow described in this section.
- Get the internal identifier for the recipe for which you are building outputs.
- Create the outputObject for the recipe.
- Create a writeSettings object and associate it with the outputObject.
- Run a test job, if desired.
- For any publication, get the internal identifier for the connection to use.
- Create a publication object and associate it with the outputObject.
- Run your job.
Variations
If you are generating exclusively file-based or relational outputs, you can vary this workflow in the following ways:
For file-based outputs:
- Get the internal identifier for the recipe for which you are building outputs.
- Create the outputObject for the recipe.
- Create a writeSettings object and associate it with the outputObject.
- Run your job.
For relational outputs:
- Get the internal identifier for the recipe for which you are building outputs.
- Create the outputObject for the recipe.
- For any publication, get the internal identifier for the connection to use.
- Create a publication object and associate it with the outputObject.
- Run your job.
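If you script this workflow, the calls can be chained end to end. The following bash sketch uses the example host, the basic-auth token placeholder, and the identifiers from the steps below; the jq filters and the jobGroups request body are assumptions about the request and response shapes, not confirmed by this document.

```
#!/usr/bin/env bash
# Sketch of the full workflow, chaining the calls from the steps below.
# Assumes list responses wrap results in a "data" array and that POST
# responses return the new object's "id" at the top level.
HOST="http://www.wrangle-dev.example.com:3005"
AUTH="authorization: Basic <auth_token>"
JSON="content-type: application/json"

# 1. Get the internal identifier for the recipe (wrangledDataset).
RECIPE_ID=$(curl -s -H "$AUTH" "$HOST/v4/wrangledDatasets" | jq -r '.data[0].id')

# 2. Create the outputObject (request body as in the step below, stored in a file).
OUTPUT_ID=$(curl -s -X POST -H "$AUTH" -H "$JSON" \
  -d @outputObject.json "$HOST/v4/outputObjects" | jq -r '.id')

# 3. Create an additional writeSettings object for the outputObject.
curl -s -X POST -H "$AUTH" -H "$JSON" -d @writeSettings.json "$HOST/v4/writeSettings"

# 4. Get the internal identifier for the connection to publish through.
CONN_ID=$(curl -s -H "$AUTH" "$HOST/v4/connections" | jq -r '.data[0].id')

# 5. Create the publication for the outputObject.
curl -s -X POST -H "$AUTH" -H "$JSON" -d @publication.json "$HOST/v4/publications"

# 6. Run the job for the recipe.
curl -s -X POST -H "$AUTH" -H "$JSON" \
  -d "{\"wrangledDataset\": {\"id\": $RECIPE_ID}}" "$HOST/v4/jobGroups"

echo "recipe=$RECIPE_ID output=$OUTPUT_ID connection=$CONN_ID"
```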
Step - Get Recipe ID
To begin, you need the internal identifier for the recipe.
NOTE: In the APIs, a recipe is identified by its internal name, a wrangledDataset.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/wrangledDatasets |
---|---|
Authentication | Required |
Method | GET |
Request Body | None. |
Response:
Status Code | 200 - OK |
---|---|
Response Body | |
cURL example:
```
curl -X GET \
  http://www.wrangle-dev.example.com:3005/v4/wrangledDatasets \
  -H 'authorization: Basic <auth_token>' \
  -H 'cache-control: no-cache'
```
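The response lists the available wrangledDataset objects. If jq is available, a quick way to scan for the identifier you need might look like the following sketch; the data wrapper and name field are assumptions about the response shape.

```
# List each wrangledDataset's id and name; "data" and "name" are assumed fields.
curl -s -X GET \
  http://www.wrangle-dev.example.com:3005/v4/wrangledDatasets \
  -H 'authorization: Basic <auth_token>' | jq '.data[] | {id, name}'
```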
Tip: Checkpoint: In the above, let's assume that the recipe identifier of interest is 11.
For more information, see the API reference: operation/getWrangledDataset.
Step - Create outputObject
Create the outputObject and associate it with the recipe identifier. In the following request, the wrangledDataset identifier that you retrieved in the previous call is applied as the flowNodeId value.
The following example includes an embedded writeSettings object, which generates an Avro file output. You can remove this embedded object if desired, but you must create a writeSettings object before you can generate an output.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/outputObjects |
---|---|
Authentication | Required |
Method | POST |
Request Body | (see the cURL example below) |
Response:
Status Code | 201 - Created |
---|---|
Response Body | |
cURL example:
```
curl -X POST \
  http://www.wrangle-dev.example.com/v4/outputObjects \
  -H 'authorization: Basic <auth_token>' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "execution": "photon",
    "profiler": true,
    "isAdhoc": true,
    "writeSettings": {
      "data": [
        {
          "delim": ",",
          "path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_01.avro",
          "action": "create",
          "format": "avro",
          "compression": "none",
          "header": false,
          "asSingleFile": false,
          "prefix": null,
          "suffix": "_increment",
          "hasQuotes": false
        }
      ]
    },
    "flowNode": {
      "id": 11
    }
  }'
```
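If you are scripting, you can capture the identifier of the new outputObject from the response, assuming the id is returned at the top level; outputObject.json here is a hypothetical file holding the request body shown above.

```
# Create the outputObject and capture its internal identifier for later steps.
OUTPUT_ID=$(curl -s -X POST http://www.wrangle-dev.example.com/v4/outputObjects \
  -H 'authorization: Basic <auth_token>' \
  -H 'content-type: application/json' \
  -d @outputObject.json | jq -r '.id')
echo "Created outputObject ${OUTPUT_ID}"
```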
Tip: Checkpoint: You've created an outputObject (id=4) and associated it with the recipe (flowNodeId=11).
For more information, see the API reference: operation/createOutputObject.
Step - Run a Test Job
Now that outputs have been defined for the recipe, you can execute a job for the specified recipe (flowNodeId=11):
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/jobGroups |
---|---|
Authentication | Required |
Method | POST |
Request Body | (see the sketch below) |
Response:
Status Code | 201 - Created |
---|---|
Response Body | |
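The original does not include a cURL example for this step. A minimal sketch, assuming the request body identifies the recipe by its wrangledDataset identifier, might look like this:

```
# Run a job for the recipe (flowNodeId=11). The wrangledDataset wrapper
# in the body is an assumption based on the identifier usage above.
curl -X POST \
  http://www.wrangle-dev.example.com:3005/v4/jobGroups \
  -H 'authorization: Basic <auth_token>' \
  -H 'content-type: application/json' \
  -d '{ "wrangledDataset": { "id": 11 } }'
```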
NOTE: To re-run the job against its currently specified outputs, writeSettings, and publications, you only need the recipe ID. If you need to make changes for purposes of a specific job run, you can add overrides to the request for the job. These overrides apply only for the current job. For more information, see API Workflow - Run Job.
To track the status of the job:
- You can monitor the progress through the application.
- You can monitor progress through the status field by querying the specific job, as shown in the sketch below. For more information, see the API reference: operation/getJobGroup.
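For example, a simple polling loop might look like the following sketch; the jobGroup identifier (961) and the status values are hypothetical.

```
# Poll the jobGroup until it leaves its in-progress states.
# The id (961) and status values here are hypothetical.
JOB_ID=961
while :; do
  STATUS=$(curl -s "http://www.wrangle-dev.example.com:3005/v4/jobGroups/${JOB_ID}" \
    -H 'authorization: Basic <auth_token>' | jq -r '.status')
  echo "jobGroup ${JOB_ID}: ${STATUS}"
  case "$STATUS" in Pending|InProgress) sleep 10 ;; *) break ;; esac
done
```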
Tip: Checkpoint: You've run a job, generating one output in Avro format.
Step - Create writeSettings Object
Suppose you want to create another file-based output for this outputObject. You can create a second writeSettings object, which publishes the results of the job run on the recipe to the specified location.
The following example creates settings for generating a Parquet-based output.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/writeSettings |
---|---|
Authentication | Required |
Method | POST |
Request Body | (see the cURL example below) |
Response:
Status Code | 201 - Created |
---|---|
Response Body | |
cURL example:
```
curl -X POST \
  http://www.wrangle-dev.example.com/v4/writeSettings \
  -H 'authorization: Basic <auth_token>' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "delim": ",",
    "path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS_r03.pqt",
    "action": "create",
    "format": "pqt",
    "compression": "none",
    "header": false,
    "asSingleFile": false,
    "prefix": null,
    "suffix": "_increment",
    "hasQuotes": false,
    "outputObject": {
      "id": 4
    }
  }'
```
Tip: Checkpoint: You've added a new writeSettings object and associated it with your outputObject (id=4).
For more information, see the API reference: operation/createWriteSetting.
Step - Get Connection ID for Publication
To generate a publication, you must identify the connection through which you are publishing the results.
Below, the request returns a single connection to Hive (id=1).
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/connections |
---|---|
Authentication | Required |
Method | GET |
Request Body | None. |
Response:
Status Code | 200 - OK |
---|---|
Response Body | |
cURL example:
```
curl -X GET \
  http://www.wrangle-dev.example.com/v4/connections \
  -H 'authorization: Basic <auth_token>' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json'
```
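If the environment has more than one connection, you can scan the list for the one you need. A jq sketch follows; the data, name, and type fields are assumptions about the response shape, so inspect the raw response to confirm.

```
# Show each connection's id, name, and type to locate the Hive connection.
curl -s -X GET http://www.wrangle-dev.example.com/v4/connections \
  -H 'authorization: Basic <auth_token>' | jq '.data[] | {id, name, type}'
```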
For more information, see the API reference: operation/listConnections.
Step - Create a Publication
You can create publications that publish table-based outputs through specified connections. In the following, a Hive table is written out to the default database through connectionId=1. This publication is associated with the outputObject id=4.
Request:
Endpoint | http://www.wrangle-dev.example.com:3005/v4/publications |
---|---|
Authentication | Required |
Method | POST |
Request Body | (see the cURL example below) |
Response:
Status Code | 201 - Created |
---|---|
Response Body | |
cURL example:
```
curl -X POST \
  http://example.com:3005/v4/publications \
  -H 'authorization: Basic <auth_token>' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "path": [ "default" ],
    "tableName": "myPublishedHiveTable",
    "targetType": "hive",
    "action": "create",
    "outputObject": {
      "id": 4
    },
    "connection": {
      "id": 1
    }
  }'
```
For more information, see the API reference: operation/createPublication.
Tip: Checkpoint: You're done.
You have done the following:
- Created an output object:
  - Embedded a writeSettings object to define an Avro output.
  - Associated the outputObject with a recipe.
- Added another writeSettings object to the outputObject.
- Added a table-based publication object to the outputObject.
You can now generate results for these three different outputs whenever you run a job (create a jobGroup) for the associated recipe.
Step - Apply Overrides
When you are publishing results to a relational source, you can optionally apply overrides to the job to redirect the output or change the action applied to the target table. For more information, see API Workflow - Run Job.
Step - Apply Dataflow Overrides
NOTE: Overrides applied to the output objects are merged with any overrides specified as part of the jobGroup at the time of execution. For more information, see API Workflow - Run Job. If neither object has a specified override for a Dataflow property, the applicable project setting is used.
You can optionally submit override values for a predefined set of Dataflow properties on the output object.
NOTE: If you are using automatic VPC network mode, then do not specify network or subnetwork values.
Tip: You can apply job overrides to the job itself, instead of applying overrides to the outputObject. For more information, see API Workflow - Run Job.
Example - Apply labels to output object
In the following example, an existing outputObject (id=4) is modified to include override values for the labels of the job. Each property and its value is specified as a key-value pair in the request:
Request:
Endpoint | https://www.api.clouddataprep.com/v4/outputObjects/4 |
---|---|
Authentication | Required |
Method | PATCH |
Request Body | (see the cURL example below) |
Response:
Status Code | 200 - OK |
---|---|
Response Body | |
cURL example:
```
curl -X PATCH \
  http://www.wrangle-dev.example.com/v4/outputObjects/4 \
  -H 'authorization: Bearer <auth_token>' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "execution": "dataflow",
    "profiler": true,
    "outputObjectDataflowOptions": {
      "region": "us-central1",
      "zone": "us-central1-a",
      "machineType": "n1-standard-64",
      "network": "my-network-name",
      "subnetwork": "regions/us-central1/subnetworks/my-subnetwork",
      "autoscalingAlgorithm": "THROUGHPUT_BASED",
      "serviceAccount": "my-service-account-name@<project-id>.iam.gserviceaccount.com",
      "numWorkers": "1",
      "maxNumWorkers": "1000",
      "usePublicIps": "true",
      "labels": [
        {
          "key": "my-billing-label-key",
          "value": "my-billing-label-value"
        }
      ]
    }
  }'
```
Notes on properties:
If a network value, subnetwork value, or both is specified, then the VPC mode is custom. This setting is available in the UI for convenience.
You can submit empty or null values for properties in the payload; these values are submitted.
- If you are not using auto-scaling on your job, set "autoscalingAlgorithm": "NONE", and use "numWorkers" to specify the number of compute nodes to use for the job.
- If you are using auto-scaling on your job, set "autoscalingAlgorithm": "THROUGHPUT_BASED", and use "numWorkers" and "maxNumWorkers" to specify the initial and maximum number of compute nodes for the job.
Both variations are sketched below.
Notes on labels:
You can use labels to assign billing information for the job in your project.
Key: This value must be unique among your job labels.
Value: Assign based on the accepted values for the label.
For more information, see https://cloud.google.com/resource-manager/docs/creating-managing-labels.
You can apply up to 64 labels for a job. For more information on the available properties, see Dataflow Execution Settings.
Example - Override VPC settings
In the following example, an existing outputObject (id=4) is modified to override the VPC settings to use a non-local VPC:
Request:
Endpoint | https://www.api.clouddataprep.com/v4/outputObjects/4 |
---|---|
Authentication | Required |
Method | PATCH |
Request Body | (see the cURL example below) |
Response:
Status Code | 200 - OK |
---|---|
Response Body | |
cURL example:
```
curl -X PATCH \
  http://www.wrangle-dev.example.com/v4/outputObjects/4 \
  -H 'authorization: Bearer <auth_token>' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "execution": "dataflow",
    "outputObjectDataflowOptions": {
      "region": "second-region",
      "zone": "us-central1-a",
      "network": "my-other-network",
      "subnetwork": "regions/second-region/subnetworks/my-other-subnetwork"
    }
  }'
```
Notes on properties:
If a network value, subnetwork value, or both is specified, then the VPC mode is custom. This setting is available in the UI for convenience.
Subnetwork values must be specified as a short URL or a full URL.
To specify the VPC associated with a different project to which you have access, use the full URL pattern for the subnetwork value:
```
https://www.googleapis.com/compute/v1/projects/<HOST_PROJECT_ID>/regions/<REGION>/subnetworks/<SUBNETWORK>
```

<HOST_PROJECT_ID> corresponds to the project identifier. This value must be between 6 and 30 characters. The value can contain only lowercase letters, digits, or hyphens. It must start with a letter. Trailing hyphens are prohibited.

To specify a different VPC subnetwork, you can also use a short URL pattern for the subnetwork value:

```
regions/<REGION>/subnetworks/<SUBNETWORK>
```
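For example, reusing the hypothetical subnetwork from the label example above, the same subnetwork could be referenced either way (my-host-project is a made-up project identifier):

```
regions/us-central1/subnetworks/my-subnetwork
https://www.googleapis.com/compute/v1/projects/my-host-project/regions/us-central1/subnetworks/my-subnetwork
```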
For more information on these properties, see Dataflow Execution Settings.