This section describes how to run a job using the APIs available in Designer Cloud Enterprise Edition.
A note about API URLs: In the listed examples, URLs are referenced in the following manner:

<protocol>://<platform_base_url>/

In your product, these references map to the following:

<http or https>://<hostname>:<port_number>/

For more information, see API Reference.
Prerequisites
Before you begin, you should verify the following:
- Get authentication credentials. As part of each request, you must pass in authentication credentials to the platform. A sketch of how to pass these credentials from Python appears at the end of this section.
Tip: The recommended method is to use an access token, which can be generated from the Designer Cloud application. For more information, see Access Tokens Page.
For more information, see API Authentication.
- Verify job execution. Run the desired job through the Designer Cloud application and verify that the output objects are properly generated.
- Acquire recipe (wrangled dataset) identifier. In Flow View, click the icon for the recipe whose outputs you wish to generate. Acquire the numeric value for the recipe from the URL. In the following, the recipe Id is 28629:
http://<platform_base_url>/flows/5479?recipe=28629&tab=recipe
- Create output object. A recipe must have at least one output object created for it before you can run a job via APIs. For more information, see Flow View Page.
If you wish to apply overrides to the inputs or outputs of the recipe, you should acquire those identifiers or paths now. For more information, see the "Run Job with Overrides" and "Run Job with Parameter Values" steps below.
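The requests on this page can be issued from any HTTP client. Below is a minimal sketch in Python using the requests library, assuming token-based authentication via an Authorization: Bearer header as described in API Authentication. The base URL and the TRIFACTA_TOKEN environment variable are placeholders for your own deployment, not values from this document. Later examples on this page reuse BASE_URL and HEADERS from this sketch.

```python
import os

# Placeholder: substitute your deployment's protocol, hostname, and port.
BASE_URL = "https://yourworkspace.example.com"

# Access token generated from the Designer Cloud application
# (see Access Tokens Page), read from an environment variable here.
TOKEN = os.environ["TRIFACTA_TOKEN"]

# Every request in this workflow carries the same authentication header.
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}
```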
Step - Run Job
Through the APIs, you can specify and run a job. To run a job with all default settings, construct a request like the following:
NOTE: A wrangledDataset is the internal object name for the recipe that you wish to run. See the previous section for how to acquire this value.
| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups |
|---|---|
| Authentication | Required |
| Method | POST |
| Request Body | { "wrangledDataset": { "id": 28629 } } |
| Response Code | 201 - Created |
| Response Body | { "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1", "reason": "JobStarted", "jobGraph": { "vertices": [ 21, 22 ], "edges": [ { "source": 21, "target": 22 } ] }, "id": 961247, "jobs": { "data": [ { "id": 21 }, { "id": 22 } ] } } |
If the 201 response code is returned, then the job has been queued for execution.

Tip: Retain the id value in the response. In the above, 961247 is the internal identifier for the job group for the job. You will need this value to check on your job status.
For more information, see API JobGroups Create v4.
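As a worked example, the request above could be issued as follows. This is a sketch reusing the hypothetical BASE_URL and HEADERS defined in the prerequisites:

```python
import requests

# 28629 is the recipe (wrangledDataset) id acquired from Flow View.
resp = requests.post(
    f"{BASE_URL}/v4/jobGroups",
    headers=HEADERS,
    json={"wrangledDataset": {"id": 28629}},
)
resp.raise_for_status()  # expect 201 - Created

# Retain the job group id; it is needed to monitor the job.
job_group_id = resp.json()["id"]
print(f"Queued job group {job_group_id}")
```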
Checkpoint: You have queued your job for execution.
Step - Monitoring Your Job
You can monitor the status of your job through the following endpoint:
| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups/<id> |
|---|---|
| Authentication | Required |
| Method | GET |
| Request Body | None. |
| Response Code | 200 - Ok |
| Response Body | { "id": 961247, "name": null, "description": null, "ranfrom": "ui", "ranfor": "recipe", "status": "Complete", "profilingEnabled": true, "runParameterReferenceDate": "2019-08-20T17:46:27.000Z", "createdAt": "2019-08-20T17:46:28.000Z", "updatedAt": "2019-08-20T17:53:17.000Z", "workspace": { "id": 22 }, "creator": { "id": 38 }, "updater": { "id": 38 }, "snapshot": { "id": 774476 }, "wrangledDataset": { "id": 28629 }, "flowRun": null } |
When the job has successfully completed, the returned status message includes the following:
"status": "Complete",
For more information, see API JobGroups Get v4.
Tip: You have executed the job. Results have been delivered to the designated output locations.
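In practice, you would poll this endpoint until a terminal status appears. A sketch, continuing the Python example above. Note that the "Failed" status value is an assumption, since this page only shows "Complete"; adjust to the status values used in your deployment:

```python
import time

def wait_for_job_group(job_group_id, poll_seconds=10, timeout_seconds=3600):
    """Poll /v4/jobGroups/<id> until the job group reaches a final status."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        resp = requests.get(
            f"{BASE_URL}/v4/jobGroups/{job_group_id}", headers=HEADERS
        )
        resp.raise_for_status()  # expect 200 - Ok
        status = resp.json()["status"]
        if status == "Complete":
            return status
        if status == "Failed":  # assumed failure status; see note above
            raise RuntimeError(f"Job group {job_group_id} failed")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job group {job_group_id} did not finish in time")
```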
Step - Re-run Job
In the future, you can re-run the job using the same simple request:
| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups |
|---|---|
| Authentication | Required |
| Method | POST |
| Request Body | { "wrangledDataset": { "id": 28629 } } |
The job is re-run as it was previously specified.
For more information, see API JobGroups Create v4.
Step - Run Job with Overrides - Files
As needed, you can specify runtime overrides for any of the settings related to the job definition or its outputs. For file-based jobs, these overrides include:
- Execution environment
- Profiling
- Output file, format, and other settings
NOTE: Override values applied to a job are not validated. Invalid overrides may cause your job to fail.
- Acquire the internal identifier for the recipe for which you wish to execute a job. In the previous example, this identifier was 28629.
- Construct a request using the following:

| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups |
|---|---|
| Authentication | Required |
| Method | POST |

Request Body:

{
  "wrangledDataset": { "id": 28629 },
  "overrides": {
    "profiler": true,
    "execution": "spark",
    "writesettings": [
      {
        "path": "<new_path_to_output>",
        "format": "csv",
        "header": true,
        "asSingleFile": true
      }
    ]
  },
  "ranfrom": null
}
- In the above example, the job has been launched with the following overrides:
  - Job will be executed on the Spark cluster. Other supported values depend on your deployment:

    | Value for overrides.execution | Description |
    |---|---|
    | photon | Running environment on the Alteryx node |
    | spark | Spark on integrated cluster, with the following exceptions: |
    | databricksSpark | Spark on Azure Databricks |
    | emrSpark | Spark on AWS EMR |

  - Job will be executed with profiling enabled.
  - Output is written to a new file path.
  - Output format is CSV to the designated path.
  - Output has a header and is generated as a single file.
A response code of 201 - Created is returned. The response body should look like the following:

{ "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1", "reason": "JobStarted", "jobGraph": { "vertices": [ 21, 22 ], "edges": [ { "source": 21, "target": 22 } ] }, "id": 962221, "jobs": { "data": [ { "id": 21 }, { "id": 22 } ] } }

Retain the id value, which is the job group identifier, for monitoring.
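The same request, sketched in Python and continuing the earlier examples. The output path remains a placeholder for your own location:

```python
payload = {
    "wrangledDataset": {"id": 28629},
    "overrides": {
        "profiler": True,      # run profiling for this job
        "execution": "spark",  # see the execution values table above
        "writesettings": [
            {
                "path": "<new_path_to_output>",  # placeholder output path
                "format": "csv",
                "header": True,
                "asSingleFile": True,
            }
        ],
    },
    "ranfrom": None,
}
resp = requests.post(f"{BASE_URL}/v4/jobGroups", headers=HEADERS, json=payload)
resp.raise_for_status()  # expect 201 - Created
job_group_id = resp.json()["id"]
```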
Step - Run Job with Overrides - Tables
You can also pass job definition overrides for table-based outputs. For table outputs, overrides include:

- Path to database to which to write (must have write access)
- Name of output table
- Target table type
Tip: You can acquire the target type from the vendor value in the connection response. For more information, see API Connections Get List v4.
- action:

| Key value | Description |
|---|---|
| create | Create a new table with each publication. |
| createAndLoad | Append your data to the table. |
| truncateAndLoad | Truncate the table and load it with your data. |
| dropAndLoad | Drop the table and write the new table in its place. |

- Identifier of connection to use to write data to the target.
Tip: This identifier is for the connection used to write to the target system. This connection must already exist. For more information on how to retrieve the identifier for a connection, see API Connections Get List v4.
- Acquire the internal identifier for the recipe for which you wish to execute a job. In the previous example, this identifier was 28629.
- Construct a request using the following:

| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups |
|---|---|
| Authentication | Required |
| Method | POST |

Request Body:

{
  "wrangledDataset": { "id": 28629 },
  "overrides": {
    "publications": [
      {
        "path": ["prod_db"],
        "tableName": "Table_CaseFctn2",
        "action": "createAndLoad",
        "targetType": "postgres",
        "connectionId": 3
      }
    ]
  },
  "ranfrom": null
}
In the above example, the job has been launched with the following overrides:

NOTE: When overrides are applied to publishing, any publications that are already attached to the recipe are ignored.

- Output path is the prod_db database, using the table name Table_CaseFctn2.
- Output action is "create and load." See above for definitions.
- Target table type is a PostgreSQL table.
A response code of 201 - Created is returned. The response body should look like the following:

{ "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1", "reason": "JobStarted", "jobGraph": { "vertices": [ 21, 22 ], "edges": [ { "source": 21, "target": 22 } ] }, "id": 962222, "jobs": { "data": [ { "id": 21 }, { "id": 22 } ] } }

Retain the id value, which is the job group identifier, for monitoring.
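Sketched in Python, continuing the earlier examples. The database path, table name, and connectionId below are the example values from the request above and must correspond to objects that exist in your own deployment:

```python
payload = {
    "wrangledDataset": {"id": 28629},
    "overrides": {
        "publications": [
            {
                "path": ["prod_db"],             # target database
                "tableName": "Table_CaseFctn2",  # output table name
                "action": "createAndLoad",       # see the action table above
                "targetType": "postgres",        # vendor value of the connection
                "connectionId": 3,               # an existing connection id
            }
        ]
    },
    "ranfrom": None,
}
resp = requests.post(f"{BASE_URL}/v4/jobGroups", headers=HEADERS, json=payload)
resp.raise_for_status()  # expect 201 - Created
```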
Step - Run Job with Overrides - Webhooks
When you execute a job, you can pass in a set of parameters as overrides to generate a webhook message to a third-party application, based on the success or failure of the job.
For more information on webhooks, see Create Flow Webhook Task.
- Acquire the internal identifier for the recipe for which you wish to execute a job. In the previous example, this identifier was 28629.
- Construct a request using the following:

| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups |
|---|---|
| Authentication | Required |
| Method | POST |

Request Body:

{
  "wrangledDataset": { "id": 28629 },
  "overrides": {
    "webhooks": [
      {
        "name": "webhook override",
        "url": "http://example.com",
        "method": "post",
        "triggerEvent": "onJobFailure",
        "body": { "text": "override" },
        "headers": { "testHeader": "val1" },
        "sslVerification": true,
        "secretKey": "123"
      }
    ]
  }
}
In the above example, the job has been launched with the following overrides:

| Override setting | Description |
|---|---|
| name | Name of the webhook. |
| url | URL to which to send the webhook message. |
| method | The HTTP method to use. Supported values: POST, PUT, PATCH, GET, or DELETE. Body is ignored for GET and DELETE methods. |
| triggerEvent | Supported values: onJobFailure - send webhook message if job fails; onJobSuccess - send webhook message if job completes successfully; onJobDone - send webhook message when job fails or finishes successfully. |
| body | (optional) The value of the text field is the message that is sent. NOTE: Some special token values are supported. See Create Flow Webhook Task. |
| headers | (optional) Key-value pairs of headers to include in the HTTP request. |
| sslVerification | (optional) Set to true if SSL verification should be completed. If not specified, the value is true. |
| secretKey | (optional) If enabled, this value should be set to the secret key to use. |

A response code of 201 - Created is returned. The response body should look like the following:

{ "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1", "reason": "JobStarted", "jobGraph": { "vertices": [ 21, 22 ], "edges": [ { "source": 21, "target": 22 } ] }, "id": 962222, "jobs": { "data": [ { "id": 21 }, { "id": 22 } ] } }

Retain the id value, which is the job group identifier, for monitoring.
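Sketched in Python, continuing the earlier examples. The webhook URL and secret key are the placeholder values from the request above:

```python
payload = {
    "wrangledDataset": {"id": 28629},
    "overrides": {
        "webhooks": [
            {
                "name": "webhook override",
                "url": "http://example.com",     # receiver of the message
                "method": "post",
                "triggerEvent": "onJobFailure",  # or onJobSuccess / onJobDone
                "body": {"text": "override"},    # message text to send
                "headers": {"testHeader": "val1"},
                "sslVerification": True,
                "secretKey": "123",              # placeholder secret
            }
        ]
    },
}
resp = requests.post(f"{BASE_URL}/v4/jobGroups", headers=HEADERS, json=payload)
resp.raise_for_status()  # expect 201 - Created
```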
Step - Run Job with Parameter Values
If the imported dataset or outputs have parameters defined for them, you can pass overrides of the default parameter values as part of the job definition.
- Acquire the internal identifier for the recipe for which you wish to execute a job. In the previous example, this identifier was 28629.
- Construct a request using the following:

| Endpoint | <protocol>://<platform_base_url>/v4/jobGroups |
|---|---|
| Authentication | Required |
| Method | POST |

Request Body:

{
  "wrangledDataset": { "id": 28629 },
  "overrides": {
    "runParameters": {
      "overrides": {
        "data": [
          { "key": "varRegion", "value": "02" }
        ]
      }
    }
  },
  "ranfrom": null
}
In the above example, the specified job has been launched for recipe 28629. The run parameter varRegion has been set to 02 for this specific job. Depending on how it is defined in the flow, this parameter could change either of the following:

- The source for the imported dataset.
- The path for the generated output.

For more information, see Overview of Parameterization.
A response code of 201 - Created is returned. The response body should look like the following:

{ "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1", "reason": "JobStarted", "jobGraph": { "vertices": [ 21, 22 ], "edges": [ { "source": 21, "target": 22 } ] }, "id": 962223, "jobs": { "data": [ { "id": 21 }, { "id": 22 } ] } }

Retain the id value, which is the job group identifier, for monitoring.
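Sketched in Python, continuing the earlier examples. varRegion is the run parameter from the request above and must already be defined in your flow:

```python
payload = {
    "wrangledDataset": {"id": 28629},
    "overrides": {
        "runParameters": {
            "overrides": {
                # Override the default value of the varRegion parameter
                # for this job run only.
                "data": [{"key": "varRegion", "value": "02"}]
            }
        }
    },
    "ranfrom": None,
}
resp = requests.post(f"{BASE_URL}/v4/jobGroups", headers=HEADERS, json=payload)
resp.raise_for_status()  # expect 201 - Created
```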