...
- Acquire the internal identifier for the recipe for which you wish to execute a job. In the previous example, this identifier was 23.
- Construct a request using the following:

  Endpoint: http://www.example.com:3005/v4/jobGroups
  Authentication: Required
  Method: POST
  Request Body:
```
{
  "wrangledDataset": {
    "id": 23
  },
  "overrides": {
    "execution": "photon",
    "profiler": true,
    "writesettings": [
      {
        "path": "hdfs://hadoop:50070/trifacta/queryResults/admin@example.com/POS-r01.csv",
        "action": "create",
        "format": "csv",
        "compression": "none",
        "header": false,
        "asSingleFile": false
      }
    ]
  },
  "ranfrom": null
}
```
- In the above example, the specified job has been launched for recipe 23 to execute on the Photon running environment with profiling enabled.
- Output format is CSV, written to the designated path. For more information on these properties, see API JobGroups Create v4.
- Output is written as a new file with no overwriting of previous files.
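As a sketch, the request above can be assembled with Python's standard library. The helper name and the bearer-token header are assumptions, not part of the documented API; substitute whatever authentication scheme your deployment requires.

```python
import json
import urllib.request

def build_jobgroup_request(base_url, recipe_id, output_path, token):
    """Build the POST /v4/jobGroups request shown above (hypothetical helper)."""
    body = {
        "wrangledDataset": {"id": recipe_id},
        "overrides": {
            "execution": "photon",
            "profiler": True,
            "writesettings": [
                {
                    "path": output_path,
                    "action": "create",
                    "format": "csv",
                    "compression": "none",
                    "header": False,
                    "asSingleFile": False,
                }
            ],
        },
        "ranfrom": None,
    }
    return urllib.request.Request(
        url=f"{base_url}/v4/jobGroups",
        data=json.dumps(body).encode("utf-8"),
        method="POST",
        headers={
            "Content-Type": "application/json",
            # Assumption: bearer-token auth; adjust to your deployment.
            "Authorization": f"Bearer {token}",
        },
    )

# To actually launch the job (requires a reachable server):
# with urllib.request.urlopen(build_jobgroup_request(...)) as resp:
#     print(resp.status)  # expect 201
```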
A response code of 201 - Created is returned. The response body should look like the following:

```
{
  "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
  "reason": "JobStarted",
  "jobGraph": {
    "vertices": [
      21,
      22
    ],
    "edges": [
      {
        "source": 21,
        "target": 22
      }
    ]
  },
  "id": 3,
  "jobs": {
    "data": [
      {
        "id": 21
      },
      {
        "id": 22
      }
    ]
  }
}
```
Retain the id value, which is the job identifier, for monitoring.
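The identifiers to retain can be pulled from the response body with ordinary JSON parsing, as in this minimal sketch; the status-polling endpoint shown in the trailing comment is an assumption, so check it against your deployment's API reference.

```python
import json

# A 201 response body of the shape shown above.
response_body = """
{
  "sessionId": "79276c31-c58c-4e79-ae5e-fed1a25ebca1",
  "reason": "JobStarted",
  "jobGraph": {
    "vertices": [21, 22],
    "edges": [{"source": 21, "target": 22}]
  },
  "id": 3,
  "jobs": {"data": [{"id": 21}, {"id": 22}]}
}
"""

parsed = json.loads(response_body)
job_group_id = parsed["id"]                              # jobGroup identifier
job_ids = [job["id"] for job in parsed["jobs"]["data"]]  # individual jobs

# Retain job_group_id to poll for completion, e.g. (assumed endpoint form):
# GET http://www.example.com:3005/v4/jobGroups/<job_group_id>/status
```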
...