  1. Acquire the internal identifier for the recipe for which you wish to execute a job. In the previous example, this identifier was 23.
  2. Construct a request using the following:



    Request Body:

    Code Block
      {
        "wrangledDataset": {
          "id": 23
        },
        "overrides": {
          "execution": "photon",
          "profiler": true,
          "writesettings": [
            {
              "path": "hdfs://hadoop:50070/trifacta/queryResults/admin@trifacta.local/cdr_txt.csv",
              "action": "create",
              "format": "csv",
              "compression": "none",
              "header": false,
              "asSingleFile": false
            }
          ]
        },
        "ranfrom": null
      }
  3. In the above example, a job is launched for recipe 23 on the Photon running environment with profiling enabled.
    1. Output format is CSV, written to the designated path. For more information on these properties, see API JobGroups Create v3.
    2. Output is written as a new file; previous files are not overwritten.
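
    The request body above can also be built programmatically. The following Python sketch constructs the same payload from the recipe identifier and output path; the endpoint URL and credentials in the comment are placeholders for illustration, not values defined in this document:

    ```python
    import json

    def build_job_request(recipe_id, output_path):
        """Build the jobGroups request body shown above for the given recipe."""
        return {
            "wrangledDataset": {"id": recipe_id},
            "overrides": {
                "execution": "photon",      # run on the Photon running environment
                "profiler": True,           # enable profiling
                "writesettings": [
                    {
                        "path": output_path,
                        "action": "create", # write a new file; do not overwrite
                        "format": "csv",
                        "compression": "none",
                        "header": False,
                        "asSingleFile": False,
                    }
                ],
            },
            "ranfrom": None,
        }

    body = build_job_request(
        23,
        "hdfs://hadoop:50070/trifacta/queryResults/admin@trifacta.local/cdr_txt.csv",
    )
    print(json.dumps(body, indent=2))

    # To submit, POST this body to the jobGroups endpoint of your instance,
    # e.g. with the requests library (hostname and credentials are placeholders):
    #   requests.post("http://example.com:3005/v3/jobGroups",
    #                 json=body, auth=("admin@trifacta.local", "password"))
    ```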
  4. A response code of 201 - Created is returned. The response body should look like the following:

    Code Block
      {
        "jobgroupId": 3,
        "jobIds": [
          3
        ],
        "reason": "JobStarted",
        "sessionId": "9c2c6220-ef2d-11e6-b644-6dbff703bdfc"
      }
  5. Retain the jobgroupId value for monitoring.
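
The retained jobgroupId can be used to poll the job until it finishes. The sketch below is a minimal polling loop; the status endpoint path in the comment and the terminal state names are assumptions — consult the API JobGroups Get Status documentation for your release:

```python
import time

def is_terminal(status):
    # Assumed terminal states for a job group (verify against your API docs).
    return status in ("Complete", "Failed", "Canceled")

def poll_until_done(get_status, jobgroup_id, interval=5, max_tries=60):
    """Poll get_status(jobgroup_id) until a terminal state or max_tries.

    get_status is a callable returning the current status string, e.g.:
      lambda i: requests.get(f"{base}/v3/jobGroups/{i}/status",
                             auth=auth).json()   # endpoint path is an assumption
    """
    status = None
    for _ in range(max_tries):
        status = get_status(jobgroup_id)
        if is_terminal(status):
            return status
        time.sleep(interval)
    return status

# Example with a stubbed status source standing in for the API:
responses = iter(["Pending", "InProgress", "Complete"])
result = poll_until_done(lambda i: next(responses), 3, interval=0)
print(result)  # → Complete
```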