When you are ready to apply your recipe across your entire dataset, you run a job. When your recipe is finalized, you can schedule a job for regular execution, so that downstream stakeholders are assured of having fresh data.

Job Execution Process

A job is a complex set of tasks that ingests your data from its data sources and delivers your data and recipe to the selected running environment for execution.

A running environment is an execution engine designed for transforming large datasets based on a set of scripted steps. Depending on your deployment and the size of the job, more than one running environment may be available.

Output objects

A job is executed through an output object, which is required for every job. 

Tip: If an output object does not exist for the job you are trying to run, one is created for you.

An output object definition specifies where and how your results are generated, including the output location, the file format, any publishing actions, and whether a visual profile is produced.

Tip: A "job" encompasses multiple sub-jobs, which manage the processes of ingestion, conversion, transfer, transformation, profiling, and generating of results as needed to complete the job.

Job types

Jobs can be of the following types:

  1. Manual jobs, which you run on demand from the Transformer page or from Flow View.
  2. Scheduled jobs, which run automatically according to a schedule that you define.

NOTE: Both types of jobs require output objects. For any recipe, you can create different output destinations for manual or scheduled jobs.

Tip: Jobs can also be triggered using REST APIs, if you prefer to handle job scheduling outside of the application.
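As a rough illustration of API-triggered execution, the following Python sketch issues an HTTP request to start a job. The base URL, endpoint path, payload fields, and authentication header are placeholders only; consult your API reference for the actual endpoints and request bodies.

```python
import requests

# Hypothetical values for illustration; substitute your own API details.
API_BASE = "https://example.com/api/v1"   # assumed base URL
API_TOKEN = "your-access-token"           # assumed bearer token


def run_job(output_id: int) -> int:
    """Request a job run for the given output object and return the job id."""
    resp = requests.post(
        f"{API_BASE}/jobs",                          # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"outputObjectId": output_id},          # hypothetical payload
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    job_id = run_job(output_id=42)
    print(f"Started job {job_id}")
```

A call like this can be wrapped in an external scheduler of your choice, which is the typical reason for triggering jobs over REST instead of using in-product scheduling.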

Run Job to Generate Results

NOTE: Running a job consumes resources. Depending on your environment, resource consumption may cost money. Your project owner or workspace administrator may be able to provide guidance on resources and their costs.

To run a job right now, you can do either of the following:

  1. In the Transformer page, click Run.
  2. In Flow View, click the output to generate. In the right panel, click Run.

Tip: By default, a manual job generates a CSV file with visual profiling in the default output location, using the optimal running environment for the job size. In the Run Job page, you can define or update your output object and its publishing actions as needed.

For more information, see Generate Results.


Schedule Jobs

Through Flow View, you can create outputs for your scheduled destinations and define the schedule for when those outputs are generated. 

For more information, see Schedule Jobs.

Parameterize Your Data

A parameter is a storage object that can be defined to capture a variable, a pattern or wildcard, or a set of timestamp values. You can apply parameters to objects such as your imported datasets.

For example, if you have a set of files with parallel names stored in a single directory, you can create a dataset with parameters to capture all of them in a single dataset. Instead of unioning the files together (and re-unioning them whenever new files are added), you create a single imported dataset object that captures all of them; if new files added to the directory follow the same pattern, the dataset with parameters is updated automatically.
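The idea is similar to matching files with a wildcard pattern. The following Python sketch is not product functionality, only an illustration of the matching concept; the directory path and file-naming pattern are hypothetical.

```python
from pathlib import Path

# One wildcard pattern captures every file whose name follows the convention,
# including files added to the directory later. Paths are hypothetical.
source_dir = Path("/data/orders")


def matching_files(directory: Path, pattern: str = "orders-*.csv") -> list[Path]:
    """Return all files in the directory whose names match the pattern."""
    return sorted(directory.glob(pattern))


# Re-running the same pattern later picks up newly added files automatically,
# which is what saves you from re-unioning the dataset by hand.
for path in matching_files(source_dir):
    print(path.name)
```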

For more information, see Parameters.

Orchestrate Job Sequences

You can use plans to orchestrate sequences of job executions. A plan is a sequence of tasks executed in the application. In addition to flow tasks, which execute specific outputs, you can create HTTP tasks to message external systems or, if needed, to execute REST API endpoints within the application.
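For example, an HTTP task might post a short JSON message to an external webhook when a stage of the plan completes. The following Python sketch shows the kind of request such a task would issue; the webhook URL and payload fields are placeholders, not a documented product contract.

```python
import requests

# Placeholder webhook endpoint for an external system (e.g., a chat service).
WEBHOOK_URL = "https://hooks.example.com/services/T000/B000"  # assumed


def notify(plan_name: str, status: str) -> None:
    """Post a short status message to an external webhook."""
    resp = requests.post(
        WEBHOOK_URL,
        json={"text": f"Plan '{plan_name}' finished with status: {status}"},
        timeout=10,
    )
    resp.raise_for_status()


notify("nightly-refresh", "Complete")
```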

For more information, see Plans and Tasks.

Schedule Plans

Plans can be scheduled, too.

For more information, see Plans and Tasks.