The following sections summarize specific features and provide links to more detailed information on each.
These features can be applied to individual flows to simplify job execution.
Parameterization enables you to specify parameters that capture variability in your data source paths or names. For example, you can parameterize the names of folders in your filepaths to capture files within multiple folders. Or, you can parameterize your inputs to capture datasets named within a specific time range. Nested folders of data can be parameterized, too.
- dataset parameters: Parameterize the input paths to your data, allowing you to process data in parallel files and tables through the same flow.
- output parameters: Parameterize the output paths for your results.
- flow parameters: Define parameters that can be applied in your flows, including recipe steps.
Tip: You can apply overrides to any parameter at the flow level. These parameter override values are applied to any parameter that is referenced within the flow for any supported parameter type.
NOTE: Some of the following may not be available in your product edition.
Use regular expressions or Patterns in your paths or queries to sources to capture a broader set of inputs.
| Type | Description |
|---|---|
| Wildcard | Replace parts of your paths or queries with wildcards. |
| Datetime | Specify parameterized Datetime values in one of the supported formats. |
| Variable | Specify variable values as overrides during import, job execution, and output. |
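As an illustration of how wildcard and Datetime parameters select a broader set of inputs, the following Python sketch matches dated folders in file paths. The paths, folder layout, and matching logic are illustrative assumptions only, not the product's parameter syntax:

```python
from datetime import date, timedelta
from fnmatch import fnmatch

# Hypothetical file listing; in the product, these would be paths in your
# connected storage.
files = [
    "sales/2024-01-15/orders.csv",
    "sales/2024-01-16/orders.csv",
    "sales/2024-02-01/refunds.csv",
]

def match_datetime_range(paths, days_back, today):
    """Keep paths whose dated folder falls within the last `days_back` days."""
    cutoff = today - timedelta(days=days_back)
    kept = []
    for p in paths:
        folder = p.split("/")[1]          # e.g. "2024-01-15"
        d = date.fromisoformat(folder)
        if cutoff <= d <= today:
            kept.append(p)
    return kept

# Wildcard-style parameter: match any dated folder, orders files only.
wildcard_hits = [p for p in files if fnmatch(p, "sales/*/orders.csv")]

# Datetime-style parameter: only folders within the last 20 days of a
# fixed "today".
recent = match_datetime_range(files, days_back=20, today=date(2024, 2, 5))
```

The same idea applies whether the variability is in folder names, file names, or nested folders of data.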
Parameterization is available for the following:
NOTE: For relational data, parameterization is applied to custom SQL queries used to import the data. For more information, see Enable Custom SQL Query.
The scheduling feature, also known as Automator, enables you to schedule the execution of individual flows at a specified frequency. Frequencies can be specified in the Designer Cloud application through a simple interface or, if needed, in a modified form of cron syntax.
Tip: Automator is often used with parameterization to fully automate data preparation processes in Designer Cloud Enterprise Edition.
For more information, see Overview of Automator.
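To illustrate how a cron-style frequency maps to concrete run times, the following sketch evaluates a standard five-field cron expression. This is a simplified illustration (supporting only `*`, single numbers, and comma lists); it is not the modified cron syntax used by Designer Cloud:

```python
from datetime import datetime

def cron_field_matches(field, value):
    """Match one cron field: '*', a single number, or a comma list."""
    if field == "*":
        return True
    return value in {int(v) for v in field.split(",")}

def cron_matches(expr, when):
    """Check a datetime against a five-field cron expression:
    minute hour day-of-month month day-of-week (0 = Sunday)."""
    minute, hour, dom, month, dow = expr.split()
    return (cron_field_matches(minute, when.minute)
            and cron_field_matches(hour, when.hour)
            and cron_field_matches(dom, when.day)
            and cron_field_matches(month, when.month)
            and cron_field_matches(dow, (when.weekday() + 1) % 7))

# "At 02:00 every Monday": minute=0, hour=2, any day/month, weekday=1.
expr = "0 2 * * 1"
monday_2am = datetime(2024, 1, 8, 2, 0)    # 2024-01-08 is a Monday
tuesday_2am = datetime(2024, 1, 9, 2, 0)
```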
After a job has been launched, detailed monitoring permits you to track the progress of your job during all phases of execution. Status, job stats, inputs, outputs and a flow snapshot are available through the Designer Cloud application. For more information, see Overview of Job Monitoring.
After a job has completed, you can send email notifications to stakeholders based on the success or failure of the job.
NOTE: This feature must be enabled. See Workspace Settings Page.
These notifications are defined within Flow View. See Email Notifications Page.
Webhook notifications let you define outgoing HTTP messages to any REST API. The message form and body can be customized to include job execution metadata. For more information, see Create Flow Webhook Task.
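Since the message body of a webhook can be customized to include job execution metadata, the sketch below assembles one such payload. The field names (`jobId`, `status`, `flowName`) are illustrative assumptions, not the product's actual metadata schema:

```python
import json

def build_webhook_body(job):
    """Assemble an illustrative webhook payload; the field names here are
    examples only, not the product's actual metadata schema."""
    return json.dumps({
        "jobId": job["id"],
        "status": job["status"],
        "flowName": job["flow"],
        "text": f"Job {job['id']} for flow '{job['flow']}' finished: {job['status']}",
    })

body = build_webhook_body({"id": 1234, "status": "Complete", "flow": "orders"})
# An HTTP client (e.g. urllib.request) would POST `body` to the REST
# endpoint you configure for the webhook task.
```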
The Deployment Manager is a separate environment that can be enabled for the execution of production flows under limited access. Flows in development are exported from your default (Dev) instance and then imported to the Production instance, the Deployment Manager, where you can configure the periodic execution of the flow. For more information, see Overview of Deployment Manager.
Orchestration is a set of functionality that supports the scheduled execution of jobs across multiple flows. These jobs could be external processes, other flows, or even HTTP requests.
| Term | Description |
|---|---|
| plan | A plan is a sequence of tasks that are executed on one or more flows to which you have access. To orchestrate tasks, you build a plan. A plan can be scheduled for execution, triggered manually, or invoked via API. |
| trigger | A task is executed based on a trigger. A trigger is a condition under which a task is executed. In many cases, the trigger for a task is based on the schedule for the plan. |
| task | A task is a unit of execution in the platform. For example, one type of task is the execution of a flow, which executes all recipes in the flow, as well as the flow's upstream dependencies. |
A snapshot of the plan is captured, and the plan is executed against this snapshot. For more information on snapshots, see "Plan execution" below.
The following types of tasks are available.
| Task type | Description |
|---|---|
| flow task | An ad-hoc or scheduled execution of the transformations required to produce one or more selected outputs from a flow. |
| HTTP task | A request submitted to a third-party server as part of a plan run. |
- A plan or task cannot be shared.
- You cannot specify parameter overrides at the plan level.
- Plans inherit parameter values from the objects referenced in the plan's tasks.
If overrides are applied to flow parameters, those overrides are passed to the plan at the time of flow execution.
Tip: Prior to plan execution, you can specify parameter overrides at the flow level. These values are passed through to the plan for execution. For more information, see Manage Parameters Dialog.
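The pass-through of flow-level overrides can be pictured as a simple merge: override values replace defaults for any matching parameter name, and the resolved set is what the plan run sees. The parameter names below are hypothetical examples:

```python
def resolve_parameters(flow_defaults, flow_overrides):
    """Flow-level overrides replace default parameter values; the resolved
    set is passed through to the plan run. Names are illustrative only."""
    resolved = dict(flow_defaults)
    resolved.update(flow_overrides)
    return resolved

# "region" is overridden; "varDays" keeps its default.
params = resolve_parameters({"region": "us", "varDays": "7"},
                            {"region": "eu"})
```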
For this release:
- The only type of plan task that is supported is Run Flow.
- Tasks are defined in a linear, non-branching sequence based on the trigger.
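The linear, non-branching execution model can be sketched as follows: each task starts only after the previous one completes, and a failure halts the remaining sequence. This is a conceptual Python sketch, not product code:

```python
def run_plan(tasks):
    """Run tasks strictly in order; each task starts only after the
    previous one completes, and a failure halts the rest of the
    sequence. Illustrative sketch of linear plan execution."""
    results = []
    for name, task in tasks:
        try:
            task()
            results.append((name, "Complete"))
        except Exception:
            results.append((name, "Failed"))
            break  # later tasks are never started
    return results

def failing_http_task():
    raise RuntimeError("endpoint down")

ran = run_plan([
    ("flow task", lambda: None),
    ("http task", failing_http_task),
    ("second flow task", lambda: None),   # never starts
])
```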
You create a plan and schedule it using the following basic workflow.
- Create the plan. A plan is the container for definition of the tasks, triggers, and other objects. See Plans Page.
- In Plan View, you specify the objects that are part of your plan. See Plan View Page.
- Schedule: The schedule defines the set of triggers that queue the plan for execution.
- Trigger: A trigger defines the schedule and frequency at which the plan is executed. A plan can have multiple triggers (e.g. monthly versus weekly executions).
- Task(s): Next, you specify the tasks that are executed in order.
- Flow task: A flow task includes the specification of the flow to run and the outputs from the flow to generate.
NOTE: You can select the outputs from the recipe that you wish to generate. You do not need to generate all outputs.
NOTE: When a flow task is executed, the execution plan works back from the selected outputs to execute all of the recipes required to generate the output, including the upstream dependencies of those recipes.
- HTTP task: An HTTP task is a request issued when it is triggered from the application to a target endpoint. This request supports a variety of API methods. See Plan View for HTTP Tasks.
- Continue building tasks in a sequence.
- As needed, you can apply override values to any flow parameters that are included in the tasks of your plan. These overrides are applied during a plan run. For more information, see Manage Parameters Dialog.
- To test:
- Click Run now.
- To track progress, click the Runs link to open the Run Details page.
- The first task is executed and completes before the second task is started.
- Individual tasks are executed as separate jobs, which you can track through the Jobs page. See Jobs Page.
- When the plan has completed, you can verify the results through the Job details page. See Job Details Page.
- If you are satisfied with the plan definition and your test run, the plan will execute according to the scheduled trigger.
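The upstream-dependency resolution described for flow tasks (working back from the selected outputs to every recipe required to generate them) amounts to a reverse traversal of the flow's dependency graph. The flow and recipe names below are hypothetical:

```python
from collections import deque

# Hypothetical flow: each recipe maps to its upstream dependencies.
upstream = {
    "load_orders": [],
    "load_customers": [],
    "clean_orders": ["load_orders"],
    "join_customers": ["clean_orders", "load_customers"],
    "report": ["join_customers"],
}

def recipes_to_run(selected_outputs):
    """Work back from the selected outputs and collect every upstream
    recipe required to generate them, as a flow task does."""
    needed, queue = set(), deque(selected_outputs)
    while queue:
        recipe = queue.popleft()
        if recipe not in needed:
            needed.add(recipe)
            queue.extend(upstream[recipe])
    return needed
```

Selecting only `clean_orders` as an output would pull in just `load_orders`, while selecting `report` pulls in the entire chain.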
Through the Plan View page, you can configure the scheduled executions of the plan. Plan schedules are defined using triggers.
- These schedules are independent of schedules for individual flows.
- You cannot create schedules for individual tasks.
When a plan is triggered for execution, a snapshot of the plan is taken. This snapshot is used to execute the plan. Tasks are executed in the sequence listed in Plan View.
NOTE: Any subsequent changes to the flows, datasets, recipes, and outputs referenced in the plan's tasks can affect subsequent executions of the plan. For example, subsequent removal of a dataset in a flow referenced in a task can cause the task to fail to execute properly.
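The snapshot behavior for a single run can be pictured as a deep copy of the plan definition taken at trigger time: edits made after the trigger do not affect the in-flight run, though they do apply to subsequent runs. The plan structure below is a hypothetical illustration:

```python
from copy import deepcopy

plan = {"tasks": [{"type": "flow", "flow": "orders", "outputs": ["report"]}]}

# When the plan is triggered, execution runs against a snapshot, so later
# edits to the plan definition do not affect this run.
snapshot = deepcopy(plan)

# An edit made after triggering only affects subsequent runs.
plan["tasks"][0]["outputs"].append("audit")
```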
At the flow level, you can define webhooks and email notifications that are triggered based on the successful generation of outputs. When you execute a plan containing an output with one of these messages, the message is triggered and delivered to stakeholders.
NOTE: Webhook messages and email notifications cannot be triggered directly based on a plan's execution. However, you can create HTTP-based tasks to send messages based on a plan task's execution.
Tip: When a flow email notification is triggered through a plan, the internal identifier for the plan is included in the email.
See "Webhooks" and "Email notifications" above.
Enable the following setting:
Plan sharing, import, and export must also be enabled. For more information, see Workspace Settings Page.
The following flags must be enabled for the orchestration service to correctly function.
- You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Locate the following settings and verify that they are set to true:

```
"webapp.orchestrationWorkers.enabled": true,
"orchestration-service.enabled": true,
"orchestration-service.autoRestart": true,
```
You can choose to enable the following task types:
| Task type | Setting | Description |
|---|---|---|
| HTTP task | | When set to true, you can configure plan tasks to deliver a REST request over HTTP or HTTPS to a specified endpoint, including endpoints in the Designer Cloud Powered by Trifacta platform. |
| Flow task | | This feature is automatically enabled when the Plans feature is enabled. See above. These tasks execute a specific output on a selected flow. |
- Save your changes and restart the platform.
Logging information on plan execution is captured in orchestration-service.log. This log file can be downloaded as part of a Support Bundle. For more information, see Support Bundle Contents.
You can configure aspects of how this log captures service events. For more information, see Configure Logging for Services.