Operationalization refers to a general class of platform features that enable the repeated application of Trifacta® Wrangler Enterprise to production data. Whether deployed in a single flow or across all flows in your environment, operationalization features broaden the scope of wrangled data, simplify job execution, and enable these processes to run on a repeated or scheduled basis.

In the following sections, you can review short summaries of specific features and explore more detailed information on them.

Single-Flow Operations

These features can be applied to individual flows to simplify job execution.

Parameterization

Parameterization enables you to specify parameters that capture variability in your data source paths or names. For example, you can parameterize the names of folders in your filepaths to capture files within multiple folders. Or, you can parameterize your inputs to capture datasets named within a specific time range. Nested folders of data can be parameterized, too.

Parameter types:

  • Dataset parameters: Parameterize the input paths to your data, allowing you to process data in parallel files and tables through the same flow.
  • Output parameters: Parameterize the output paths for your results.
  • Flow parameters: Define parameters that can be applied in your flows, including recipe steps.

    Tip: You can apply overrides to any parameter at the flow level. These parameter override values are applied to any parameter that is referenced within the flow for any supported parameter type.

Parameter formats:

NOTE: Some of the following may not be available in your product edition.

  • Pattern: Use regular expressions or Trifacta patterns in your paths or queries to sources to capture a broader set of inputs.
  • Wildcard: Replace parts of your paths or queries with wildcards.
  • Datetime: Specify parameterized Datetime values in one of the supported formats.
  • Variable: Specify variable values as overrides during import, job execution, and output.


Parameterization is available for the following:

  • File systems:
    • Input: Date/time, Pattern, Variable
    • Output: Timestamp, Variable
  • Relational sources:
    • Input: Timestamp, Variable
    • Output: Timestamp, Variable

NOTE: For relational data, parameterization is applied to custom SQL queries used to import the data. For more information, see Enable Custom SQL Query.

For more information, see Overview of Parameterization.
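As an illustration of how Datetime and Wildcard parameters behave conceptually, the sketch below expands a date-parameterized path template into concrete paths and filters them with a shell-style wildcard. The `{yyyy}/{mm}/{dd}` placeholder syntax and the paths are invented for this example and are not the platform's actual parameter syntax.

```python
from datetime import date, timedelta
from fnmatch import fnmatch

# Hypothetical sketch: expand a Datetime parameter embedded in a path
# template into concrete paths for the last N days ending at `end`.
def expand_datetime_paths(template: str, days: int, end: date) -> list[str]:
    paths = []
    for offset in range(days):
        d = end - timedelta(days=offset)
        paths.append(
            template.replace("{yyyy}", f"{d.year:04d}")
                    .replace("{mm}", f"{d.month:02d}")
                    .replace("{dd}", f"{d.day:02d}")
        )
    return paths

paths = expand_datetime_paths("sales/{yyyy}/{mm}/{dd}/orders.csv", 3, date(2023, 5, 10))
# A Wildcard parameter behaves like a glob pattern against candidate paths.
matches = [p for p in paths if fnmatch(p, "sales/2023/05/*/orders.csv")]
```

All three expanded paths fall within May 2023, so the wildcard pattern matches every one of them.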

Scheduling

The scheduling feature, also known as Automator, enables you to schedule the execution of individual flows at a specified frequency. Frequencies can be specified through a simple interface in the Trifacta application or, if needed, in a modified form of cron syntax.

Tip: Automator is often used with parameterization to fully automate data preparation processes in Trifacta Wrangler Enterprise.

For more information, see Overview of Automator.
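To make the cron option concrete, the sketch below matches a standard five-field cron expression (minute, hour, day of month, month, day of week) against a timestamp. This models plain cron semantics for lists and wildcards only; the platform's modified cron syntax may differ, so treat this purely as an illustration of the concept.

```python
from datetime import datetime

# Minimal sketch of standard 5-field cron matching. Supports "*" and
# comma-separated value lists only; ranges and steps are omitted.
def cron_matches(expr: str, when: datetime) -> bool:
    fields = expr.split()
    values = [when.minute, when.hour, when.day, when.month, when.isoweekday() % 7]
    for field, value in zip(fields, values):
        if field != "*" and value not in {int(v) for v in field.split(",")}:
            return False
    return True

# "0 2 * * *" = every day at 02:00
cron_matches("0 2 * * *", datetime(2023, 5, 10, 2, 0))   # True
cron_matches("0 2 * * *", datetime(2023, 5, 10, 3, 0))   # False
```

A scheduler built on this idea would evaluate each trigger's expression once per minute and queue the flow for execution on a match.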

Job monitoring

After a job has been launched, detailed monitoring permits you to track the progress of your job during all phases of execution. Status, job statistics, inputs, outputs, and a flow snapshot are available through the Trifacta application. For more information, see Overview of Job Monitoring.

Email notifications

After a job has completed, you can send email notifications to stakeholders based on the success or failure of the job.

NOTE: This feature must be enabled. See Workspace Settings Page.

These notifications are defined within Flow View. See Email Notifications Page.

Webhooks

Webhook notifications let you define outgoing HTTP messages to any REST API. The message format and body can be customized to include job execution metadata. For more information, see Create Flow Webhook Task.
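Conceptually, a webhook task amounts to a POST of job metadata to a REST endpoint when the job finishes. The sketch below builds such a request; the URL, field names, and payload schema are assumptions for illustration, not the platform's actual message format.

```python
import json
import urllib.request

# Hypothetical sketch of a webhook delivery: POST job execution metadata
# to a REST endpoint. The payload fields below are invented for this example.
def build_webhook_request(url: str, job_id: int, status: str) -> urllib.request.Request:
    payload = {"jobId": job_id, "status": status}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request("https://example.com/hooks/trifacta", 1234, "Complete")
# urllib.request.urlopen(req) would deliver the notification.
```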


Deployment Manager

The Deployment Manager is a separate environment that can be enabled for the execution of production flows under limited access. Flows in development are exported from your default (Dev) instance and then imported into the Production instance (the Deployment Manager), where you can configure periodic execution of the flow. For more information, see Overview of Deployment Manager.


Orchestration

Orchestration is a set of functionality that supports the scheduled execution of jobs across multiple flows. These jobs could be external processes, other flows, or even HTTP requests.

Terms

  • Plan: A plan is a sequence of tasks that are executed on one or more flows to which you have access. To orchestrate tasks, you build a plan. A plan can be scheduled for execution, triggered manually, or invoked via API.
  • Trigger: A trigger is a condition under which a task is executed. In many cases, the trigger for a task is based on the schedule for the plan.
  • Task: A task is a unit of execution in the platform. For example, one type of task is the execution of a flow, which executes all recipes in the flow, as well as the flow's upstream dependencies.
  • Snapshot: When a plan is activated, a snapshot of the plan is created. The plan is executed against this snapshot. For more information on snapshots, see "Plan execution" below.

Limitations

  • A plan or task cannot be shared.
  • You cannot specify parameter overrides to be applied to plans specifically.
    • Plans inherit parameter values from the objects referenced in the plan's tasks.
    • If overrides are applied to flow parameters, those overrides are passed to the plan at the time of flow execution.

      Tip: Prior to plan execution, you can specify parameter overrides at the flow level. These values are passed through to the plan for execution. For more information, see Manage Parameters Dialog.

For this release:

  • The only type of plan task that is supported is Run Flow.
  • Tasks are defined in a linear, non-branching sequence based on the trigger.

Basic workflow

You create a plan and schedule it using the following basic workflow.

  1. Create the plan. A plan is the container for the definition of tasks, triggers, and other objects. See Plans Page.
  2. In Plan View, you specify the objects that are part of your plan. See Plan View Page.
    1. Schedule: The schedule defines the set of triggers that queue the plan for execution.
      1. Trigger: A trigger defines the schedule and frequency at which the plan is executed. A plan can have multiple triggers (e.g., separate monthly and weekly executions).
    2. Task(s): Next, you specify the tasks that are executed in order.
      1. A task includes the specification of the flow to run and the outputs from the flow to generate.

        NOTE: You can select the outputs from the recipe that you wish to generate. You do not need to generate all outputs.

        NOTE: When a task is executed, the execution plan works back from the selected outputs to execute all of the recipes required to generate the output, including the upstream dependencies of those recipes.

      2. Continue building tasks in a sequence.
  3. When you have finished, you must activate the plan so that the current version of the plan is executed at the scheduled times. Click Activate.
  4. To test:
    1. Click Run now.
    2. To track progress, click the Runs link.
    3. In the Run Details page, you can track the progress.
    4. The first task must execute and complete before the second task is started.
    5. Individual tasks are executed as separate jobs, which you can track through the Jobs page. See Jobs Page.
    6. When the plan has completed, you can verify the results through the Job details page. See Job Details Page.
  5. If you are satisfied with the plan definition and your test run, the plan executes according to its scheduled triggers.

Plan scheduling

Through the Plan View page, you can configure the scheduled executions of the plan. Plan schedules are defined using triggers.

  • These schedules are independent of schedules for individual flows.
  • You cannot create schedules for individual tasks.

Plan execution

When a plan is triggered for execution, a snapshot of the plan is taken. This snapshot is used to execute the plan. Tasks are executed in the sequence listed in Plan View. If a task fails, no subsequent tasks are executed.
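The execution semantics described above can be sketched as a sequential runner that halts on the first failure. The task names and functions below are illustrative, not platform APIs.

```python
# Sketch of plan execution: tasks run in the listed order, and a failed
# task halts the remainder of the plan.
def run_plan(tasks):
    results = {}
    for name, task in tasks:
        try:
            task()
            results[name] = "Complete"
        except Exception:
            results[name] = "Failed"
            break  # subsequent tasks are not executed
    return results

# Illustrative tasks: the second one fails, so the third never runs.
def ingest(): pass
def transform(): raise RuntimeError("upstream dataset missing")
def publish(): pass

results = run_plan([("ingest", ingest), ("transform", transform), ("publish", publish)])
```

Here `results` records "ingest" as complete and "transform" as failed, and "publish" is never attempted.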

Important notes:

NOTE: After a plan is activated, any subsequent changes to the flows, datasets, recipes, and outputs referenced in the plan's tasks can affect subsequent executions of the plan. For example, subsequent removal of a dataset in a flow referenced in a task can cause the task to fail to execute properly.

NOTE: When a specific flow in a plan is triggered for execution, another snapshot of the flow is taken. The job is executed against this snapshot. Since this snapshot contains images of the flow's objects, subsequent changes to the flow after the flow snapshot is taken do not affect jobs that are in progress.

At the flow level, you can define webhooks and email notifications that are triggered based on the successful generation of outputs. When you execute a plan containing an output with one of these messages, the message is triggered and delivered to stakeholders.

NOTE: Webhook messages and email notifications cannot be directly triggered based on a plan's execution. However, when a flow containing one of these messages is executed within a plan, the messages are triggered and delivered naturally.

Tip: When a flow email notification is triggered through a plan, the internal identifier for the plan is included in the email.


See "Webhooks" and "Email notifications" above.
