Through the Flow View page, you can access and manage all objects in your flow. For each imported dataset, recipe, or other object in your flow, you can perform a variety of actions to effectively manage flow development and job execution through a single page in the platform.

If you have enabled Deployment Manager, avoid making changes in Flow View on a Production instance of the platform.

  • Scheduling executions through Flow View in a Prod environment is not supported. Jobs must be executed through the APIs. See API Reference.
  • Some Flow View options may not be available in a Prod environment.
  • You should apply changes to your flow in the Dev instance and then re-deploy to Production. For more information, see Overview of Deployment Manager.
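As a sketch of the API-based approach, the snippet below builds the kind of JSON body typically sent to a job-execution endpoint. The endpoint path (`/v4/jobGroups`) and the `wrangledDataset` field name are assumptions here; verify both against the API Reference for your release.

```python
import json

# Assumed endpoint path -- confirm in the API Reference for your release.
API_URL = "https://example.com/v4/jobGroups"

def build_job_request(recipe_id: int) -> str:
    """Build a JSON body asking the platform to run the outputs of a recipe.

    The payload shape ({"wrangledDataset": {"id": ...}}) is an assumption
    based on common job-execution APIs; check your API Reference.
    """
    payload = {"wrangledDataset": {"id": recipe_id}}
    return json.dumps(payload)

body = build_job_request(7)
print(body)
```

In a Prod environment, an external scheduler (for example, cron) would POST this body to the endpoint with your authentication headers, rather than relying on Flow View scheduling.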

NOTE: If the displayed flow has been shared with you, some options are not available.

Flow View page

The imported datasets in the flow or reference datasets added to the flow are listed on the left side of the screen. Each dataset can have one or more associated recipes, which are used to transform the source data.

NOTE: Objects marked with a red dot indicate a problem with the object's configuration. Please select the object to begin investigating the error. Error information may be displayed in the right panel.



A recipe is a set of steps to transform source data into the results you desire.

For more information on these objects, see Object Overview.

Select an object from your flow to open an object-specific panel on the right side of the screen.

Tip: You can right-click any object in Flow View to see the same list of actions that appears in the right panel when you select the object.

Tip: Double-click any recipe to edit it. See Transformer Page.


Rename: Select the name of the object to rename it within the platform. This rename does not apply to the source of the object, if it exists elsewhere. 

Add Datasets: Click to add new datasets to the flow. Details are below.

From the flow's context menu:

Schedule Flow: Add a scheduled execution of the recipes in your flow.

Configure webhook tasks: You can define tasks to update third-party applications of the results of jobs executed from this flow. For more information, see Create Flow Webhook Task.

NOTE: Webhooks may need to be enabled in your environment. For more information, see Workspace Settings Page.
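To illustrate the receiving side, the following is a minimal sketch of a handler that a webhook task might call. The field names (`jobId`, `jobStatus`) are assumptions: the actual body is whatever you define in the webhook task's message template, so adjust the keys to match your configuration.

```python
import json

def handle_webhook(raw_body: str) -> str:
    """Parse a hypothetical webhook body and summarize the job result.

    The 'jobId' and 'jobStatus' keys are assumed; the real payload is
    determined by the message template configured in the webhook task.
    """
    event = json.loads(raw_body)
    return f"job {event['jobId']} finished with status {event['jobStatus']}"

print(handle_webhook('{"jobId": 42, "jobStatus": "Complete"}'))
```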

Share Flow: Collaborate with others on the same flow.

You can also send a copy to other users for separate work.

NOTE: When a flow containing one or more connections is shared, its connections are also shared. By default, credentials are included. If the sharing of credentials has been disabled, the new users must provide their own credentials for the shared connection. See Configure Sharing.

See Share Flow Dialog.

When a user is given access to a flow, all of the following actions are available to that user, except for editing details and deleting the flow.

Manage parameters: Create and manage flow parameters, as well as specify overrides for them. See Manage Parameters Dialog.

Manage email notifications: Configure types of jobs that generate success or failure emails and who receives the messages. See Manage Flow Notifications Dialog.

NOTE: This feature uses an SMTP email server to send messages. For more information on configuring the server, see Enable SMTP Email Server Integration.

NOTE: This feature may need to be enabled in your environment. For more information, see Workspace Settings Page.

Make a copy: Create a copy of the flow.

Export Flow: (Available to flow owner only) Export the flow for archive or transfer. For more information, see Export Flow.

Move to: Move the flow to a new or existing folder. See Manage Flows with Folders.

Edit Flow name and description:  (Available to flow owner only) Change the name and description of the flow.

Delete Flow:  (Available to flow owner only) Delete the flow.

Deleting a flow removes all recipes that are contained in the flow. If copies of these objects exist in other flows, they are not touched. Imported datasets are not deleted by this action.

Try new flow view/Go back to classic flow view: Switch between the new version of Flow View and classic Flow View. 

Administrators can disable access to the new version of Flow View. For more information, see Miscellaneous Configuration.

Add Datasets to Flow

From the Flow View page, you can add imported or reference datasets to your flow. These datasets are added as independent objects in the flow and can be joined, unioned, or referenced by other datasets in the flow.

Add datasets to current flow

For large relational datasets, you can monitor the import process through the Flow View page.

NOTE: This feature may require enablement in your deployment. See Configure JDBC Ingestion.

For more information, see Overview of Job Monitoring.

View for Imported Datasets

When you select an imported dataset, you can preview the data contained in it, replace the source object, and more from the right-side panel.

Imported Dataset view

Key Fields:


View for Datasets with Parameters

Flow View for any flow containing a dataset with parameters has some variations. For more information on these objects, see Overview of Parameterization.

Parameters Panel

In addition to the standard view of your flow, the Parameters panel contains information about the parameters that are applied in the flow. This panel shows:

Parameters Panel in Flow View

Variable Overrides:

The above information is useful for reviewing parameters and specifying overrides at execution time.

For each variable, the default value is applied unless an override value is specified. A variable can have an empty value.

NOTE: When you edit an imported dataset, if a variable is renamed, a new variable is created using the new name. Any override values assigned under the old variable name for the dataset must be re-applied. Instances of the variable and override values used in other imported datasets remain unchanged.

Variables are applied whenever:

To change the value that is applied when a job is executed, you can:

For more information on executing jobs via API, see API Workflow - Run Job on Dataset with Parameters.
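As a sketch of how overrides can be supplied at execution time through the API, the snippet below extends a job request body with per-variable override values. The `runParameters`/`overrides` structure shown here is an assumption; confirm the exact field names in the API workflow documentation for your release.

```python
import json

def build_override_request(recipe_id: int, overrides: dict) -> str:
    """Build a hypothetical job request that overrides variable values.

    The "runParameters" -> "overrides" -> "data" shape is an assumption
    modeled on key/value override lists; verify against the API docs.
    """
    data = [{"key": k, "value": v} for k, v in overrides.items()]
    payload = {
        "wrangledDataset": {"id": recipe_id},
        "runParameters": {"overrides": {"data": data}},
    }
    return json.dumps(payload)

print(build_override_request(7, {"region": "us-west"}))
```

At job execution, each override in the list replaces the variable's default value for that run only.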

Parameters tab

When you select a dataset with parameters in Flow View, you can review the parameters that have been specified for the selected dataset in the right panel.

Parameters tab in Flow View


View for Recipes

For each recipe, you can review or edit its steps, create references to the recipe, modify its outputs, or branch new recipes from it.

When you select a recipe:

Recipe view


Recipe tab

Preview the first steps in the recipe.

Key Fields:

Data tab

Preview the data as reflected by the recipe.

NOTE: To render this data preview, some of the data must be loaded, and all steps in the recipe must be executed to generate the preview. Some delays may be expected.

Key Fields:

Target tab

When a target has been assigned for this recipe, you can review its schema information in the Target tab. This tab appears only after a target has been assigned to the recipe.

To remove the current target, select Remove Target from the context menu.


View for Outputs

Associated with each recipe are one or more outputs, which are publishing destinations. Through outputs, you can execute and track jobs for the related recipe.

Destinations tab

The Destinations tab contains all configured destinations associated with the recipe. 

Destinations tab

Key Fields:  

For more information, see Run Job Page.

Scheduled destinations:

If a schedule has been defined for the flow, these destinations are populated with results whenever the schedule is triggered and the associated recipe is successfully executed. If any input datasets are missing, the job is not run.

NOTE: Flow collaborators cannot modify publishing destinations.

See Add Schedule Dialog.

For more information, see Overview of Automator.


Jobs tab

Jobs tab

Each entry in the Jobs tab identifies a job that has been queued for execution for the selected output. You can track the progress, success, or failure of execution.

Tip: When you hover the mouse over a job link, you can review details of the job in progress. For more information, see Overview of Job Monitoring.

When a job has finished execution you can review the results. Click the link to the job. For more information, see Job Details Page.


For a job, you can do the following:

View Results: Click to view the results of your completed job. For more information, see Job Details Page.

Cancel job: Select to cancel a job that is currently being executed.

Delete job: Delete the job from the platform.

Deleting a job cannot be undone.

NOTE: This feature may not be enabled in your instance of the platform. For more information, please contact your administrator. See Miscellaneous Configuration.

Download logs: Download the logs for the job. If the job is in progress, log information is likely to be incomplete.

Tip: When jobs fail, the downloaded package includes additional configuration files and service logs to assist in debugging job execution issues. For more information, see Support Bundle Contents.

View for References

When you select a recipe, you can choose to create a reference dataset off of that recipe. A reference dataset is a dataset that is a reference to the output generated from a recipe contained in another flow. Whenever the upstream recipe and its output data are changed, the results are automatically inherited through the reference to the reference dataset. 

NOTE: You cannot select or use a reference dataset until a reference has been created in the source flow from the recipe to use.

To create a reference dataset from a recipe, click the Paper Clip icon. The following options appear in the right panel.

Reference view

Key Fields:

Used In: Indicates the number of flows where the reference appears. If this number is greater than one, click More details to review the flows. See Dataset Details Page.


Add to Flow: Click to add the reference dataset to a new or existing flow.

Edit name and description: (Available to flow owner only) Change the name and description for the object.


Delete Reference Dataset: Remove the reference dataset from the flow.

Deleting a reference dataset in the source flow causes all references to it to be broken in the flows where it is referenced. These broken references should be fixed by swapping in new sources.

View for Unstructured Datasets

An unstructured dataset is an imported dataset that does not contain any initial parsing steps. All parsing steps must be added through recipes that are applied to the dataset. 

Tip: You can remove initial parsing during import or through the context menu for an imported dataset. See Initial Parsing Steps.

Unstructured Dataset view

Key Fields:

Data Preview: In the Data Preview window, you can see a small section of the data that is contained in the imported dataset. This window can be useful for verifying that you are looking at the proper data.

Tip: Click the preview to open a larger dialog, where you can select and copy data.

Type: Indicates where the data is sourced or the type of file.

File Size: Size of the file. Units may vary.

Location: Path to the location of the imported dataset.


View for Connections

For flows that require connections to source data, you can review the details of the connection, whether you created it or it was shared with you.

Select the imported dataset that uses the connection. Then, in the context panel, click the name of the connection.

Connections view

Key Fields:

Connection Type: For more information, see Connection Types.

Owner: User that owns the connection. This user can modify connection properties.

Server information: You can review information about the source to which the connection links.


Private - Connection is available for use only for specified users of the platform.

Public - Connection is available for all users.

For more information, see Share Connection Window.


Edit Connection: Select to modify the connection.

NOTE: For shared connections, you may only modify the username and password if they were not provided to you. All other fields are read-only.

Share...: Click to share the connection with other users.

NOTE: You can share connections that have been shared with you. You cannot modify their properties.

Tip: If groups have been enabled in your instance of the platform, you can share flows and connections to LDAP groups. For more information, see Configure Users and Groups.

See Share Connection Window.

View for Reference Datasets

A reference dataset is a reference to a recipe's outputs that has been added to a flow other than the one where the recipe is located.

NOTE: A reference dataset is a read-only object in the flow where it is referenced. You cannot select or use a reference dataset until a reference has been created in the source flow from the recipe to use. See View for Recipes above.

To add a reference dataset, you can:

From the source flow, select the reference object for a recipe. In the context panel, click Add to Flow....

Click Add Datasets from the main Flow View page and select one from a different flow.

View for referenced dataset in a new flow

NOTE: Reference datasets marked with a red dot no longer have a source dataset for them in the other flow. These upstream dependencies should be fixed. See Fix Dependency Issues.

When you select a reference dataset in flow view, the following are available in the right-hand panel.

Key Fields:

Source Flow: Flow that contains the dataset. Click the link to open the Flow View page for that dataset.


Add new Recipe: Add a new recipe for the object. If a recipe already exists for it, this new recipe is created as a branch in the flow.

Remove from Flow: Remove the reference dataset from the flow. The source dataset in the other flow is untouched.

Go to original reference: Open in Flow View the flow containing the original dataset for this reference.