In some situations, you may need to create a sequence of datasets, in which the output of one recipe becomes the input of another recipe.
Depending on your situation, you can apply one of the following solutions.
Within a flow, you can chain recipes together. For example, you might use the first recipe for cleansing and a second recipe for transforming. This method is useful when you apply multiple types of transformations to a single imported dataset within the same flow.
The output of the Cleanse recipe becomes the input of the Transform recipe.
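Conceptually, chained recipes behave like composed transformation functions: the output of the first step feeds directly into the second. The following Python sketch illustrates the cleanse-then-transform pattern; the function names and sample data are hypothetical illustrations, not part of the product's API.

```python
# Conceptual sketch of chained recipes. The output of the cleanse
# step is the input of the transform step. All names here are
# hypothetical, chosen only to illustrate the pattern.

def cleanse(rows):
    """First recipe: trim whitespace and drop empty records."""
    cleaned = [{k: v.strip() for k, v in row.items()} for row in rows]
    return [row for row in cleaned if any(row.values())]

def transform(rows):
    """Second recipe: derive a new column from the cleansed data."""
    return [dict(row, full_name=f"{row['first']} {row['last']}") for row in rows]

raw = [{"first": " Ada ", "last": "Lovelace"}, {"first": "", "last": ""}]
# Chained: the cleanse output becomes the transform input.
result = transform(cleanse(raw))
```

Because the second step consumes the first step's output directly, any change to the cleansing logic is automatically reflected downstream, which is the main benefit of chaining within a single flow.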
If you need to make the output of a recipe available in other flows, you can create a reference object. This reference is available in other flows that you control.
Click the Create Reference icon:
Create reference object
If the user running the job in flow #2 does not have permissions to access all of the upstream dependencies of the reference dataset, the job may fail. These dependencies include imported datasets and any connections.
If any of the above considerations are a concern, you can create an imported dataset from the job results of flow #1.
In the Job Details page, click the Output Destinations tab. For the generated output, select Create imported dataset from its context menu.
NOTE: When the new dataset is created, it is accessible only to the creator. Datasets can be shared with other collaborators. For more information, see Overview of Sharing.
In flow #2, you can create a parameterized dataset, which collects source data that varies by a parameter, such as a date or an index embedded in the filename. As long as the output of flow #1 follows the naming convention expected by the parameterized dataset in flow #2, you can run the job on fresh data on demand. For more information, see Overview of Parameterization.
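The matching step can be sketched in a few lines of Python. The filename pattern and example outputs below are illustrative assumptions, not a convention defined by the product; a parameterized dataset simply picks up every source file whose name fits the agreed-upon pattern.

```python
import re

# Hypothetical naming convention for flow #1 outputs: each run writes
# a file whose name embeds a run date. The pattern and filenames are
# illustrative assumptions, not product-defined values.
PATTERN = re.compile(r"^orders_\d{4}-\d{2}-\d{2}\.csv$")

outputs = ["orders_2023-01-01.csv", "orders_2023-01-02.csv", "summary.txt"]

# Only files that follow the convention would be collected by the
# parameterized dataset in flow #2; others are ignored.
matched = [name for name in outputs if PATTERN.match(name)]
```

As long as flow #1 keeps writing outputs that match the pattern, flow #2 sees each new file as fresh source data without any change to its configuration.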
See Job Details Page.