This page contains a set of tips for how to improve the overall performance of job execution.

Run jobs on the default running environment

When configuring a job, the product analyzes the size of your dataset to determine the best of the available running environments on which to execute the job. This environment is presented as the default option in the dialog. Unless you have a specific reason to do otherwise, accept the default suggestion.

Filter data early

If you know that you are deleting some rows and columns from your dataset, add these transformation steps early in your recipe. This reduction simplifies working with the content through the application and, at execution time, speeds the processing of the remaining valid data. Since you may execute your job multiple times before the recipe is finalized, early filtering also speeds your development process.


  • To keep rows: The following example keeps all rows that lack a value in the id column:
    Transformation Name: Filter rows
    Parameter: Condition: Is missing
    Parameter: Action: Keep matching rows

See Filter Data.
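The same principle applies in any data pipeline: every step after an early filter touches less data. The following sketch is plain Python, not the product's own transform syntax, and all names (the `debug_info` column, the predicate on `id`) are illustrative; adjust the predicate to match your own keep or delete condition.

```python
# Illustrative sketch: drop unneeded rows and columns as early as possible,
# so every subsequent step processes a smaller dataset.
rows = [
    {"id": None, "name": "a", "debug_info": "x"},
    {"id": 1, "name": "b", "debug_info": "y"},
    {"id": 2, "name": "c", "debug_info": "z"},
]

# Filter early: keep only rows that satisfy the condition, and drop the
# column that no later step needs.
kept = [
    {k: v for k, v in row.items() if k != "debug_info"}
    for row in rows
    if row["id"] is not None
]
# Later steps now see two narrow rows instead of three wide ones.
```

The equivalent late filter would carry the unused column and the extra rows through every intermediate step first, doing the same work on more data.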

Perform joins early

After you have filtered out unneeded rows and columns, perform your join operations. These steps bring your data together into a single, consistent dataset. Doing them early in the process reduces the chance that later changes to your join keys affect the results of your join operations. See Join Data.
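The ordering above can be sketched in plain Python. This is not the product's join syntax; the `orders` and `customers` datasets and the `cust_id` key are assumptions for illustration. The point is that the join runs after filtering, so the join keys are already in their final, cleaned form before any downstream steps depend on the joined result.

```python
# Illustrative sketch: join two already-filtered datasets on a shared key.
orders = [{"cust_id": 1, "amount": 50}, {"cust_id": 2, "amount": 75}]
customers = {1: "Alice", 2: "Bob"}  # lookup table keyed by the join key

# Inner join: keep only orders whose key exists in the lookup, and attach
# the matching customer name to each row.
joined = [
    {**order, "name": customers[order["cust_id"]]}
    for order in orders
    if order["cust_id"] in customers
]
```

If a later recipe step instead reformatted `cust_id` after the join was defined, the join could silently stop matching; joining early, on cleaned keys, avoids that class of surprise.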

Perform unions late

Union operations should generally be performed later in the recipe, which reduces the chance that changes to the union operation, including dataset refreshes, affect the rest of the recipe and the output.
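A minimal sketch of this ordering, again in plain Python rather than the product's syntax (the `transform` function and the `jan`/`feb` datasets are hypothetical stand-ins for a recipe's per-dataset steps):

```python
# Illustrative sketch: fully transform each source dataset first, then
# union (concatenate) the results as the final step.
def transform(rows):
    # Stand-in for the recipe's per-dataset transformation steps.
    return [{**r, "amount": r["amount"] * 2} for r in rows]

jan = [{"amount": 10}]
feb = [{"amount": 20}]

# Union late: each source is complete before being combined, so refreshing
# one source only reruns that source's steps, not the steps after the union.
combined = transform(jan) + transform(feb)
```

If the union were instead placed first, any refresh of one source dataset would invalidate every step downstream of the union, including work that only concerned the other sources.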