How Sampling Works

To prevent overwhelming the client or significantly impacting performance, the application generates one or more samples of the data for display and manipulation in the client application.

NOTE: Generated samples are created by executing jobs on the applicable running environment. Quick Scan samples are executed in the Photon running environment.

When a dataset is first created, a background job begins to generate a sample using the first set of rows of the dataset. This initial data sample is usually very quick to generate, so that you can get to work right away on your transformations.
- The default sample is the initial sample.
- If the source data is larger than 10MB in size, a random sample is automatically generated for you when the recipe is first loaded in the Transformer page.
- The initial sample is selected by default. When the automatic random sample has finished generation, it can be manually selected for display.
- If the recipe is a child recipe, then the Initial Data sample indicates the selected sample of the parent recipe.
- If your source of data is a directory containing multiple files, the initial sample for the combined dataset is generated from the first set of rows in the first filename listed in the directory.
The maximum number of files in a directory that can be read for the initial sample is limited by a configurable parameter for performance reasons.
- For more information, see Workspace Settings Page.
If you are wrangling a dataset with parameters, the initial sample loaded in the Transformer page is taken from the first matching dataset.
If the matching file is a multi-sheet Excel file, the sample is taken from the first sheet in the file.
- By default, each initial sample is either:
- 10 MB in size
- Limited by the maximum number of files
- The entire dataset
- To change the sample size, see Change Recipe Sample Size.
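The initial-sample rules above (first rows, stopping at a size cap, a file-count cap, or the end of the dataset) can be sketched as follows. This is an illustrative sketch only; the function name and the exact limits are assumptions, not part of the product:

```python
import os

MAX_SAMPLE_BYTES = 10 * 1024 * 1024  # illustrative 10 MB cap from the rules above
MAX_FILES = 50                       # illustrative file-count cap

def initial_sample(paths):
    """Collect rows from the first files/rows of a dataset until a cap is hit.

    A sketch of the documented behavior: read the first set of rows, stopping
    at the size cap, at the file-count limit, or at the end of the dataset.
    """
    sample, used = [], 0
    for path in sorted(paths)[:MAX_FILES]:   # first files listed in the directory
        with open(path, "r", encoding="utf-8") as f:
            for line in f:
                used += len(line.encode("utf-8"))
                if used > MAX_SAMPLE_BYTES:
                    return sample            # hit the size cap
                sample.append(line.rstrip("\n"))
    return sample                            # entire dataset fit within the caps
```

If the dataset fits under both caps, the "sample" is simply the entire dataset, matching the third case in the list above.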
Additional samples can be generated from the context panel on the right side of the Transformer page. Sample jobs are independent job executions. When a sample job succeeds or fails, a notification is displayed for you.
As you develop your recipe, you might need to take new samples of the data. For example, you might need to focus on the mismatched or invalid values that appear in a single column. Through the Transformer page, you can specify the type of sample that you wish to create and initiate the job to create the sample. This sampling job occurs in the background.
You can create a new sample at any time. When a sample is created, it is stored within your storage directory on the backend datastore.
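Random samples over data too large to hold in memory are commonly taken with a single-pass technique such as reservoir sampling. The sketch below is a generic illustration of that technique, not the product's actual algorithm:

```python
import random

def reservoir_sample(rows, k, seed=None):
    """Keep a uniform random sample of k rows from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, row in enumerate(rows):
        if i < k:
            reservoir.append(row)          # fill the reservoir with the first k rows
        else:
            # Replace an existing element with probability k/(i+1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = row
    return reservoir
```

Because each row is examined exactly once, this kind of sampling runs as a background job over the full dataset without loading it all at once.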
NOTE: The Initial Data sample contains raw data from the source. Any generated sample is stored in JSONLines format with additional metadata on the sample. These different storage formats can result in differences between initial and generated sample sizes.
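JSONLines stores one JSON object per line, which is why a generated sample (rows plus per-sample metadata) occupies more space than the same raw source rows. A hypothetical sample file might look like the following; the field names and layout are illustrative assumptions, not the product's actual schema:

```python
import json

rows = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
metadata = {"sample_type": "random", "row_count": len(rows)}  # illustrative fields

# Write: one JSON object per line, with the metadata record first
with open("sample.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(metadata) + "\n")
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Read back: each line parses independently as JSON
with open("sample.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
```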
For more information on creating samples, see Samples Panel.
NOTE: When a flow is shared, its samples are shared with other users. However, if those users do not have access to the underlying files that back a sample, they do not have access to the sample and must create their own.
When a sample is generated, it is stored in the default storage layer in the jobrun directory, assigned to the user who initiated the sample. For more information, see Overview of Storage.
Changing sample sizes
If needed, you can change the size of samples that are loaded into the browser for your current recipe. You may need to reduce these sizes if you are experiencing performance problems or memory issues in the browser. For more information, see Change Recipe Sample Size.
Important notes on sampling
- Depending on the running environment, sampling jobs may incur costs. These costs may vary between the Photon and clustered running environments, depending on the type of sample and the cost of job execution.
- When sampling from compressed data, the data is uncompressed and then expanded. As a result, the sample size reflects the uncompressed data.
- Changes to preceding steps that alter the number of rows or columns in your dataset can invalidate the current sample, which means that the sample is no longer a valid representation of the state of the dataset in the recipe. In this case, the application automatically switches you back to the most recently collected sample that is currently valid. Details are below.
- To cancel an in-progress sampling job: locate the job in the Samples panel and click X.
- To review sampling jobs: click the Job History icon in the left nav bar and select Sample jobs. For more information, see Sample Jobs Page.
- Some advanced sampling options are available only when the sample is executed as a scan across the full dataset.
- Undo/redo do not change the sample state, even if the sample becomes invalid.
- When a new sample is generated, any Sort transformations that have been applied previously must be re-applied. Depending on the type of output, sort order may not be preserved. See Sort Rows.
- Samples taken from a dataset with parameters are limited to a maximum of 50 files when executed on the Photon running environment. You can modify parameters as they apply to sampling jobs. See Samples Panel.
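The effect of sampling uncompressed data (noted above) can be seen with a quick generic check: the bytes read for a sample correspond to the decompressed stream, which may be much larger than the compressed file on disk. This gzip example is an illustration of the general principle, not product behavior:

```python
import gzip

raw = b"col1,col2\n" + b"1,2\n" * 10_000        # highly repetitive CSV rows
compressed = gzip.compress(raw)

# Sampling reads the decompressed stream, so the sample size tracks len(raw),
# not the (much smaller) len(compressed) stored on disk.
decompressed = gzip.decompress(compressed)
print(len(compressed) < len(decompressed))  # True for repetitive data like this
```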
With each step that is added or modified in your recipe, the application re-evaluates whether the current sample remains valid.
For more information on sample types, see Sample Types.