For more information, see Overview of Job Monitoring.
Filter by type:
Click one of the pre-defined filters to show datasets of the following types:
- All: All objects of the selected type to which you have access.
- Owned by me: All objects of the selected type that you own.
- Shared with me: All objects of the selected type that have been shared with you.
The dataset list includes the following columns:
- Name: Name of the object.
- In Flows: Count of flows in which the object is in use.
- Source: Flow or datastore where the object is located.
- Last Updated: Timestamp of the last time that the object was modified.
From the context menu for a dataset, you can select the following options:
- Details: Review details about the dataset. See Dataset Details Page.
- Preview: Inspect a preview of the dataset.
NOTE: Preview is not available for binary format sources.
- Use in new Flow: (Imported dataset only) Create a new flow and immediately begin wrangling the dataset. This step also creates a recipe in the flow.
- Add to Flow: Add the dataset to a new or existing flow.
- Make a copy: Create a copy of the imported dataset. This option is not available for reference datasets.
- Edit name and description: Change the name and description of the dataset.
- Edit data settings: If the source of the imported dataset required conversion to an internally supported format, you can modify settings related to that conversion process. For more information, see File Import Settings.
Tip: This setting applies primarily to binary file formats, such as PDF and Excel, or file formats that may require additional steps to convert into tabular data, such as JSON.
- Delete Dataset: Delete the dataset. Deleting a dataset cannot be undone.
- Refresh Dataset: If available, this option refreshes the dataset's metadata with the latest source schema.
NOTE: When a dataset is refreshed, all samples associated with the dataset are deleted, whether or not the dataset has changed. Samples must be recreated in their recipes.
NOTE: If you attempt to refresh the schema of a parameterized dataset based on a set of files, only the schema of the first file is checked for changes. If changes are detected, the other files are assumed to contain those changes as well. As a result, changes in later files may go undetected, which can lead to data corruption in the flow.
For more information, see Overview of Schema Management.
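The first-file limitation described in the note above can be illustrated with a small, vendor-neutral sketch (the file names and `header` helper are hypothetical, not part of the product): a refresh that inspects only the first file's header reports no change, while a full scan reveals schema drift in a later file.

```python
import csv
import io

# Hypothetical parameterized dataset: three files matched by one pattern.
# The third file has drifted: it gained a "currency" column.
files = {
    "sales-2023-01.csv": "id,amount\n1,10\n",
    "sales-2023-02.csv": "id,amount\n2,20\n",
    "sales-2023-03.csv": "id,amount,currency\n3,30,USD\n",
}

def header(text):
    """Return the header row of a CSV payload as a list of column names."""
    return next(csv.reader(io.StringIO(text)))

names = sorted(files)
baseline = header(files[names[0]])  # schema of the first file only

# A first-file-only refresh sees no change and assumes all files match.
first_file_changed = header(files[names[0]]) != baseline  # False

# Checking every file, however, exposes the drift in the third file.
drifted = [name for name in names if header(files[name]) != baseline]
print(first_file_changed)  # False
print(drifted)             # ['sales-2023-03.csv']
```

The point is not the helper itself but the gap between the two checks: the refresh reports "no change" even though one file no longer matches the expected schema, which is exactly how later-file changes go undetected.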