NOTE: For file-based sources, Trifacta® expects that each row of data in the import file is terminated with a consistent newline character, including the last one in the file.
For single files lacking this final newline character, the final record may be dropped.
- For multi-file imports, if the final record of a file lacks a newline, that record may be merged with the first record of the next file and then dropped in the Trifacta Photon running environment.
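As a pre-flight check, the final-newline requirement above can be verified before import. This is a hypothetical helper, not part of Trifacta:

```python
import io

def ends_with_newline(path):
    """Return True if the file's last byte is a newline.

    Hypothetical helper illustrating the final-newline requirement;
    Trifacta does not expose such a check itself.
    """
    with open(path, "rb") as f:
        f.seek(0, io.SEEK_END)
        if f.tell() == 0:
            return True  # empty file: no record to drop
        f.seek(-1, io.SEEK_END)
        return f.read(1) in (b"\n", b"\r")

def ensure_final_newline(path):
    """Append a newline if the last record lacks one."""
    if not ends_with_newline(path):
        with open(path, "ab") as f:
            f.write(b"\n")
```

Running `ensure_final_newline` on each file before upload avoids the dropped-record behavior described above.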
NOTE: To be able to import datasets from the base storage layer, your user account must include the appropriate permissions on that storage layer.
NOTE: An imported dataset requires about 15 rows to properly infer column data types and the row, if any, to use for column headers.
File and path limitations:
- The colon character (:) cannot appear in a filename or a file path.
- Filenames cannot begin with special characters such as dot (.) or underscore (_).
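The limitations above can be sketched as a simple validity check. The function name is hypothetical, and the rules Trifacta actually enforces may be broader than this sketch:

```python
def is_valid_import_name(filename):
    """Hypothetical check mirroring the file and path limitations above."""
    if ":" in filename:
        return False  # colon not allowed anywhere in a filename or path
    if filename.startswith((".", "_")):
        return False  # leading dot or underscore not allowed
    return True
```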
1. Connect to sources
NOTE: Compressed files are recognized and can be imported based on their file extensions.
Upload: Trifacta® can also load files from your local file system.
Tip: You can drag and drop files from your desktop to upload them.
NOTE: When you upload an updated version of a previously uploaded file, the new file is stored as a separate upload altogether. In your flow, you must swap out the old dataset to point to the new one.
HDFS: If connected to a Hadoop cluster, you can select file(s) or folders to import. See HDFS Browser.
Hive: If connected to a Hive instance, you can load datasets from individual tables within the set of Hive databases. See Hive Browser.
S3: If connected to an S3 instance, you can browse your S3 buckets to select source files.
Tip: For HDFS and S3, you can select a folder, which imports each file within the directory as a separate dataset.
See S3 Browser.
Redshift: If connected to a Redshift data warehouse, you can import sources from the connected database. See Redshift Browser.
WASB: If enabled, you can import data into your Azure deployment from WASB. For more information, see WASB Browser.
ADL: If enabled, you can import data into your Azure deployment from ADLS Gen1. See ADLS Gen1 Browser.
ADLS Gen2: If enabled, you can import data into your Azure deployment from ADLS Gen2. See ADLS Gen2 Browser.
Alation: If connected to Alation, you can search for and import Hive tables as imported datasets. For more information, see Using Alation.
Waterline: If connected to Waterline, you can search for and import datasets through the data catalog. For more information, see Using Waterline.
Databases: If connected to a relational datastore, you can load tables or views from your database. See Database Browser.
NOTE: For long-loading relational sources, you can monitor progress through each stage of ingestion. After these sources are ingested, subsequent steps to import and wrangle the data may be faster. For more information, see Configure JDBC Ingestion.
For more information, see Overview of Job Monitoring.
For more information on the supported input formats, see Supported File Formats.
New/Edit: Click to create or edit a connection. By default, the displayed connections support import.
Search: Enter a search term to locate a specific connection.
NOTE: This feature may be disabled in your environment. For more information, contact your Trifacta administrator.
2. Add datasets
When you have found your source directory or file:
You can hover over the name of a file to preview its contents.
NOTE: Preview may not be available for some sources, such as Parquet.
Click the Plus icon next to the directory or filename to add it as a dataset.
Tip: You can import multiple datasets at the same time. See below.
Excel files: Click the Plus icon next to the parent workbook to add all of the worksheets as a single dataset, or you can add individual sheets as individual datasets. See Import Excel Data.
If custom SQL query is enabled, you can click Create Dataset with SQL to enter a customized SQL statement to pre-filter the table within the database to include only the rows and columns of interest.
Through this interface, it is possible to enter SQL statements that can delete data, change table schemas, or otherwise corrupt the targeted database. Please use this feature with caution.
For more information, see Create Dataset with SQL.
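As an illustration, a pre-filtering statement entered in this dialog might look like the following. The table and column names are invented, and, given the warning above, statements should be kept read-only (SELECT):

```python
# Hypothetical pre-filtering query for Create Dataset with SQL.
# Table and column names are examples only; keeping the statement a
# read-only SELECT avoids modifying the source database.
CUSTOM_SQL = """
SELECT order_id, customer_id, order_total
FROM orders
WHERE order_date >= '2020-01-01'
"""
```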
If parameterization has been enabled, you can apply parameters to the source paths of your datasets to capture a wider set of sources. Click Create Dataset with Parameters. See Create Dataset with Parameters.
3. Configure selections
When a dataset has been selected, the following fields appear on the right side of the screen. Modify as needed:
- Dataset Name: This name appears in the interface.
- Dataset Description: You may add an optional description that provides additional detail about the dataset. This information is visible in some areas of the interface.
Tip: Click the Eye icon to inspect the contents of the dataset prior to importing.
You can select a single dataset or multiple datasets for import.
You can modify settings used during import for individual files. In the card for an individual dataset, click Edit Settings.
NOTE: In some cases, there may be discrepancies between row counts in the previewed data versus the data grid after the dataset has been imported, due to rounding in row counts performed in the preview.
- Per-file encoding: By default, Trifacta attempts to interpret the encoding used in the file. In some cases, the data preview panel may contain garbled data, due to a mismatch in encodings. In the Data Preview dialog, you can select a different encoding for the file. When the correct encoding is selected, the preview displays the data as expected.
For more information on supported encodings, see Configure Global File Encoding Type.
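The mismatch described above can be reproduced in a few lines. This sketch only shows why selecting the correct per-file encoding turns a garbled preview back into readable data:

```python
# Bytes written as Latin-1 look garbled when decoded as UTF-8, but
# read cleanly once the matching encoding is selected.
raw = "café,münchen".encode("latin-1")

garbled = raw.decode("utf-8", errors="replace")  # wrong encoding: replacement chars
correct = raw.decode("latin-1")                  # matching encoding: clean text
```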
- Detect structure: By default, Trifacta attempts to interpret the structure of your data during import. This structuring attempts to apply an initial tabular structure to the dataset.
- Unless you have specific problems with the initial structure, you should leave the Detect structure setting enabled. Recipes created from these imported datasets automatically include the structuring as initial, hidden steps. These steps cannot be edited, although you can remove them through the Recipe panel. See Recipe Panel.
- When detecting structure is disabled, imported datasets whose schema has not been detected are labeled as unstructured datasets. When recipes are created for these unstructured datasets, the structuring steps are added to the recipe and can be edited as needed.
- For more information, see Initial Parsing Steps.
- Remove special characters from column names: When selected, characters that are not alphanumeric or underscores are stripped, and space characters are converted to underscores.
Tip: This feature matches the column renaming behavior in Release 5.0 and earlier.
For more information, see Sanitize Column Names.
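A rough approximation of this sanitization, assuming the behavior described above (spaces become underscores, other disallowed characters are stripped); Trifacta's exact rules may differ:

```python
import re

def sanitize_column_name(name):
    """Approximate the column-name sanitization described above:
    spaces become underscores; other non-alphanumeric, non-underscore
    characters are stripped."""
    name = name.replace(" ", "_")
    return re.sub(r"[^0-9A-Za-z_]", "", name)
```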
- Column data type inference: You can choose whether or not to apply Trifacta type inference to your individual dataset.
- In the preview panel, you can see the data type that is to be applied after the dataset is imported. This data type may change depending on whether column data type inference is enabled or disabled for the dataset.
To enable Trifacta type inference, select the Column Data Type Inference checkbox.
Tip: To see the effects of Trifacta type inference, you can toggle the checkbox and review the data types listed at the top of individual columns. To override an individual column's data type, click the data type name and select a new value.
- You can configure the default use of type inference at the individual connection level. For more information, see Create Connection Window.
- For schematized sources that do not require connections, such as uploaded Avro files, the default setting is determined by the global setting for initial type inference. For more information, see Configure Type Inference.
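As a toy analogy for inferring a column's data type from sample rows (loosely mirroring the roughly 15-row sampling noted earlier), not Trifacta's actual algorithm:

```python
def infer_type(values):
    """Guess a column type from sample string values: Integer if every
    value parses as an int, Decimal if every value parses as a float,
    otherwise String. A deliberately simplified illustration."""
    def all_parse(cast):
        try:
            for v in values:
                cast(v)
            return True
        except ValueError:
            return False

    if all_parse(int):
        return "Integer"
    if all_parse(float):
        return "Decimal"
    return "String"
```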
4. Import selections
If you have selected a single dataset for import:
- To immediately wrangle it, click Import & Wrangle. The dataset is imported. A recipe is created for it, added to a flow, and loaded in the Transformer page for wrangling. See Transformer Page.
- To import the dataset, click Import. The imported dataset is created. You can add it to a flow and create a recipe for it later. See Library Page.
You can import multiple datasets from multiple sources at the same time. In the Import Data page, continue selecting sources, and additional dataset cards are added to the right panel.
NOTE: If you are importing from multiple files at the same time, the files are not necessarily read in a regular or predictable order.
NOTE: When you import a dataset with parameters from multiple files, only the first matching file is displayed in the right panel.
In the right panel, you can see a preview of each dataset and make changes as needed.
If you have selected multiple datasets for import:
- To import the selected datasets, click Import Datasets. The imported datasets are created. You can begin working with these imported datasets now or at a later time. If you are not wrangling the datasets immediately, the datasets you just imported are listed at the top of the Library page. See Library Page.
- To import the selected datasets and add them to a flow:
- Click the Add Dataset to a Flow checkbox.
- Click the textbox to see the available flows, or start typing a new name.
- Click Import & Add to Flow.
Create or select the flow to which to add them:
- Filter by Type: Click one of the tabs in the dialog to display only the applicable flows.
- Search: Start typing letters to filter the list of flows.
- Create new flow: Enter a name and description for the new flow to which to add the datasets.
- To add the datasets to the selected flow, click Add.
- The datasets are imported, and the associated recipes are created. These datasets and recipes are added to the selected flow.
- For any dataset that has been added to a flow, you can review and perform actions on it. See Flow View Page.
- To remove a dataset from import, click the X in the dataset card.