...
- Acquire the target URL for the datastore through the application or through the datastore itself. Example URLs:

HDFS (file):

```
hdfs:///user/warehouse/campaign_data/d000001_01.csv
```

S3 (directory):

```
s3:///3fad-demo/data/biosci/source/
```
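The two forms matter later in this procedure: file URLs select a single file automatically, while directory URLs let you choose among their contents. As a minimal sketch (the `classify_source_uri` helper is hypothetical, and the trailing-slash convention is inferred from the examples above), you might distinguish the two forms like this:

```python
from urllib.parse import urlparse

def classify_source_uri(uri: str) -> str:
    """Hypothetical helper: classify a datastore URI as a file or directory source.

    Assumes the conventions shown in the examples above: an hdfs:// or s3://
    scheme, with a trailing slash indicating a directory.
    """
    parsed = urlparse(uri)
    if parsed.scheme not in ("hdfs", "s3"):
        raise ValueError(f"Unexpected scheme: {parsed.scheme!r}")
    return "directory" if uri.endswith("/") else "file"

print(classify_source_uri("hdfs:///user/warehouse/campaign_data/d000001_01.csv"))  # file
print(classify_source_uri("s3:///3fad-demo/data/biosci/source/"))                  # directory
```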
- Navigate the browser to the appropriate URL in the application. The following example applies to the HDFS file example from above. The path must be preceded by the base URL for the platform. For more information, see API - UI Integrations. A sketch for assembling this URL programmatically follows the notes below.

```
<base_url>/import/data?uri=hdfs:///user/warehouse/campaign_data/d000001_01.csv
```
- For file-based URLs, the file is selected automatically.
- For directory-based URLs, you can select which files to include through the browser.
- Click Add Datasets to a Flow. Add the dataset to an existing flow, or create a new one for it.
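As a minimal sketch of the navigation step above (assuming only the `<base_url>/import/data?uri=` pattern shown there; the base URL value and the helper name are placeholders), the import URL can be assembled as follows:

```python
from urllib.parse import quote

# Hypothetical helper: build the browser URL for importing a datastore URI,
# following the <base_url>/import/data?uri=<datastore_uri> pattern shown above.
def build_import_url(base_url: str, datastore_uri: str) -> str:
    # Percent-encode the datastore URI as a precaution against special
    # characters in the path; the simple examples above pass through unchanged.
    return f"{base_url.rstrip('/')}/import/data?uri={quote(datastore_uri, safe=':/')}"

# Example with a placeholder base URL:
print(build_import_url(
    "http://example.com:3005",
    "hdfs:///user/warehouse/campaign_data/d000001_01.csv",
))
# -> http://example.com:3005/import/data?uri=hdfs:///user/warehouse/campaign_data/d000001_01.csv
```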
- After the datasets have been imported, open the flow in which your import is located. For each dataset that you wish to execute, do the following in the Flow View page:
- Click the icon for the dataset.
- From the URL, retrieve the identifiers for the flow and the dataset. These values are needed for later execution through the command line interface.
Example:
| Dataset URL | flowId | datasetId |
|---|---|---|
| http://example.com:3005/flows/31#dataset=186 | 31 | 186 |
The flowId is consistent across all datasets that you imported through the above steps.
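As a minimal sketch (assuming only the `/flows/<flowId>#dataset=<datasetId>` pattern shown in the example above; the helper name is hypothetical), these identifiers can be pulled out of the URL programmatically:

```python
import re

# Hypothetical helper: extract flowId and datasetId from a dataset URL of the
# form .../flows/<flowId>#dataset=<datasetId>, as shown in the example above.
def parse_dataset_url(url: str) -> tuple[int, int]:
    match = re.search(r"/flows/(\d+)#dataset=(\d+)", url)
    if match is None:
        raise ValueError(f"URL does not match the expected pattern: {url}")
    flow_id, dataset_id = map(int, match.groups())
    return flow_id, dataset_id

print(parse_dataset_url("http://example.com:3005/flows/31#dataset=186"))
# -> (31, 186)
```

The returned pair can then be supplied wherever the command line interface expects the flow and dataset identifiers.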
- You can open the datasets and wrangle them as needed.
- Complete any required actions from within your source application.
...