This section describes how you can interact with your ADLS environment through the platform.
The platform can use ADLS for the following reading and writing tasks:
In the platform, ADLS is accessed through the ADLS browser. See ADLS Gen1 Browser.
NOTE: When the platform executes a job on a dataset, the source data is untouched. Results are written to a new location, so that no data is disturbed by the process.
Read/Write Access: Your HDI administrator must configure read/write permissions to locations in ADLS. Please see the ADLS documentation.
Your HDI administrator should provide a place or mechanism for raw data to be uploaded to your HDI datastore.
Your HDI administrator should provide a writeable home output directory for you, which you can review. See Storage Config Page.
Depending on the security features you've enabled, the technical methods by which the platform accesses ADLS may vary. For more information, see Enable ADLS Gen1 Access.
Your HDI administrator should provide raw data, or locations and access for storing raw data, within ADLS. All users should have a clear understanding of the folder structure within ADLS, including where each individual can read from and where they can write their job results.
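The exact layout is defined by your HDI administrator; as an illustration only, a per-user convention might look like the following (hypothetical paths, not defined by the product):

```
adl://<store>.azuredatalakestore.net/
├── data/raw/            # admin-provisioned location for uploading raw data
└── users/<user-id>/
    └── output/          # per-user writeable home output directory for job results
```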
NOTE: The platform does not modify source data in ADLS. Sources stored in ADLS are read without modification from their source locations, and sources that are uploaded to the platform are stored in /trifacta/uploads, where they remain unchanged.
You can create a dataset from one or more files stored in ADLS.
You can parameterize your input paths to import source files as part of the same imported dataset. For more information, see Overview of Parameterization.
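As a sketch of the idea, the following Python snippet shows how a wildcard in a parameterized input path can group several source files into a single imported dataset. The path layout and file names are invented for illustration:

```python
from fnmatch import fnmatch

# Hypothetical parameterized input path: the wildcard stands in for a
# varying date segment, so all matching files import as one dataset.
pattern = "/data/sales/transactions-*.csv"

# A sample folder listing, as an ADLS directory scan might return.
listing = [
    "/data/sales/transactions-2023-01-01.csv",
    "/data/sales/transactions-2023-01-02.csv",
    "/data/sales/readme.txt",
]

# Only paths matching the pattern are included in the imported dataset.
matched = [path for path in listing if fnmatch(path, pattern)]
print(matched)
```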
NOTE: Avoid including spaces in the paths to your ADLS sources. Spaces in the path value can cause errors during execution on Databricks.
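A simple pre-flight check for this rule can be sketched in Python (the function name is invented for illustration):

```python
def validate_adls_path(path: str) -> str:
    """Raise if an ADLS source path contains spaces, which can cause
    errors during job execution on Databricks."""
    if " " in path:
        raise ValueError(f"ADLS path contains spaces: {path!r}")
    return path

validate_adls_path("/data/sales/transactions.csv")  # passes

try:
    validate_adls_path("/data/sales/my report.csv")
except ValueError as err:
    print(err)
```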
When you select a folder in ADLS to create your dataset, you select all files in the folder to be included. Notes:
* _FAILED files, which may be present if the folder has been populated by HDI, are ignored.
* If a file name begins with an underscore (_), the file cannot be read during batch transformation and is ignored. Please rename these files through ADLS so that they do not begin with an underscore.
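The effect of these rules can be sketched as follows; the helper names are invented for illustration, and the actual rename must be performed through ADLS itself:

```python
import posixpath

def readable_files(paths):
    """Drop files whose names begin with an underscore (for example
    _FAILED markers); these are skipped during batch transformation."""
    return [p for p in paths if not posixpath.basename(p).startswith("_")]

def rename_without_underscore(path):
    """Suggest a new name with leading underscores stripped, so the
    file becomes readable once renamed through ADLS."""
    folder, name = posixpath.split(path)
    return posixpath.join(folder, name.lstrip("_"))

folder_listing = ["/out/part-00000.csv", "/out/_FAILED", "/out/_staging.csv"]
print(readable_files(folder_listing))             # only part-00000.csv survives
print(rename_without_underscore("/out/_staging.csv"))
```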
When creating a dataset, you can choose to read in data from a source stored in ADLS or from a local file. Local files are uploaded to /trifacta/uploads, where they remain and are not changed.
Data may be individual files or all of the files in a folder. For more information, see Reading from Sources in ADLS above.
In the Import Data page, click the ADLS tab. See Import Data Page.
When your job results are generated, they can be stored back in ADLS for you at the location defined for your user account.
If your deployment is using ADLS, do not use the /trifacta/uploads directory for your job results.
Users can specify a default output home directory and, during job execution, an output directory for the current job.
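That precedence can be sketched as a small helper (names invented for illustration):

```python
from typing import Optional

def resolve_output_dir(default_home: str, job_override: Optional[str] = None) -> str:
    """Results go to the output directory specified for the current job,
    if any; otherwise to the user's default output home directory."""
    return job_override if job_override else default_home

print(resolve_output_dir("/users/alice/output"))
print(resolve_output_dir("/users/alice/output", "/users/alice/jobs/run-42"))
```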
Access to results:
Depending on how the platform is integrated with ADLS, other users may or may not be able to access your job results.
If user mode is enabled, results are written to ADLS through the ADLS account configured for your use. Depending on the permissions of your ADLS account, you may be the only person who can access these results.
As part of writing job results, you can choose to create a new dataset, so that you can chain together data wrangling tasks.
NOTE: When you create a new dataset as part of your job results, the file or files are written to the designated output location for your user account. Depending on how your HDI permissions are configured, this location may not be accessible to other users.