Base Storage Layer
The base storage layer is the datastore where the platform uploads data and generates profiles, results, and samples. By default, job results are written to the base storage layer. You can configure the base storage layer and other required settings.
Tip: The base storage layer must be a file-based system.
Uses of base storage layer
In general, all base storage layers provide similar capabilities for storing, creating, reading, and writing datasets.
The base storage layer enables you to perform the following functions:
- Storing datasets: You can upload or store datasets in directories on the base storage layer. See below.
- Creating datasets: You can read in from datasources stored in the storage layer. A source may be a single file or a folder of files.
- Storage of samples: Any samples that you generate are stored in the base storage layer.
- Ingested data: Some data, such as Excel and PDF files, is stored as binary (non-text) files. These files must be read and converted to CSVs, which are stored on the base storage layer. A conversion of this kind is sketched after this list.
- Cached data: You can enable a cache on the base storage layer, which allows data that has been ingested to remain on the base storage layer for a period of time. This cache allows for faster performance if you need to use the data at a later time.
- Writing results: After you run a job, you can write the results to the base storage layer.
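Binary formats cannot be profiled or sampled directly, which is why ingestion converts them to CSV first. The following is a minimal sketch of that kind of conversion using pandas; the file name is hypothetical, and this illustrates the concept rather than the product's actual ingestion code.

```python
# Illustrative sketch (not the product's ingestion code): converting an
# Excel workbook to one CSV per sheet using pandas (requires openpyxl).
# The file name is hypothetical.
import pandas as pd

source = "sales_report.xlsx"
sheets = pd.read_excel(source, sheet_name=None)  # dict: sheet name -> DataFrame

base = source.rsplit(".", 1)[0]
for name, frame in sheets.items():
    # Each sheet becomes a text-based CSV that a file-based storage
    # layer can store and sample directly.
    frame.to_csv(f"{base}_{name}.csv", index=False)
```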
Base storage layer directories
The following directories and their sub-directories are created and maintained on the base storage layer:
Directory | Description
---|---
/trifacta/uploads | Storage of datasets uploaded through the application.
/trifacta/queryResults | Default storage of results generated by job executions. Directories beneath this one are listed by the internal identifier for each user of the product who has run at least one job. For each user, these sub-directories are the default storage location for job results. These locations can be modified. See Preferences Page.
/trifacta/dictionaries | Storage of custom dictionary files uploaded by users.
/trifacta/tempfiles | Temporary storage location for files required for use of the product.
User-specific directories
The following directories are created by default on the base storage layer for the platform.
By default, these directories are stored in the following location:

```
<bucket_name>/<userId>
```
where:
- <bucket_name> is the name of the bucket where user data is stored.
- <userId> is the username that is used to log in to the product.
Directory | Description |
---|---|
jobrun | Storage of generated samples. |
temp | Temporary storage. |
upload | Depending on your configuration, uploaded files may be stored in this per-user directory. |
These directories may be modified by individual users. For more information, see Storage Page.
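For illustration, the convention above can be expressed as a small helper. This is a hypothetical function, not part of the product; the bucket and user names are examples only.

```python
# Hypothetical helper: build the default per-user storage prefix
# following the <bucket_name>/<userId> convention described above.
def user_storage_prefix(bucket_name: str, user_id: str, subdir: str) -> str:
    """Return a path such as 'my-bucket/jdoe/upload'."""
    assert subdir in {"jobrun", "temp", "upload"}
    return f"{bucket_name}/{user_id}/{subdir}"

print(user_storage_prefix("my-bucket", "jdoe", "jobrun"))
# -> my-bucket/jdoe/jobrun  (default location for generated samples)
```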
Minimum Permissions
The product requires the following minimum permissions on the base storage layer directories:
Directory | Owner Min Permissions | Group Min Permissions | World Min Permissions |
---|---|---|---|
/trifacta/uploads | read+write+execute | none | none |
/trifacta/queryResults | read+write+execute | none | none |
/trifacta/dictionaries | read+write+execute | none | none |
/trifacta/tempfiles | read+write+execute | none | none |
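On a POSIX-style file system, the permissions in this table correspond to mode 700. The following is a sketch of creating the directories with that mode; it assumes local or POSIX-mounted storage, since HDFS and S3 enforce permissions through their own mechanisms.

```python
# Sketch: create the base storage directories with the minimum
# permissions from the table (owner rwx, group none, world none = 0o700).
import os

BASE_DIRS = [
    "/trifacta/uploads",
    "/trifacta/queryResults",
    "/trifacta/dictionaries",
    "/trifacta/tempfiles",
]

for path in BASE_DIRS:
    os.makedirs(path, mode=0o700, exist_ok=True)
    os.chmod(path, 0o700)  # the makedirs mode can be masked by the umask
```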
Available base storage layers
The platform supports the following base storage layers.
NOTE: In some deployments, the base storage layer is pre-configured for you and cannot be modified. After the base storage layer has been defined, you cannot change it.
NOTE: For all storage layers, the source data is untouched. Results are written to a separate location whenever a job is executed on a source dataset.
TFS
TFS is an S3-backed data storage service for importing, storing, sampling, and generating results.
For more information, see Using TFS.
S3
Simple Storage Service (S3) is an online data storage service from Amazon that provides low-latency access through web services. For more information, see https://aws.amazon.com/s3/.
For more information on creating these connections, see External S3 Connections.
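As a point of reference, the sketch below reads a source file from S3 with boto3. It assumes AWS credentials are already configured (for example, through environment variables or an IAM role); the bucket and key names are hypothetical.

```python
# Minimal boto3 sketch: read a CSV source file from S3.
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-data-bucket", Key="sources/orders.csv")
data = obj["Body"].read().decode("utf-8")
print(data.splitlines()[0])  # print the CSV header row
```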
Encryption on base storage layer
For data that is transferred to and from the base storage layer:
- Data in transit is encrypted using HTTPS.
- Data at rest is unencrypted by default.
NOTE: Server-side encryption can be applied when the product is writing results to an S3 bucket. For more information, see AWS Account Page.
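For context, this is what server-side encryption looks like at the S3 API level, independent of the product. A minimal boto3 sketch, with hypothetical bucket and key names:

```python
# General boto3 sketch: write an object with S3 server-side encryption.
# Objects written this way are encrypted at rest by AWS.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-results-bucket",
    Key="results/job-1234/output.csv",
    Body=b"id,total\n1,9.99\n",
    ServerSideEncryption="AES256",  # SSE-S3; use "aws:kms" for SSE-KMS
)
```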
Management of base storage layer
Maintenance of the base storage layer must be in accordance with your enterprise policies.
Warning: Unless the base storage layer is managed for you as part of the product, you are responsible for its maintenance.
NOTE: Except for temporary files, the product does not remove data from the base storage layer.
External Storage
You can create connections to external storage systems, integrating the platform with an external datastore. Depending on the type of connection and your permissions, the connection can be:
- read-only
- write-only
- read-write
You can create and edit connections between the platform and external data stores. You can create either file-based or table-based connections to individual storage units, such as databases or buckets.
NOTE: In your environment, creation of connections may be limited to administrators only. For more information, contact your workspace administrator.
Tip: Administrators can edit any public connection.
NOTE: After you create a connection, you cannot change its connection type. You must delete the connection and start again.
File-based systems
In addition to the base storage layer, you may be able to connect to other file-based systems. For example, if your base storage layer is HDFS, you can also connect to S3.
NOTE: If HDFS is specified as your base storage layer, you cannot publish to Redshift.
For more information, see Connection Types.
Cloud data warehouses
The application supports connections to the following cloud data warehouses.
Snowflake
Through AWS infrastructure, the application can connect to your Snowflake data warehouse.
Additional configuration may be required. For more information, see Snowflake Connections.
Redshift
When your base storage layer is S3, you can create connections to your Redshift data warehouse.
If you are using IAM roles, the IAM role for each user must include permissions to access Redshift. For more information, see Required AWS Account Permissions.
For more information, see Amazon Redshift Connections.
Relational systems
When you are working with relational data, you can configure database connections after you have completed the platform configuration and validated that it works for locally uploaded files.
NOTE: Database connections cannot be deleted if their databases host imported datasets that are in use by the product.
For more information, see Using Databases.
Management of external storage
To integrate with an external system, the application requires the following:
- Basic ability to connect to the hosting node of the external system through your network or cloud-based infrastructure
- Requisite permissions to support the browsing, reading, and/or writing of data to the storage system
- A defined connection between the application and the storage system
Except for cleanup of temporary files, the application does not maintain or remove data in external storage systems. Maintenance of external storage must be in accordance with your enterprise policies.