
D toc

Excerpt

D s product
supports multiple options for reading data from and writing data to your storage systems.

Base Storage Layer

The base storage layer is the datastore where

D s product
 uploads data and stores generated profiles, results, and samples. By default, job results are written to the base storage layer. You can configure the base storage layer and other required settings.

Tip

Tip: The base storage layer must be a file-based system.


Uses of base storage layer

In general, all base storage layers provide similar capabilities for storing, creating, reading, and writing datasets.

The base storage layer enables you to perform the following functions:

  1. Storing datasets: You can upload or store datasets in directories on the base storage layer. See below.
  2. Creating datasets: You can create datasets by reading from data sources stored in the base storage layer. A source may be a single file or a folder of files.
  3. Storing samples: Any samples that you generate are stored in the base storage layer.
  4. Ingesting data: Some data, such as Excel and PDF files, is stored in binary (non-text) formats. These files are read and converted to CSV files, which are stored on the base storage layer.
  5. Caching data: You can enable a cache on the base storage layer, which allows ingested data to remain there for a period of time. This cache improves performance if you need to use the data again later.
  6. Writing results: After you run a job, you can write the results to the base storage layer.

Base storage layer directories

D s product
 creates and maintains the following directories and their sub-directories on the base storage layer: 

/trifacta/uploads

Storage of datasets uploaded through the

D s webapp
. Sub-directories are named by the internal identifier of each user who has uploaded at least one file.

Warning

Avoid using /trifacta/uploads for reading and writing data. This directory is used by the

D s webapp
.

/trifacta/queryResults

Default storage for results generated by job executions. Sub-directories are named by the internal identifier of each user who has run at least one job.

For each user, these sub-directories are the default storage location for job results. These locations can be modified. See Preferences Page.

/trifacta/dictionaries

Storage of custom dictionary files uploaded by users.

Info

NOTE: This feature applies to

D s product
productee
only. It is not often used.

/trifacta/tempfiles

Temporary storage location for files required for use of the product.

Info

NOTE: The tempfiles directory is reserved for use by the platform. Of the directories listed above, it is the only one that the platform actively cleans.


Minimum Permissions

D s product
 requires the following operating-system-level permissions on the listed directories and their sub-directories:

Directory               Owner Min Permissions   Group Min Permissions   World Min Permissions
/trifacta/uploads       read+write+execute      none                    none
/trifacta/queryResults  read+write+execute      none                    none
/trifacta/dictionaries  read+write+execute      none                    none
/trifacta/tempfiles     read+write+execute      none                    none
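
These minimums correspond to POSIX mode 700 on each directory. As a minimal sketch, assuming you are provisioning the directories yourself on a POSIX file system and running as the service account that should own them (paths are taken from the table above):

```python
import os
import stat

# Directories the platform expects on the base storage layer (from the table above).
BASE_DIRS = [
    "/trifacta/uploads",
    "/trifacta/queryResults",
    "/trifacta/dictionaries",
    "/trifacta/tempfiles",
]

# read+write+execute for the owner, nothing for group or world (mode 700).
OWNER_ONLY = stat.S_IRWXU

for path in BASE_DIRS:
    os.makedirs(path, exist_ok=True)   # create the directory if missing
    os.chmod(path, OWNER_ONLY)         # enforce the minimum permissions
    print(f"{path}: {oct(os.stat(path).st_mode & 0o777)}")
```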

Available base storage layers

D s product
 supports the following base storage layers.

Info

NOTE: In some deployments, the base storage layer is pre-configured for you and cannot be modified. After the base storage layer has been defined, you cannot change it.


Info

NOTE: For all storage layers, the source data is untouched. Results are written to a separate output location whenever a job is executed on a source dataset.



TFS

TFS is an S3-backed data storage service provided by 

D s company
 for importing, storing, sampling, and generating results. 
D s tfs
 is enabled as part of setting up your product. 

For more information, see Using TFS.


S3

Simple Storage Service (S3) is an online data storage service from Amazon that provides low-latency access through web services. For more information, see https://aws.amazon.com/s3/.

For more information, see External S3 Connections.
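
The product manages its own S3 access, but as a rough standalone illustration of the kind of object reads S3 serves, here is a sketch using the boto3 library. The bucket name and object key are hypothetical placeholders, not values used by the product.

```python
import boto3

# Hypothetical bucket and object key, for illustration only.
BUCKET = "example-bucket"
KEY = "datasets/orders.csv"

s3 = boto3.client("s3")

# List a few objects under a prefix, then fetch one of them.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="datasets/", MaxKeys=5)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
print(f"Read {len(body)} bytes from s3://{BUCKET}/{KEY}")
```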




Management of base storage layer

Maintenance of the base storage layer must be in accordance with your enterprise policies. 

Warning

Unless the base storage layer is managed by

D s company
, it is the responsibility of the customer to maintain access and perform any required backups of data stored in the base storage layer.


Info

NOTE: Except for temporary files, the

D s platform
does not perform any cleanup of the base storage layer.

External Storage

You can create connections to integrate

D s product
 with external datastores. Depending on the type of connection and your permissions, a connection can be:

  • read-only
  • write-only
  • read-write

You can create and edit connections between 

D s product
rtrue
 and external data stores. You can create either file-based or table-based connections to individual storage units, such as databases or buckets.  

Info

NOTE: In your environment, creation of connections may be limited to administrators only. For more information, contact your workspace administrator.


Tip

Tip: Administrators can edit any public connection.


Info

NOTE: After you create a connection, you cannot change its connection type. You must delete the connection and start again.

For more information, see Connection Types.

File-based systems

In addition to the base storage layer, you may be able to connect to other file-based systems. For example, if your base storage layer is HDFS, you can also connect to S3.

Info

NOTE: If HDFS is specified as your base storage layer, you cannot publish to Redshift.

For more information, see Connection Types.

Cloud data warehouses

The

D s webapp
can be leveraged to load and transform data in cloud data warehouses. These integrations offer high-performance access for reading datasets from these and other sources, performing transformations, and writing results back to the data warehouse as needed.

Snowflake

Through AWS infrastructure, the

D s webapp
can integrate with your existing Snowflake data warehouse.

Additional configuration may be required.

For more information, see Snowflake Connections.
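
Connection specifics are covered in Snowflake Connections. As a hedged illustration of what a Snowflake integration exercises under the hood, the following sketch uses the snowflake-connector-python package directly; every account and credential value is a placeholder.

```python
import snowflake.connector

# Placeholder credentials; real values come from your Snowflake account.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="your_warehouse",
    database="your_database",
    schema="public",
)

try:
    cur = conn.cursor()
    # A trivial round trip: confirm the session is live.
    cur.execute("SELECT CURRENT_VERSION()")
    print("Snowflake version:", cur.fetchone()[0])
finally:
    conn.close()
```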

Redshift

When your base storage layer is S3, you can create connections to your Redshift data warehouse.

If you are using IAM roles, the IAM role for each user must include permissions to access Redshift. For more information, see Required AWS Account Permissions.

For more information, see Amazon Redshift Connections.
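
As an illustration of the IAM-based access pattern, the following sketch uses boto3 to request temporary Redshift database credentials. The cluster, user, and database names are placeholders, and the calling IAM role is assumed to allow redshift:GetClusterCredentials; this is not the product's own connection mechanism.

```python
import boto3

# Placeholder identifiers, for illustration only.
CLUSTER_ID = "example-cluster"
DB_USER = "analyst"
DB_NAME = "dev"

redshift = boto3.client("redshift")

# Under an IAM role permitted to call redshift:GetClusterCredentials,
# request short-lived database credentials for the cluster.
creds = redshift.get_cluster_credentials(
    DbUser=DB_USER,
    DbName=DB_NAME,
    ClusterIdentifier=CLUSTER_ID,
    DurationSeconds=900,
)
print("Temporary user:", creds["DbUser"])
print("Expires:", creds["Expiration"])
```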


Relational systems

When you are working with relational data, you can configure database connections after you have completed the platform configuration and validated that the platform works with locally uploaded files.

Info

NOTE: Database connections cannot be deleted if their databases host imported datasets that are in use by 

D s product
. Remove these imported datasets before deleting the connection.

For more information, see Using Databases.
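
The "configure, then validate" sequence described above can be illustrated with a generic SQLAlchemy sketch. The connection URL is a placeholder and does not reflect the product's own connection mechanism; substitute your driver, host, and credentials.

```python
from sqlalchemy import create_engine, text

# Placeholder URL; substitute your driver, host, and credentials.
engine = create_engine("postgresql://user:password@db-host:5432/sales")

# Validate the connection before registering it, mirroring the
# "configure, then validate" order described above.
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1"))
    print("Connection OK:", result.scalar() == 1)
```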

Management of external storage

To integrate with an external system, the 

D s webapp
 requires:

  • Basic ability to connect to the hosting node of the external system through your network or cloud-based infrastructure (a simple reachability check is sketched below)
  • Requisite permissions to support browsing, reading, and/or writing data on the storage system
  • A defined connection between the application and the storage system
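
The reachability requirement in the first item can be verified from the hosting environment before a connection is defined in the application. A minimal sketch, assuming a TCP-based datastore; the host and port are placeholders:

```python
import socket

# Placeholder host and port for the external datastore.
HOST = "db-host.example.com"
PORT = 5432
TIMEOUT_SECONDS = 5

try:
    # Attempt a plain TCP handshake with the hosting node.
    with socket.create_connection((HOST, PORT), timeout=TIMEOUT_SECONDS):
        print(f"{HOST}:{PORT} is reachable")
except OSError as err:
    print(f"Cannot reach {HOST}:{PORT}: {err}")
```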

Except for cleanup of temporary files, the 

D s webapp
 does not maintain external storage systems.