This documentation applies to installation from a supported Marketplace. Please use the installation instructions provided with your deployment.
If you are installing or upgrading a Marketplace deployment, use the install and configuration PDF available through the Marketplace listing.
The Trifacta® platform can be hosted within Amazon and supports integrations with multiple services from Amazon Web Services, including combinations of services for hybrid deployments. This section provides an overview of the integration options, as well as links to related configuration topics.
For an overview of AWS deployment scenarios, see Supported Deployment Scenarios for AWS.
The Trifacta platform requires Internet access to the following AWS services:
NOTE: Depending on your AWS deployment, some of these services may not be required.
- AWS S3
- AWS Key Management Service (KMS) (if SSE-KMS server-side encryption is enabled)
- AWS Security Token Service (STS) (if the temporary credential provider is used)
- Amazon EMR (if integration with an EMR cluster is enabled)
NOTE: If the Trifacta platform is hosted in a VPC where Internet access is restricted, access to S3, KMS and STS services must be provided by creating a VPC endpoint. If the platform is accessing an EMR cluster, a proxy server can be configured to provide access to the AWS ElasticMapReduce regional endpoint.
The following database scenarios are supported.
By default, the Trifacta databases are installed on PostgreSQL instances on the Trifacta node or on another accessible node in the enterprise environment. For more information, see Install Databases.
For Amazon-based installations, you can install the Trifacta databases on PostgreSQL instances on Amazon RDS. For more information, see Install Databases on Amazon RDS.
Base AWS Configuration
The following configuration topics apply to AWS in general.
Base Storage Layer
NOTE: The base storage layer must be set during initial configuration and cannot be modified after it is set.
S3: Most of these integrations require S3 as the base storage layer, which means that data uploads, the default location for writing results, and sample generation all occur on S3. When the base storage layer is set to S3, the Trifacta platform can:
- read and write to S3
- read and write to Redshift
- connect to an EMR cluster
HDFS: In on-premises installations, it is possible to use S3 as a read-only option for a Hadoop-based cluster when the base storage layer is HDFS. You can configure the platform to read from and write to S3 buckets during job execution and sampling. For more information, see Enable S3 Access.
For more information on setting the base storage layer, see Set Base Storage Layer.
For more information, see Storage Deployment Options.
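For reference, a minimal sketch of how the base storage layer might appear in trifacta-conf.json. The property name webapp.storageProtocol is an assumption here; confirm the exact property in Set Base Storage Layer.

```
// Illustrative excerpt only — verify the property name in Set Base Storage Layer.
"webapp.storageProtocol": "s3",   // use "hdfs" when the base storage layer is a Hadoop cluster
```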
Configure AWS Region
For Amazon integrations, you can configure the Trifacta node to connect to Amazon datastores located in different regions.
NOTE: This configuration is required under any of the following deployment conditions:
- The Trifacta node is installed on-premises, and you are integrating with Amazon resources.
- The EC2 instance hosting the Trifacta node is located in a different AWS region than your Amazon datastores.
- The Trifacta node or the EC2 instance does not have access to s3.amazonaws.com.
- In the AWS console, identify the regions in which your datastores are located. For more information, see the Amazon documentation.
- Log in to the Trifacta application.
- Apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
- Set the value of the following property to the region where your S3 datastores are located (see the sketch below). If this value is not set, the Trifacta platform attempts to infer the region from the default S3 bucket location.
- Save your changes.
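As an illustration, the region setting in trifacta-conf.json might look like the following. The property name aws.s3.region is an assumption; confirm the exact name in Platform Configuration Methods.

```
// Illustrative excerpt only — confirm the property name in Platform Configuration Methods.
"aws.s3.region": "us-west-2",
```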
The following table illustrates the methods of managing authentication between the platform and AWS. The matrix of options is determined by the settings for two key parameters:
- credential provider - source of credentials: platform (default), instance (EC2 instance only), or temporary
- AWS mode - the method of authentication from platform to AWS: system-wide or by-user
| Credential provider | System mode | User mode |
| --- | --- | --- |
| Default | One system-wide key/secret combo is inserted in the platform for use. | Each user provides a key/secret combo. |
| Instance | Platform uses EC2 instance roles. | Users provide EC2 instance roles. |
| Temporary | Temporary credentials are issued based on per-user IAM roles. | Per-user authentication when using IAM role. |
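Both parameters are set in platform configuration. The following trifacta-conf.json excerpt is a sketch that assumes the properties are named aws.mode and aws.credentialProvider; confirm the exact names in Platform Configuration Methods.

```
// Illustrative excerpt only — property names are assumptions.
"aws.mode": "system",                // or "user"
"aws.credentialProvider": "default", // or "instance" or "temporary"
```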
AWS Auth Mode
When connecting to AWS, the platform supports the following basic authentication modes:
| Mode | Description |
| --- | --- |
| System | Access to AWS resources is managed through a single, system account. The account that you specify is based on the credential provider selected below. |
| User | Authentication must be specified for individual users. |
Tip: In AWS user mode, Trifacta administrators can manage S3 access for users through the Admin Settings page. See Manage Users.
AWS Credential Provider
The Trifacta platform supports the following methods of providing credentialed access to AWS and S3 resources.
| Credential provider | Description |
| --- | --- |
| default | This method uses the provided AWS Key and Secret values to access resources. See below. |
| instance | When you are running the Trifacta platform on an EC2 instance, you can leverage your enterprise IAM roles to manage permissions on the instance for the Trifacta platform. See below. |
| temporary | Details are below. |
Default credential provider
Regardless of whether the AWS access mode is set to system or user, the default credential provider for AWS and S3 resources is the Trifacta platform.
| Mode | Description |
| --- | --- |
| System | A single AWS Key and Secret is inserted into platform configuration. This account is used to access all resources and must have the appropriate permissions to do so. |
| User | Each user must specify an AWS Key and Secret in their account to access resources. For more information on configuring individual user accounts, see Configure Your Access to S3. |
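In system mode, the key/secret pair is stored in platform configuration. The following trifacta-conf.json excerpt is illustrative only; the property names aws.s3.key and aws.s3.secret are assumptions, so verify them in Platform Configuration Methods.

```
// Illustrative excerpt only — property names are assumptions; never commit real credentials.
"aws.credentialProvider": "default",
"aws.s3.key": "<your-aws-access-key-id>",
"aws.s3.secret": "<your-aws-secret-access-key>",
```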
Default credential provider with EMR:
If you are using this method and integrating with an EMR cluster:
- A bootstrap action that copies the custom credential JAR file must be added to the EMR cluster definition. See Configure for EMR.
As an alternative to copying the JAR file, you can use the EMR EC2 instance-based roles to govern access. In this case, you must set the following parameter:
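A sketch of that setting in trifacta-conf.json. The parameter name aws.emr.forceInstanceRole is hypothetical and used here for illustration only; use the parameter documented in Configure for EC2 Role-Based Authentication.

```
// Hypothetical parameter name for illustration — see Configure for EC2 Role-Based Authentication.
"aws.emr.forceInstanceRole": true,
```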
For more information, see Configure for EC2 Role-Based Authentication.
Instance credential provider
When the platform is running on an EC2 instance, you can manage permissions through pre-defined IAM roles.
NOTE: If the Trifacta platform is connected to an EMR cluster, you can force authentication to the EMR cluster to use the specified IAM instance role. See Configure for EMR.
For more information, see Configure for EC2 Role-Based Authentication.
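An illustrative trifacta-conf.json excerpt for this provider, assuming the same aws.credentialProvider property described above:

```
// Illustrative excerpt only — see Configure for EC2 Role-Based Authentication for the full set of properties.
"aws.credentialProvider": "instance",
```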
Temporary credential provider
For even better security, you can enable the use of temporary credentials, provided by AWS based on an IAM role specified per user.
Tip: This method is recommended by AWS.
Set the following properties.
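A sketch of the core property change in trifacta-conf.json, assuming the same aws.credentialProvider property described above; the remaining required properties depend on your per-user IAM role setup.

```
// Illustrative excerpt only — additional per-user IAM role settings are required.
"aws.credentialProvider": "temporary",
```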
Individual users can be configured to provide temporary credentials for access to AWS resources, which is a more secure authentication solution. For more information, see Configure AWS Per-User Authentication.
To integrate with S3, additional configuration is required. See Enable S3 Access.
You can create connections to one or more Redshift databases, from which you can read database sources and to which you can write job results. Samples are still generated on S3.
NOTE: Relational connections require installation of an encryption key file on the Trifacta node. For more information, see Create Encryption Key File.
For more information, see Create Redshift Connections.
Trifacta Wrangler Enterprise can integrate with one instance of either of the following.
NOTE: If Trifacta Wrangler Enterprise is installed through the Amazon Marketplace, only the EMR integration is supported.
When Trifacta Wrangler Enterprise is installed through AWS, you can integrate with an EMR cluster for Spark-based job execution. For more information, see Configure for EMR.
If you have installed Trifacta Wrangler Enterprise on-premises or directly into an EC2 instance, you can integrate with a Hadoop cluster for Spark-based job execution. See Configure for Hadoop.