The platform can be hosted within Amazon and supports integrations with multiple services from Amazon Web Services (AWS), including combinations of services for hybrid deployments. This section provides an overview of the integration options, as well as links to related configuration topics.
For an overview of AWS deployment scenarios, see Supported Deployment Scenarios for AWS.
From AWS, the platform requires Internet access for the following services:

NOTE: Depending on your AWS deployment, some of these services may not be required.
- AWS S3
- AWS Key Management Service (KMS) (if SSE-KMS server-side encryption is enabled)
- AWS Security Token Service (STS) (if the temporary credential provider is used)
- AWS EMR (if integration with an EMR cluster is enabled)
NOTE: If the platform is hosted in a VPC where Internet access is restricted, access to the S3, KMS, and STS services must be provided by creating VPC endpoints. If the platform is accessing an EMR cluster, a proxy server can be configured to provide access to the AWS ElasticMapReduce regional endpoint.
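As a rough illustration (not part of platform configuration), VPC endpoints can be created from the AWS CLI. The region, VPC, route table, subnet, and security group identifiers below are placeholders; S3 uses a gateway endpoint, while KMS and STS use interface endpoints.

```
# Gateway endpoint for S3 (placeholder IDs; adjust the region to match your deployment)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234def567890 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0abc1234def567890

# Interface endpoint for STS (repeat with com.amazonaws.us-east-1.kms for KMS)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234def567890 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.sts \
  --subnet-ids subnet-0abc1234def567890 \
  --security-group-ids sg-0abc1234def567890
```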
The following database scenarios are supported.
| Database Host | Description |
|---|---|
| Cluster node | By default, the platform databases are installed on a PostgreSQL instance on the platform node or another accessible node in the enterprise environment. For more information, see Set up the Databases. |
| Amazon RDS | For Amazon-based installations, you can install the platform databases on PostgreSQL instances on Amazon RDS. For more information, see Install Databases on Amazon RDS. |
The following configuration topics apply to AWS in general.
NOTE: The base storage layer must be set during initial configuration and cannot be modified after it is set.
S3: Most of these integrations require the use of S3 as the base storage layer, which means that data uploads, the default location for writing results, and sample generation all occur on S3. When the base storage layer is set to S3, the platform can:
- read from and write to S3
- read from and write to Redshift
- connect to an EMR cluster
HDFS: In on-premises installations, it is possible to use S3 as a read-only option for a Hadoop-based cluster when the base storage layer is HDFS. You can configure the platform to read from and write to S3 buckets during job execution and sampling. For more information, see Enable S3 Access.
For more information on setting the base storage layer, see Set Base Storage Layer.
For more information, see Storage Deployment Options.
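For orientation only: the base storage layer is expressed as a single setting in platform configuration. The parameter name and values below are assumptions for illustration and may not match your release; see Set Base Storage Layer for the authoritative setting.

```
"webapp.storageProtocol": "s3",
```

Under the same assumption, the value would be `hdfs` for Hadoop-based deployments.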
For Amazon integrations, you can configure the platform to connect to Amazon datastores located in different regions.
NOTE: This configuration is required under any of the following deployment conditions:
- The platform is installed on-premises, and you are integrating with Amazon resources.
- The EC2 instance hosting the platform is located in a different AWS region than your Amazon datastores.
- The platform or the EC2 instance does not have access to s3.amazonaws.com.
- In the AWS console, identify the location of your datastores in other regions. For more information, see the Amazon documentation.
- On the platform node, edit the following file:

```
/opt/trifacta/conf/env.sh
```
- Insert the following environment variables:

```
export AWS_DEFAULT_REGION="<regionValue>"
export AWS_REGION="<regionValue>"
```

where `<regionValue>` corresponds to the AWS region identifier (e.g. `us-east-1`).
- Save the file.
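To confirm that the variables are visible in the platform environment, a quick shell check can be run on the platform node (a sketch only; adjust the path if your installation differs):

```
# Load the environment file and echo the region settings
source /opt/trifacta/conf/env.sh
echo "$AWS_DEFAULT_REGION"
echo "$AWS_REGION"
```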
When connecting to AWS, the platform supports the following authentication methods:
| Mode | Configuration | Description |
|---|---|---|
| system | `"aws.mode": "system",` | Access to AWS resources is managed through a single account. This account is specified based on the credential provider method. (The instance credential provider method ignores this setting.) See below. |
| user | `"aws.mode": "user",` | An AWS key and secret must be specified for individual users. NOTE: Creation and use of custom dictionaries is not supported in user mode. NOTE: The credential provider must be set to `default`. See below. |
The platform supports the following methods of providing credentialed access to AWS and S3 resources.
| Type | Configuration | Description |
|---|---|---|
| default | `"aws.credentialProvider": "default",` | This method uses the provided AWS key and secret values to access resources. |
| instance | `"aws.credentialProvider": "instance",` | When you are running the platform on an EC2 instance, you can leverage your enterprise IAM roles to manage permissions on the instance for the platform. |
Whether the AWS access mode is set to system or user, the default credential provider for AWS and S3 resources is the platform itself.
| Mode | Description | Configuration |
|---|---|---|
| `"aws.mode": "system",` | A single AWS key and secret is inserted into platform configuration. This account is used to access all resources and must have the appropriate permissions to do so. | `"aws.s3.key": "<your_key_value>",` `"aws.s3.secret": "<your_secret_value>",` |
| `"aws.mode": "user",` | Each user must specify an AWS key and secret in the account user profile to access resources. | For more information on configuring individual user accounts, see Configure Your Access to S3. |
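Putting the settings above together, a system-mode configuration that uses the default credential provider amounts to the following fragment (a sketch only; the key and secret values are placeholders):

```
"aws.mode": "system",
"aws.credentialProvider": "default",
"aws.s3.key": "<your_key_value>",
"aws.s3.secret": "<your_secret_value>",
```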
If you are using the key and secret method and integrating with an EMR cluster:
- Copying the custom credential JAR file must be added as a bootstrap action to the EMR cluster definition. See Configure for EMR.
- As an alternative to copying the JAR file, you can use EMR EC2 instance-based roles to govern access. In this case, you must set the following parameter:
```
"aws.emr.forceInstanceRole": true,
```
For more information, see Configure for EC2 Role-Based Authentication.
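For reference, the instance-role alternative described above combines the two settings already shown in this section (a sketch only):

```
"aws.credentialProvider": "instance",
"aws.emr.forceInstanceRole": true,
```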
When the platform is running on an EC2 instance, you can manage permissions through pre-defined IAM roles.
NOTE: If the platform is connected to an EMR cluster, you can force authentication to the EMR cluster to use the specified IAM instance role. See Configure for EMR.
For more information, see Configure for EC2 Role-Based Authentication.
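The permissions attached to the IAM instance role depend on your buckets and usage. As a generic AWS illustration only (not a platform-mandated policy; the bucket name is a placeholder), a minimal policy granting S3 read/write on a single bucket might look like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```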
To integrate with S3, additional configuration is required. See Enable S3 Sources.
You can create connections to one or more Redshift databases, from which you can read database sources and to which you can write job results. Samples are still generated on S3.
NOTE: Relational connections require installation of an encryption key file on the platform node. For more information, see Create Encryption Key File.
For more information, see Create Redshift Connections.
The platform can integrate with one instance of either of the following.
NOTE: If the platform is installed through the Amazon Marketplace, only the EMR integration is supported.
When the platform is installed through AWS, you can integrate with an EMR cluster for Spark-based job execution. For more information, see Configure for EMR.
If you have installed the platform on-premises or directly on an EC2 instance, you can integrate with a Hadoop cluster for Spark-based job execution. See Configure for Hadoop.