
D toc

D s install marketplace

The 

D s platform
rtrue
 can be hosted within Amazon and supports integrations with multiple services from Amazon Web Services, including combinations of services for hybrid deployments. This section provides an overview of the integration options, as well as links to related configuration topics.

For an overview of AWS deployment scenarios, see Supported Deployment Scenarios for AWS.

Internet Access


Excerpt

From AWS, the 

D s platform
 requires Internet access for the following services:

Info

NOTE: Depending on your AWS deployment, some of these services may not be required.

 

  • AWS S3
  • Key Management Service [KMS] (if SSE-KMS server-side encryption is enabled)
  • Security Token Service [STS] (if the temporary credential provider is used)
  • EMR (if integration with EMR cluster is enabled)
Info

NOTE: If the

D s platform
is hosted in a VPC where Internet access is restricted, access to the S3, KMS, and STS services must be provided by creating VPC endpoints. If the platform is accessing an EMR cluster, a proxy server can be configured to provide access to the AWS ElasticMapReduce regional endpoint.
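As an illustration only, VPC endpoints for these services can be created with the AWS CLI; the VPC, route table, subnet, and security group IDs below are placeholders, and the region in the service names must match your own:

```shell
# Hypothetical IDs -- replace with values from your own VPC.
# Gateway endpoint for S3 (routed via the route table):
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0abc1234

# Interface endpoints for KMS and STS:
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.kms \
  --subnet-ids subnet-0abc1234 \
  --security-group-ids sg-0abc1234

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.sts \
  --subnet-ids subnet-0abc1234 \
  --security-group-ids sg-0abc1234
```

These commands require AWS credentials with EC2 permissions; consult the Amazon VPC endpoint documentation for the options appropriate to your environment.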



Database Installation

The following database scenarios are supported.

Database Host | Description
Cluster node

By default, the

D s item
itemdatabases
are installed on PostgreSQL instances in the
D s item
itemnode
or another accessible node in the enterprise environment. For more information, see Set up the Databases.

Amazon RDS

For Amazon-based installations, you can install the

D s item
itemdatabases
on PostgreSQL instances on Amazon RDS. For more information, see Install Databases on Amazon RDS.

Base AWS Configuration

The following configuration topics apply to AWS in general.

D s config

Base Storage Layer

Info

NOTE: The base storage layer must be set during initial configuration and cannot be modified after it is set.

S3: Most of these integrations require the use of S3 as the base storage layer, which means that data uploads, the default location for writing results, and sample generation all occur on S3. When the base storage layer is set to S3, the

D s platform
 can:

  • read and write to S3
  • read and write to Redshift
  • connect to an EMR cluster

HDFS: In on-premises installations, it is possible to use S3 as a read-only option for a Hadoop-based cluster when the base storage layer is HDFS. You can configure the platform to read from and write to S3 buckets during job execution and sampling. For more information, see Enable S3 Access.

For more information on setting the base storage layer, see Set Base Storage Layer.

For more information, see Storage Deployment Options.
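For reference, the base storage layer is selected through platform configuration. As an illustration only (the parameter name below reflects commonly documented platform configuration and should be verified against Set Base Storage Layer for your release):

```json
"webapp.storageProtocol": "s3",
```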

Configure AWS Region

For Amazon integrations, you can configure the 

D s item
itemnode
 to connect to Amazon datastores located in different regions. 

Info

NOTE: This configuration is required under any of the following deployment conditions:

  1. The
    D s item
    itemnode
    is installed on-premises, and you are integrating with Amazon resources.
  2. The EC2 instance hosting the
    D s item
    itemnode
    is located in a different AWS region than your Amazon datastores.
  3. The
    D s item
    itemnode
    or the EC2 instance does not have access to s3.amazonaws.com.

 

  1. In the AWS console, please identify the location of your datastores in other regions. For more information, see the Amazon documentation.
  2. In the 

    D s item
    itemnode
    , please edit the following file:

    Code Block
    /opt/trifacta/conf/env.sh


  3. Insert the following environment variables:

    Code Block
    export AWS_DEFAULT_REGION="<regionValue>"
    export AWS_REGION="<regionValue>"

    where:
    <regionValue> corresponds to the AWS region identifier (e.g. us-east-1).

  4. Save the file.
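The steps above can be sketched end-to-end; a temporary file stands in for /opt/trifacta/conf/env.sh here, and us-east-1 is an example region value:

```shell
# Sketch: append the region settings, then source the file and verify.
# A temporary file stands in for /opt/trifacta/conf/env.sh in this example.
ENV_FILE="$(mktemp)"
cat >> "$ENV_FILE" <<'EOF'
export AWS_DEFAULT_REGION="us-east-1"
export AWS_REGION="us-east-1"
EOF
. "$ENV_FILE"
echo "AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION"
```

On the actual platform node, edit env.sh in place and restart the platform services for the change to take effect.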

AWS Mode

When connecting to AWS, the platform supports the following authentication methods:

Mode | Configuration | Description
system


Code Block
"aws.mode": "system",


Access to AWS resources is managed through a single account. This account is specified based on the credential provider method.

  • The instance credential provider method ignores this setting.

See below.

user


Code Block
"aws.mode": "user",


AWS key and secret must be specified for individual users.

Info

NOTE: Creation and use of custom dictionaries is not supported in user mode.


Info

NOTE: The credential provider must be set to default. See below.
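The constraint in the note above (user mode requires the default credential provider) can be expressed as a simple validation. The variable names below are illustrative, not platform settings:

```shell
# Illustrative check only: mirrors the rule that "aws.mode": "user"
# must be paired with "aws.credentialProvider": "default".
MODE="user"
PROVIDER="default"
if [ "$MODE" = "user" ] && [ "$PROVIDER" != "default" ]; then
  RESULT="invalid: user mode requires the default credential provider"
else
  RESULT="ok"
fi
echo "$RESULT"
```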


Credential Provider

The 

D s platform
 supports the following methods of providing credentialed access to AWS and S3 resources.

Type | Configuration | Description
default 


Code Block
"aws.credentialProvider":"default",


This method uses the provided AWS Key and Secret values to access resources.
instance 


Code Block
"aws.credentialProvider":"instance",


When you are running the 

D s platform
 on an EC2 instance, you can leverage your enterprise IAM roles to manage permissions on the instance for the 
D s platform
.
Default credential provider

Whether the AWS access mode is set to system or user, the default credential provider for AWS and S3 resources is the

D s platform
.

Mode | Description | Configuration


Code Block
"aws.mode": "system",


A single AWS Key and Secret is inserted into platform configuration. This account is used to access all resources and must have the appropriate permissions to do so.

 


Code Block
"aws.s3.key": "<your_key_value>",
"aws.s3.secret": "<your_key_value>",



Code Block
"aws.mode": "user",


Each user must specify an AWS Key and Secret in the account user profile to access resources. For more information on configuring individual user accounts, see Configure Your Access to S3.

If you are using this method and integrating with an EMR cluster: 

  • A bootstrap action that copies the custom credential JAR file must be added to the EMR cluster definition. See Configure for EMR.
  • As an alternative to copying the JAR file, you can use the EMR EC2 instance-based roles to govern access. In this case, you must set the following parameter:

    Code Block
    "aws.emr.forceInstanceRole": true,

     For more information, see Configure for EC2 Role-Based Authentication.
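Taken together, a system-mode setup with the default credential provider combines the parameters shown in this section into a single configuration fragment; the key and secret values are placeholders:

```json
"aws.mode": "system",
"aws.credentialProvider": "default",
"aws.s3.key": "<your_key_value>",
"aws.s3.secret": "<your_key_value>",
```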

Instance credential provider

When the platform is running on an EC2 instance, you can manage permissions through pre-defined IAM roles. 

Info

NOTE: If the

D s platform
is connected to an EMR cluster, you can force authentication to the EMR cluster to use the specified IAM instance role. See Configure for EMR.

For more information, see Configure for EC2 Role-Based Authentication.

AWS Storage

S3 Sources

To integrate with S3, additional configuration is required. See Enable S3 Sources.

Redshift Connections

You can create connections to one or more Redshift databases, from which you can read database sources and to which you can write job results. Samples are still generated on S3.

Info

NOTE: Relational connections require installation of an encryption key file on the

D s item
itemnode
. For more information, see Create Encryption Key File.

For more information, see Create Redshift Connections.

AWS Clusters

D s product
productee
 can integrate with one instance of either of the following. 

Info

NOTE: If

D s product
productee
is installed through the Amazon Marketplace, only the EMR integration is supported.

EMR

When 

D s product
productee
 is installed through AWS, you can integrate with an EMR cluster for Spark-based job execution. For more information, see Configure for EMR.

Hadoop

If you have installed 

D s product
productee
 on-premises or directly into an EC2 instance, you can integrate with a Hadoop cluster for Spark-based job execution. See Configure for Hadoop.