
Enable Access to S3 and AWS Resources

If you plan to use S3 as the default storage environment, the following sections outline the AWS configuration prerequisites and requirements.

Tip

This section should be shared with your S3 administrator, who can provide the required information.

AWS Overview

Below are the AWS objects involved in S3 setup. Each entry notes whether the object is required.

AWS account (required)

To create these objects as part of the setup process, you must have an AWS account. For more information, see https://aws.amazon.com/.

Valid email address (required)

To validate your registration for a new workspace, you must have a valid email address to which the product can deliver the registration email.

Choice: cross-account role access or key-secret access (required)

To integrate with your existing S3 resources, you must choose a method of authentication. Choices:

  • cross-account role: This method uses IAM roles to define the permissions used by the product for S3 access.

    Tip

    This method is recommended.

  • key-secret access: This method uses IAM access keys to provide S3 access.

IAM policy (required)

An IAM (Identity and Access Management) policy is an AWS resource used to define the low-level permissions for access to a specific resource. The product requires an IAM policy for either access method.

For more information, see "Create policy to grant access to S3 bucket" below.

cross-account role access: IAM role (required)

An IAM role contains one or more IAM policies that define the set of available AWS services and the level of access to them for a user. In this case, the user is the Trifacta Application.

key-secret access: AWS key-secret (required)

An older AWS access method, the key-secret combination is essentially a username and password for one or more S3 buckets.

S3 bucket (required)

S3 (Simple Storage Service) is a cloud-based file storage system hosted in AWS. An S3 bucket contains your data files and their organizing folders.

S3 bucket: encryption (optional)

For better security, your S3 bucket may be encrypted, which means that the data is stored in S3 in a form that is not human-readable.

Note

The product can optionally integrate with encrypted S3 buckets. The following S3 encryption methods are supported: SSE-S3 and SSE-KMS.

Note

If None is selected here, AWS S3 still applies server-side encryption to the bucket without impact to cost or performance. For more information, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html.

Note

If your bucket is encrypted with SSE-KMS, additional configuration is required. See "Update policy to accommodate SSE-KMS if necessary" below.

For more information on your bucket's encryption, please contact your S3 administrator.

S3 bucket: storage location (optional)

If needed, you can change the location where results are stored in S3.

Note

The product must have write permission to this location. If you are changing the location from the default, verify with your S3 administrator that the preferred location is enabled for writing through your access method.

IAM role: Account ID (optional)

In the trust policy, the account ID identifies the Alteryx AWS account that is allowed to use your IAM role.

Tip

This identifier is provided to you during registration and setup.

IAM role: External ID (optional)

In the trust policy, the external ID ensures that the Trifacta Application can use your IAM role only on your behalf. An example of how these two identifiers appear in a trust policy is shown after this overview.

Tip

This identifier is provided to you during registration and setup.
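
If you choose cross-account role access, the account ID and external ID are used in the trust policy attached to your IAM role. The following is a minimal sketch of such a trust policy, using placeholder values (123456789012 for the Alteryx account ID and my-external-id for the external ID); substitute the actual values provided to you during registration and setup.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "my-external-id"
                }
            }
        }
    ]
}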

Technical Setup

The following sections should be provided to your AWS administrator for setting up access to these resources, if required.

Create policy to grant access to S3 bucket

To use your own S3 bucket(s) with the Trifacta Application, create a policy and assign it to either the user or the IAM role selected to grant access to AWS resources. In this section, you create the policy; it is applied later.

Below is an example policy template. Use this template to create the policy.

Note

Do not simply use one of the predefined AWS policies or an existing policy, as it will likely grant access to more resources than required.

Template Notes:

  1. One of the statements grants access to the public demo asset buckets.

  2. Replace <my_default_S3_bucket> with the name of your default S3 bucket.

  3. To grant access to multiple buckets within your account, you can extend the resources list to accommodate the additional buckets, as shown in the example after the template.

Policy Template
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::<my_default_S3_bucket>",
                "arn:aws:s3:::<my_default_S3_bucket>/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::aws-saas-samples-prod",
                "arn:aws:s3:::aws-saas-samples-prod/*",
                "arn:aws:s3:::aws-saas-datasets",
                "arn:aws:s3:::aws-saas-datasets/*",
                "arn:aws:s3:::3fac-data-public",
                "arn:aws:s3:::3fac-data-public/*"
                "arn:aws:s3:::trifacta-public-datasets",
                "arn:aws:s3:::trifacta-public-datasets/*"
            ]
        }
    ]
}
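
For example, to grant access to a second bucket in your account, the first statement of the template can be extended as follows. The bucket name <my_second_S3_bucket> is a placeholder; substitute the name of your additional bucket.

        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::<my_default_S3_bucket>",
                "arn:aws:s3:::<my_default_S3_bucket>/*",
                "arn:aws:s3:::<my_second_S3_bucket>",
                "arn:aws:s3:::<my_second_S3_bucket>/*"
            ]
        }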
Update policy to accommodate SSE-KMS if necessary

If any accessible bucket is encrypted with SSE-KMS, another policy must be deployed. See https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html.
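
As a rough sketch only, the additional permissions typically take the form of an extra IAM statement against the KMS key that encrypts the bucket. The key ARN below is a placeholder, and the action list may need adjustment; consult the AWS documentation linked above for the authoritative requirements.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUseOfBucketKey",
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "arn:aws:kms:us-west-2:123456789012:key/<my_kms_key_id>"
        }
    ]
}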

Add policy for Redshift access

If you are connecting to Redshift databases through your workspace, you can enable access by creating a GetClusterCredentials policy. This policy is in addition to the S3 access policies. All of these policies can be captured in a single IAM role.

Example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetClusterCredsStatement",
      "Effect": "Allow",
      "Action": [
        "redshift:GetClusterCredentials"
      ],
      "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecluster/${redshift:DbUser}",
        "arn:aws:redshift:us-west-2:123456789012:dbname:examplecluster/testdb",
        "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecluster/common_group"
      ],
      "Condition": {
        "StringEquals": {
          "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@yourdomain.com"
        }
      }
    }
  ]
}

For more information on these permissions, see Required AWS Account Permissions.

Whitelist the IP address range of the Alteryx Service, if necessary

If you are enabling any relational source, including Redshift, you must whitelist the IP address range of the Alteryx service in the relevant security groups.

Note

The database to which you are connecting must be available from the Alteryx service over the public Internet.

The IP address range of the Alteryx service is:

35.245.35.240/28

For Redshift, there are two ways to whitelist the IP range, depending on whether you are using EC2-VPC or EC2-Classic (not common).

For details on this process with RDS in general, see https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
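
For example, if your Redshift cluster runs in EC2-VPC and you manage its security group with CloudFormation, the ingress rule might look like the following sketch. The security group ID is a placeholder, and 5439 is the default Redshift port; adjust both to match your cluster. The same rule can be added through the AWS console instead.

{
    "Type": "AWS::EC2::SecurityGroupIngress",
    "Properties": {
        "GroupId": "sg-0123456789abcdef0",
        "IpProtocol": "tcp",
        "FromPort": 5439,
        "ToPort": 5439,
        "CidrIp": "35.245.35.240/28"
    }
}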

For more information, please contact Alteryx Support.