
In the Amazon AWS infrastructure, the Trifacta platform can be deployed in a high availability failover mode across multiple nodes. This section describes the process for installing the platform across multiple, highly available nodes.


NOTE: This section applies to customer-managed deployments of the Trifacta platform on AWS.


The following limitations apply to this feature:

  • This form of high availability is not supported for Marketplace installations.

  • During installation, the platform is configured to use the same account to access AWS resources. Per-user authentication must be set up afterward.


Before you begin, please verify that you have met the following requirements.

AWS infrastructure

  • AWS account
  • EKS cluster (see below)
  • S3 bucket:
    • S3 is required for the base storage layer.
    • A set of permissions must be enabled for the accounts or IAM roles used to access the bucket. For more information, see Enable S3 Access in the Configuration Guide.

  • EMR cluster. For more information, see Configure for EMR in the Configuration Guide.
  • Amazon RDS database:
    • The Trifacta databases must be hosted on the same instance and port in Amazon RDS.
    • PostgreSQL 9.6 or 12.3
    • To ensure sufficient database connections, the instance size must be larger than m4.large.

    • The actual databases are installed as part of the installation process.
  • EFS mounts (see EFS mount points below)

EKS cluster

  • Kubernetes version 1.15+
  • Subnets are available across multiple zones

NOTE: You should avoid using the default namespace, which may be shared by other apps using your cluster.

Instance types:

Tip: Instance sizes should be larger than m4.2xlarge.

Disk space: 10 GB minimum.

NOTE: If you are publishing to S3, additional disk space should be reserved for a higher number of concurrent users or larger data volumes. For more information on fast upload to decrease disk requirements, see Enable S3 Access in the Configuration Guide.

For more information on installing and managing an EKS cluster, see


The following command line interfaces are referenced as part of this install process:

  • awscli
  • aws-iam-authenticator
  • kubectl
  • helm (version 3)
  • docker
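As a quick preflight check, you can confirm that each of these tools is available on your PATH. The sketch below checks the installed binaries (note that the awscli package installs the `aws` binary):

```shell
# Report which of the required CLI tools are installed.
required_tools="aws aws-iam-authenticator kubectl helm docker"
missing=""
for tool in $required_tools; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
    missing="$missing $tool"
  fi
done
checked=$(echo "$required_tools" | wc -w | tr -d ' ')
echo "checked $checked tools"
```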

Trifacta assets

The following assets are available from the Trifacta FTP site:

  1. Trifacta Docker image file
     This tar file contains the platform software to install.
  2. Trifacta helm package
     This setup bundle tar includes:
     1. TGZ file
     2. Override template file for configuring initial values
  3. Trifacta license key file
     After you install the software, you must upload the license key file through the application. For more information, see License Key.

Install Steps

Configure Docker image

Please complete the following steps to download and configure the Docker image for use.


  1. Create an AWS Elastic Container Registry (ECR) repository to store the Trifacta Docker image. For more information, see
  2. Download the image file from the Trifacta FTP site. The image filename should be in the following format:

    Code Block


    x.y.z maps to the Release number (Release 7.6.0).

  3. Load the image file into your ECR repository. For more information, see

  4. The image file has been loaded into the repository.

Configure AWS Kubernetes


  • AWS Kubernetes cluster is operational.
  • These steps use the AWS CLI and kubectl to configure your Kubernetes deployment on AWS.


  1. Configure the AWS CLI to use the eks-admin user for your Kubernetes cluster. For more information, see
  2. Update the Kubernetes configuration (update-kubeconfig):

    Code Block
    aws eks update-kubeconfig --name <eks-cluster-name> --region <aws-region>

    <eks-cluster-name> is the name of the EKS cluster to use.
    <aws-region> is the region name where the cluster is located.


    Tip: Retain the EKS cluster name and region. These values may be used later during configuration.

  3. Switch to the namespace in the above cluster:


    NOTE: You should avoid using the default namespace, which may be shared by other apps using your cluster.

    Code Block
    kubectl config set-context --current --namespace=<namespace>
  4. Verify that you are ready to use the namespace in the cluster:

    Code Block
    kubectl get pods
  5. The cluster is ready for use.
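Collected together, the steps above look like the following. This is a dry-run sketch: each command is printed rather than executed, the cluster, region, and namespace names are hypothetical, and an explicit `kubectl create namespace` step is added for the non-default namespace:

```shell
EKS_CLUSTER_NAME="trifacta-eks"   # hypothetical cluster name
AWS_REGION="us-west-2"            # hypothetical region
NAMESPACE="trifacta"              # a non-default namespace

# Dry run: print each command instead of executing it.
# Redefine as run() { "$@"; } to actually run them.
run() { echo "+ $*"; }

run aws eks update-kubeconfig --name "$EKS_CLUSTER_NAME" --region "$AWS_REGION"
run kubectl create namespace "$NAMESPACE"
run kubectl config set-context --current --namespace="$NAMESPACE"
run kubectl get pods
```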

Configure DB credential secrets

For each of the Trifacta databases that you have installed, you must set up database credential secrets. Please use the following pattern for configuring your database secrets.


NOTE: Except for db-credentials-admin, each of these secrets maps to a specific Trifacta database. db-credentials-admin is the username/password of the admin user of the RDS instance. The admin credentials are used to create and initialize all Trifacta databases.

Code Block
kubectl create secret generic db-credentials-webapp --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-scheduling-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-time-based-trigger-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-artifact-storage-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-authorization-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-configuration-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-job-metadata-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-secure-token-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-job-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-contract-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-orchestration-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-optimizer-service --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-batch-job-runner --from-literal=username=<db_username> --from-literal=password=<db_password>
kubectl create secret generic db-credentials-admin --from-literal=username=<db_username> --from-literal=password=<db_password>


  • <db_username> = username to access the specified database.
  • <db_password> = password corresponding to the specified database.
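Since the fourteen commands above differ only in the service name, they can be generated with a short loop. The sketch below runs in dry-run form, printing each command so you can review the output before pasting the commands or piping them to sh:

```shell
DB_USERNAME="<db_username>"
DB_PASSWORD="<db_password>"

# One entry per database secret listed above (db-credentials-<name>).
services="webapp scheduling-service time-based-trigger-service
artifact-storage-service authorization-service configuration-service
job-metadata-service secure-token-service job-service contract-service
orchestration-service optimizer-service batch-job-runner admin"

count=0
for svc in $services; do
  echo "kubectl create secret generic db-credentials-$svc" \
       "--from-literal=username=$DB_USERNAME --from-literal=password=$DB_PASSWORD"
  count=$((count + 1))
done
echo "generated $count commands"
```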

Configure the deployment


  1. Unpack the tar file obtained from the FTP site:

    Code Block
    tar -xvf trifacta-ha-setup-bundle-x.y.z.tar


    x.y.z maps to the Release number (Release 7.6.0).

  2. The package contains:
    1. A values override template file: values.override.template.yaml
    2. Trifacta helm package tgz file
  3. Create a copy of the value overrides template file:

    Code Block
    cp values.override.template.yaml values.override.yaml
  4. Edit the values.override.yaml file. Instructions are below.

Edit values overrides

Example file:

Code Block
# Template for minimal configuration
# to get a High-availability deployment of Trifacta up and running
replicaCount: 2
image:
  repository: "<PATH TO IMAGE_REPO>"
loadBalancer:
  ssl:
    # ARN to certificate in ACM
    certificateARN: arn:aws:acm:XXXX:certificate/XXXXXXX
nfs:
  conf:
    server: "<NFS SERVER HOST>"
    path: "/"
  logs:
    server: "<NFS SERVER HOST>"
    path: "/"
database:
  host: "<DATABASE HOST>"
  port: "5432"
  type: postgresql
# Values applied to the platform configuration file
triconf:
  "aws.accountId": "<AWS ACCT_ID>"
  "aws.credentialProvider": "<AWS CRED PROVIDER>"
  "aws.systemIAMRole": "arn:aws:iam::XXXX:role/XXXXXX"
  "aws.s3.bucket.name": "<AWS S3 BUCKET NAME>"
  "aws.s3.key": "<AWS S3 KEY>"
  "aws.s3.secret": "<AWS S3 SECRET>"
fluentd:
  # Enable a fluentd Statefulset to collect application logs.
  enabled: true
  # Specify values overrides for fluentd chart here
fluentd-daemonset:
  # Enable a fluentd DaemonSet to collect node, K8s dataplane and cluster logs
  enabled: false
  # Specify values overrides for the fluentd-daemonset chart here
global:
  # Cluster details must be specified if fluentd logging is enabled
  cluster:
    name: "<CLUSTER NAME>" # EKS Cluster name
    region: "<CLUSTER REGION>" # EKS Cluster region

Tip: Paths to values are listed below in JSON notation (item.item.item).

replicaCount — Number of replica nodes of the Trifacta node to maintain as failovers.

image.repository — AWS path to the ECR image repository that you created.

Configure SSL

By default, SSL is enabled, and a certificate is required. 

SSL certificate requirements:

  • SSL security is served through the AWS LoadBalancer that serves the Trifacta platform.
    • For more information on the supported SSL configurations, see the values.yaml file provided in the Trifacta helm package.
  • The SSL certificate must be issued for the FQDN of the Trifacta platform.

loadBalancer.ssl.certificateARN — The ARN for the SSL certificate in the AWS Certificate Manager.

The certificate ARN value references the ARN stored in the AWS Certificate Manager, or you can import your own certificate into ACM. For more information, see

To disable SSL, please apply the following configuration changes:

Code Block
loadBalancer:
  ssl:
    enabled: false

EFS Mount points

The following values are used to define the locations of the mount points for storing configuration and log data. 


NOTE: You should have reserved at least 10 GB for each mount point.



nfs.conf.server — Host of the NFS server for the configuration mount point.

nfs.conf.path — On the conf server, the path to the storage area. Default is the root location.

nfs.logs.server — Host of the NFS server for the logging mount point.

nfs.logs.path — On the logs server, the path to the storage area. Default is the root location.



database.host — Host of the Amazon RDS databases.

NOTE: The Trifacta databases must be hosted on the same RDS instance and available through the same port.

database.port — Port number through which to access the RDS databases. The default value is 5432.

database.type — The type of database. Please leave this value as postgresql.
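Before running the install, it can be useful to confirm that the RDS instance is reachable on that host and port. The sketch below is a dry run using the standard psql client: the command is printed rather than executed, and the host and user values are placeholders:

```shell
DB_HOST="<DATABASE HOST>"   # placeholder: your RDS endpoint
DB_PORT="5432"

# Dry run: print the connectivity-check command instead of executing it.
run() { echo "+ $*"; }

run psql --host "$DB_HOST" --port "$DB_PORT" --username "<admin_user>" --list
```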

trifacta-conf.json

Below you can specify values that are applied to trifacta-conf.json, which is the platform configuration file. For more information on these settings, see Configure for AWS in the Configuration Guide.


aws.accountId — The AWS account identifier to use when connecting to AWS resources.

aws.credentialProvider — The type of credential provider to use for individuals authenticating to AWS resources.

NOTE: During installation, the platform is configured to use the same account to access AWS resources. Per-user authentication must be set up afterward.

Supported values:

  • default - credentials are submitted as an AWS key/secret combination.
  • temporary - credentials are submitted using the same IAM role for all users.

    Tip: Using a temporary credential provider is recommended.

aws.systemIAMRole — If the credential provider is set to temporary, this value defines the system-wide IAM role to use to access AWS.

aws.s3.key — If the credential provider is set to default, this value defines the AWS key to use for authentication.

aws.s3.secret — If the credential provider is set to default, this value defines the AWS secret for the AWS key.

aws.s3.bucket.name — The default S3 bucket to use.

NOTE: The AWS account must have read/write access to this bucket.

After the platform is operational, you can apply additional configuration changes to this file through the command line or through the application. For more information, see  Platform Configuration Methods in the Configuration Guide.

Configure fluentd

When enabled, a separate set of fluentd pods is launched to collect and forward Trifacta logs.

fluentd.enabled — When set to true, a fluentd Statefulset is deployed to collect application logs.

You can specify values overrides for the fluentd chart in the following manner:

Code Block
fluentd:
  image:
    repository: fluent/fluentd-kubernetes-daemonset
    tag: "v1.10.4-debian-cloudwatch-1.0"

See charts/fluentd/values.yaml in the helm package for supported values.


fluentd-daemonset.enabled — When set to true, a fluentd DaemonSet is deployed to collect node, Kubernetes dataplane, and cluster logs.

If either of the above fluentd logging options is enabled, the following must be specified:


global.cluster.name — This value is the name of the EKS cluster that you created.

global.cluster.region — This value is the name of the region where the EKS cluster was created.

Configure fluentd log destination

Optionally, you can enable fluentd to collect application logs.

Log destinations:

The log source for fluentd is the Trifacta log directory.

The log destination must be configured. For more information on the fluentd output plugins, see

  1. Create a logdestination.conf configuration file for your log destination, and load it into a ConfigMap:

    Code Block
    kubectl create configmap fluentd-log-destination --from-file logdestination.conf

  2. The logdestination.conf file must be valid fluentd configuration. The following example logdestination.conf file pushes Trifacta application logs to AWS CloudWatch:

    Code Block
        <label @NORMAL>
          <match app.*>
            @type cloudwatch_logs
            @id out_cloudwatch_logs_application
            region "#{ENV.fetch('REGION')}"
            log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application"
            log_stream_name_key stream_name
            auto_create_stream true
            json_handler yajl
            <buffer>
              flush_interval 5
              chunk_limit_size 2m
              queued_chunks_limit_size 32
              retry_forever true
            </buffer>
          </match>
        </label>

    For more information on fluentd configuration file syntax, see

  3. When configured, the logdestination.conf file is added as an add-on to the prepackaged fluentd configuration for the Trifacta platform.


Install software

After you have configured the values override file, you can use the following command to install the deployment using helm:

Code Block
helm install trifacta <trifacta-helm-package-tgz-file> --namespace <namespace> --values <path-to-values-override-file>


  • trifacta-helm-package-tgz-file = the name of the Helm package that you downloaded from the Trifacta FTP site.
  • namespace = the AWS Kubernetes namespace value.
  • path-to-values-override-file = the path in your local environment to the values override file.
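With concrete (hypothetical) values substituted, the assembled command looks like this. The sketch below just prints the command so you can check it before running:

```shell
HELM_PACKAGE="trifacta-helm-package-x.y.z.tgz"  # hypothetical package filename
NAMESPACE="trifacta"                            # hypothetical namespace
VALUES_FILE="./values.override.yaml"

cmd="helm install trifacta $HELM_PACKAGE --namespace $NAMESPACE --values $VALUES_FILE"
echo "$cmd"
```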

Acquire service URL

Use the following command to retrieve the service URL.


NOTE: The service URL is used to access the Trifacta application, where you complete the configuration process and through which users access the platform.
Code Block
kubectl get svc trifacta -o json | jq -r '.status.loadBalancer.ingress[0].hostname'
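If jq is not available, the hostname can be extracted with sed instead. This is a sketch against an abbreviated sample of the JSON that the kubectl command above returns; the hostname shown is hypothetical:

```shell
# Abbreviated sample of "kubectl get svc trifacta -o json" output.
svc_json='{"status":{"loadBalancer":{"ingress":[{"hostname":"abc123.elb.amazonaws.com"}]}}}'

# Pull out the first "hostname" value without jq.
hostname=$(echo "$svc_json" | sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p')
echo "$hostname"
```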

Verify access to the application

Copy and paste the service URL into a supported web browser. For more information on supported web browsers, see Desktop Requirements in the Planning Guide.


Tip: You can map CNAME/ALIAS against this service URL through Route53 configurations. For more information, see

The login screen for the Trifacta application should be displayed. Log in to the application using the admin credentials.


You should change the administrator password as soon as you log in. For more information, see Change Admin Password in the Admin Guide.

For more information, see Login.

Other Commands

Scale platform

Scale the number of Trifacta platform pods through kubectl:

Code Block
kubectl scale statefulset trifacta --replicas=<Desired number of pods>

Restart platform

Restart the Trifacta platform through kubectl:

Code Block
kubectl rollout restart statefulset trifacta

Delete pods

Use the following command to delete the Trifacta pods:

Code Block
kubectl delete statefulset trifacta



Backup

Databases

By default, Amazon RDS performs periodic backups of your installed databases.

For more information on manual backup of the databases, see Backup and Recovery in the Admin Guide.

EFS mounts

For more information on backing up your EFS mounts through AWS, see


Set S3 as base storage layer

You must configure access to S3.


NOTE: If you are publishing to S3, 50 GB or more is recommended for storage per node. Additional disk space should be reserved for a higher number of concurrent users or larger data volumes. You can also enable fast upload to decrease disk requirements.

For more information, see Enable S3 Access in the Configuration Guide.

S3 must be set as the base storage layer. For more information, see Set Base Storage Layer in the Configuration Guide.

Upload the license file

When the platform is first installed, a temporary license is provided. This license key must be replaced by the license key that was provided to you. For more information, see License Key in the Admin Guide.

Configure for EMR

Additional configuration is required to enable the 

D s platform
 to run jobs on the EMR cluster. For more information, see Configure for EMR in the Configuration Guide.