This guide steps through the process of acquiring and deploying a Docker image of the Designer Cloud Powered by Trifacta® platform in your Docker environment. Optionally, you can build the Docker image locally, which enables further configuration options.
- Designer Cloud Enterprise Edition deployed into a customer-managed environment: On-premises, AWS, or Azure.
- PostgreSQL 12.3 or MySQL 5.7 installed either:
  - Locally, on the same machine
  - On a remote server
- On-premises Hadoop:
- Connected to a supported Hadoop cluster.
- Kerberos integration is supported.
NOTE: For Docker installs and upgrades, only the dependencies for the latest supported version of each supported major Hadoop distribution are available for use after upgrade. For more information on the supported versions, please see the hadoop-deps directory in the installer. Dependencies for versions other than those available in the installer are not supported.
- You cannot upgrade to a Docker image from a non-Docker deployment.
- You cannot switch an existing installation to a Docker image.
- Supported distributions of Cloudera or Hortonworks:
- The base storage layer of the platform must be S3 or ABFSS.
- High availability for the Designer Cloud Powered by Trifacta platform in Docker is not supported.
- SSO integration is not supported.
Orchestration is supported through Docker Compose only.
- Docker version 17.12 or later.
- Docker Compose 1.24.1. Your versions of Docker and Docker Compose must be compatible with each other.
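As a quick sanity check before proceeding, the installed versions can be reported from the command line. This is a minimal sketch; it assumes the docker and docker-compose binaries are on your PATH, and the minimum version strings come from the requirements above:

```shell
# Minimal sketch: confirm Docker and Docker Compose are installed and
# report their versions (minimums per this guide: 17.12 and 1.24.1).
DOCKER_MIN="17.12"
COMPOSE_MIN="1.24.1"

if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "docker not found (need ${DOCKER_MIN} or later)"
fi

if command -v docker-compose >/dev/null 2>&1; then
  docker-compose --version
else
  echo "docker-compose not found (need ${COMPOSE_MIN})"
fi
```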
Before you begin a Dockerized install, please verify that your enterprise infrastructure meets the following integration requirements.
NOTE: The Docker image contains all components and requirements for the Alteryx node for the appropriate infrastructure. In the following pages, you should verify connectivity, account permissions, cluster and datastore availability, and other aspects of connecting the Alteryx node to your infrastructure resources.
| Environment | Instructions |
|---|---|
| On-premises Hadoop | Install On-Premises |
| AWS | Install for AWS |
| Azure | Install for Azure |
| Requirement | Minimum | Recommended |
|---|---|---|
| CPU cores | 8 | 16 |
| Available RAM | 64 GB | 128 GB |
Installation or upgrade of the product in a Dockerized environment requires installation of the appropriate database client on the Alteryx node.
| Database | Client installation |
|---|---|
| PostgreSQL 12.3 | The database client is included as part of the image and is automatically installed. |
| MySQL 5.7 | The database client must be downloaded and installed by the customer. It is not available in the Docker image and must be referenced through the Docker image file. |
NOTE: Before you perform an upgrade of your deployment that connects to a MySQL database, please contact Alteryx Customer Success and Services.
Review the Browser Requirements in the Planning Guide.
NOTE: Designer Cloud Enterprise Edition requires the installation of a supported browser on each desktop.
Acquire your License Key.
You can acquire the latest Docker image using one of the following methods:
- Acquire from FTP site.
- Build your own Docker image.
Acquire from FTP site
- Download the following files from the FTP site, where x.y.z refers to the version number:
  - trifacta-docker-setup-bundle-x.y.z.tar
  - trifacta-docker-image-x.y.z.tar
- Untar the setup bundle:

tar xvf trifacta-docker-setup-bundle-x.y.z.tar

Files are extracted into a docker folder. Key files:
| File | Description |
|---|---|
| docker-compose-local-postgres.yaml | Runtime configuration file for the Docker image when PostgreSQL is to be running on the same machine. More information is provided below. |
| docker-compose-local-mysql.yaml | Runtime configuration file for the Docker image when MySQL is to be running on the same machine. More information is provided below. |
| docker-compose-remote-db.yaml | Runtime configuration file for the Docker image when the database is deployed on a remote server. NOTE: You must manage this instance of the database. More information is provided below. |
| docker-compose-remote-db-postgres-s3.yaml | Runtime configuration file for the Docker image when the Postgres database is deployed in AWS, and Designer Cloud Enterprise Edition is configured for S3 + EMR. |
| docker-compose-remote-db-postgres-databricks-adls.yaml | Runtime configuration file for the Docker image when the Postgres database is deployed in Azure, and Designer Cloud Enterprise Edition is configured for ADLS Gen2 + Databricks. |

The folder also includes:
- Instructions for running the Alteryx container on AWS. NOTE: These instructions are referenced later in this workflow.
- Instructions for running the Alteryx container on Azure. NOTE: These instructions are referenced later in this workflow.
- Instructions for building the Alteryx container. NOTE: This file does not apply if you are using the provided Docker image.
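After extraction, you can sanity-check that the key compose files are present before going further. A hedged sketch; the docker folder name and the file names come from this guide:

```shell
# Check that the extracted `docker` folder contains the expected
# compose files (names from this guide).
BUNDLE_DIR="docker"
MISSING=0
for f in docker-compose-local-postgres.yaml \
         docker-compose-local-mysql.yaml \
         docker-compose-remote-db.yaml; do
  if [ -e "${BUNDLE_DIR}/${f}" ]; then
    echo "found: ${f}"
  else
    echo "missing: ${f}"
    MISSING=$((MISSING + 1))
  fi
done
echo "missing count: ${MISSING}"
```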
Load the Docker image into your local Docker environment:
docker load < trifacta-docker-image-x.y.z.tar
Confirm that the image has been loaded. Execute the following command, which should list the Docker image:

docker images

You can now configure the Docker image. Please skip ahead to the Configure Docker Image section.
Build your own Docker image
As needed, you can build your own Docker image.
- Docker version 17.12 or later.
- Docker Compose 1.24.1. Your versions of Docker and Docker Compose must be compatible with each other.
Acquire the RPM file from the FTP site:
NOTE: You must acquire the el7 RPM file for this release.
- In your Docker environment, copy the trifacta-server*.rpm file to the same level as the Dockerfile.
- Verify that the docker-files folder and its contents are present.
Use the following command to build the image:
docker build -t trifacta/server-enterprise:latest .
This process could take about 10 minutes. When it is completed, you should see the built image in the Docker list of local images.
NOTE: To reduce the size of the Docker image, the Dockerfile installs the trifacta-server RPM file in one stage and then copies over the results to the final stage. The RPM is not actually installed in the final stage. All of the files are properly located.
- You can now configure the Docker image.
Configure Docker Image
Before you start the Docker container, you should review the properties for the Docker image. In the provided image, please open the appropriate docker-compose file:
| File | Description |
|---|---|
| docker-compose-local-postgres.yaml | Database properties in this file are pre-configured to work with the installed instance of PostgreSQL, although you may wish to change some of the properties for security reasons. |
| docker-compose-local-mysql.yaml | Database properties in this file are pre-configured to work with the installed instance of MySQL, although you may wish to change some of the properties for security reasons. |
| docker-compose-remote-db.yaml | The Alteryx databases are to be installed on a remote server that you manage. NOTE: Additional configuration is required. |
| docker-compose-remote-db-postgres-s3.yaml | The Alteryx databases are to be installed on a remote Postgres server in AWS that you manage. |
| docker-compose-remote-db-postgres-databricks-adls.yaml | The Alteryx databases are to be installed on a remote Postgres server in Azure that you manage. |
NOTE: You may want to create a backup of this file first.
Key general properties:
NOTE: Avoid modifying properties that are not listed below.
| Property | Description |
|---|---|
| image | This reference must match the name of the image that you have acquired. |
| container_name | Name of the container in your Docker environment. |

The listening port property defines the listening port for the Designer Cloud application. The default is 3005. NOTE: If you must change the listening port, additional configuration is required after the image is deployed. See Change Listening Port.
These properties pertain to the database installation to which the Designer Cloud application connects.
| Property | Description |
|---|---|
| DB_TYPE | Set this value to match your database type. |
| DB_HOST_NAME | Hostname of the machine hosting the databases. Leave the default value for a local installation; modify this value for a remote installation. |
| DB_HOST_PORT | (Remote only) Port number to use to connect to the databases. NOTE: If you modify this value, additional configuration is required after installation is complete. See Change Database Port in the Databases Guide. |
| DB_ADMIN_USERNAME | Admin username to be used to create DB roles/databases. Modify this value for a remote installation. NOTE: If you modify this value, additional configuration is required. Please see the documentation for your database version. |
| DB_ADMIN_PASSWORD | Admin password to be used to create DB roles/databases. Modify this value for a remote installation. |
| DB_AZURE_INSTANCE_NAME | Name of the Azure Database for PostgreSQL server. This setting is applicable only when the setup is on Azure with Databricks and ADLS Gen2. |
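In a compose file, such properties typically appear under the service's environment block. The fragment below is illustrative only: the property names match the table above, but the service layout and every value shown are placeholder assumptions, not defaults from the product.

```yaml
# Illustrative fragment only; all values below are placeholders.
services:
  trifacta:
    environment:
      DB_TYPE: "postgres"              # match your database type
      DB_HOST_NAME: "db.example.com"   # remote database host (placeholder)
      DB_HOST_PORT: "5432"             # PostgreSQL default port
      DB_ADMIN_USERNAME: "admin"       # placeholder
      DB_ADMIN_PASSWORD: "change-me"   # placeholder
```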
If your Hadoop cluster is protected by Kerberos, please review the following properties.
- Keytab path: Full path inside the container where the Kerberos keytab file is located. NOTE: The keytab file must be imported and mounted to this location. Configuration details are provided later.
- Kerberos configuration path: Full path inside the container where the Kerberos configuration file is located.
Hadoop distribution client JARs:
Please enable the appropriate path to the client JAR files for your Hadoop distribution. In the following example, the Cloudera path has been enabled, and the Hortonworks path has been disabled:
# Mount folder from outside for necessary hadoop client jars
# For CDH
- /opt/cloudera:/opt/cloudera
# For HDP
#- /usr/hdp:/usr/hdp
Please modify these lines if you are using Hortonworks.
These properties govern where volumes are mounted in the container.
NOTE: These values should not be modified unless necessary.
- Full path in the container to the Alteryx configuration directory.
- Full path in the container to the Alteryx logs directory.
- Full path in the container to the Alteryx license directory.
After you have performed the above configuration, execute the following to initialize the Docker container directories:
docker-compose -f <docker-compose-filename>.yaml run --no-deps --rm trifacta initfiles
When the above is started for the first time, the following directories are created on the localhost:
| Directory | Description |
|---|---|
| ./trifacta-data | Used by the Alteryx container to expose its data directories on the localhost. |
| ./trifacta-license | Place the license.json file in this directory. |
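The license staging step can be sketched as follows. This assumes the default ./trifacta-license directory created by initfiles and a license.json file in the current working directory; both names come from this guide:

```shell
# Stage the license key file where the container expects it.
LICENSE_DIR="./trifacta-license"
mkdir -p "${LICENSE_DIR}"
if [ -f "license.json" ]; then
  cp "license.json" "${LICENSE_DIR}/"
  echo "license staged in ${LICENSE_DIR}"
else
  echo "license.json not found in current directory"
fi
```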
- Generate a .sql file containing the SQL statements to create the users and databases necessary for Alteryx services:
docker-compose -f <docker-compose-filename>.yaml run --no-deps --rm trifacta initdatabase
The following file is created on the localhost:

| File | Description |
|---|---|
| ./trifacta-data/db_setup/trifacta_<database_type>_DB_objects.sql | Used by the Alteryx container to expose the SQL statements that create the required users and databases. |
- Create users and databases. For PostgreSQL:

docker-compose -f <docker-compose-filename>.yaml run postgresdb sh -c "PGPASSWORD=<DB_ADMIN_PASSWORD> psql --username=<DB_ADMIN_USERNAME> --host=<DB_HOST_NAME> --port=<DB_HOST_PORT> --dbname=postgres -f /opt/trifacta/db_setup/trifacta_<DB_TYPE>_DB_objects.sql"

For MySQL:

docker-compose -f <docker-compose-filename>.yaml run mysqldb sh -c "mysql --host=<DB_HOST_NAME> --port=<DB_HOST_PORT> --user=<DB_ADMIN_USERNAME> --password=<DB_ADMIN_PASSWORD> --database=mysql < /opt/trifacta/db_setup/trifacta_<DB_TYPE>_DB_objects.sql"
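As a worked example of the placeholder substitution, the following fills hypothetical values into the PostgreSQL command above. Every value here is an example only, and the script echoes the resulting command rather than running it:

```shell
# Hypothetical substitution for the PostgreSQL user/database creation step.
COMPOSE_FILE="docker-compose-remote-db.yaml"   # from the setup bundle
DB_TYPE="postgres"                             # example value
DB_HOST_NAME="db.example.com"                  # example value
DB_HOST_PORT="5432"                            # PostgreSQL default
DB_ADMIN_USERNAME="admin"                      # example value
SQL_FILE="/opt/trifacta/db_setup/trifacta_${DB_TYPE}_DB_objects.sql"

echo "docker-compose -f ${COMPOSE_FILE} run postgresdb sh -c \
  \"psql --username=${DB_ADMIN_USERNAME} --host=${DB_HOST_NAME} \
  --port=${DB_HOST_PORT} --dbname=postgres -f ${SQL_FILE}\""
```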
Run configuration and database migrations:
NOTE: During installation, the following command also creates required tables in the above databases.
docker-compose -f <docker-compose-filename>.yaml run --no-deps --rm trifacta run-migrations
Start Alteryx container:
docker-compose -f <docker-compose-filename>.yaml up -d trifacta
NOTE: If the Alteryx container is running but nothing is listening at port 3005, please confirm that you have started the container using the appropriate docker-compose file.
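To check whether anything is listening at the default port, a simple probe like the following can help. This sketch assumes curl is available and that the application answers HTTP on port 3005, the default noted in this guide:

```shell
# Probe the application's listening port (default 3005 per this guide).
PORT=3005
if curl -fsS -o /dev/null "http://localhost:${PORT}" 2>/dev/null; then
  echo "application is responding on port ${PORT}"
else
  echo "nothing responding on port ${PORT}; check the compose file and container logs"
fi
```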
Import Additional Configuration Files
After you have started the new container, additional configuration files must be imported.
Import license key file
The Alteryx license file must be staged for use by the platform. Stage the file in the following location in the container:
NOTE: If you are using a non-default path or filename, you must update the platform configuration accordingly.
Additional setup for Azure
For more information on setup on Azure using ADLS Gen2 storage, see ADLS Gen2 Access.
Additional setup for Hadoop on-premises
Import Hadoop distribution libraries
If the container you are creating is on the edge node of your Hadoop cluster, you must provide the Hadoop libraries.
- You must mount the Hadoop distribution libraries into the container. For more information on the libraries, see the documentation for your Hadoop distribution.
- The Docker Compose file must be made aware of these libraries. Details are below.
Import Hadoop cluster configuration files
Some core cluster configuration files from your Hadoop distribution must be provided to the container. These files must be copied into the following directory within the container:
For more information, see Configure for Hadoop in the Configuration Guide.
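The copy step might look like the following sketch. The container name trifacta matches the compose service used in this guide; the source directory is a typical client-configuration location and an assumption here, the target directory inside the container is left as a placeholder, and the commands are echoed rather than executed:

```shell
# Sketch: copy core Hadoop cluster configuration files into the container.
# <target-conf-dir> is a placeholder for the directory named in this guide.
SRC_DIR="/etc/hadoop/conf"   # typical client-config location; adjust as needed
for f in core-site.xml hdfs-site.xml yarn-site.xml; do
  echo "docker cp ${SRC_DIR}/${f} trifacta:<target-conf-dir>/${f}"
done
```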
Install Kerberos client
If Kerberos is enabled, you must install the Kerberos client and keytab in the container. Copy the keytab file to the following staging location:
See Configure for Kerberos Integration in the Configuration Guide.
Perform configuration changes as necessary
The primary configuration file for the platform is in the following location in the launched container:
NOTE: Unless you are comfortable working with this file, you should avoid direct edits to it. All subsequent configuration can be applied from within the application, which supports some forms of data validation. It is possible to corrupt the file using direct edits.
Configuration topics are covered later.
Start and Stop the Container
Stops the container but does not destroy it.
NOTE: Application and local database data is not destroyed. As long as the .yaml properties point to the correct location of the *-data files, data should be preserved. You can start new containers to use this data, too. Do not change ownership on these directories.
docker-compose -f <docker-compose-filename>.yaml stop
Restarts an existing container.
docker-compose -f <docker-compose-filename>.yaml start
Recreates a container using existing local data.
docker-compose -f <docker-compose-filename>.yaml up --force-recreate -d
Stop and destroy the container
Stops the container and destroys it.
The following also destroys all application configuration, logs, and database data. You may want to back up these directories first.
docker-compose -f <docker-compose-filename>.yaml down
Local PostgreSQL database:

sudo rm -rf trifacta-data/ postgres-data/

Local MySQL or remote database:

sudo rm -rf trifacta-data/
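Because down plus directory removal is destructive, a quick backup beforehand is prudent. A minimal sketch; the directory names come from this guide, and the archive name is an arbitrary example:

```shell
# Back up localhost data directories before destroying the container.
STAMP="$(date +%Y%m%d)"
BACKUP="trifacta-backup-${STAMP}.tar.gz"
if [ -d "trifacta-data" ]; then
  # Include postgres-data/ when it exists; fall back to trifacta-data/ only.
  tar czf "${BACKUP}" trifacta-data/ postgres-data/ 2>/dev/null \
    || tar czf "${BACKUP}" trifacta-data/
  echo "created ${BACKUP}"
else
  echo "no trifacta-data/ directory found; nothing to back up"
fi
```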
Verify access to the server where the Designer Cloud Powered by Trifacta platform is to be installed.
Cluster Configuration: Additional steps are required to integrate the Designer Cloud Powered by Trifacta platform with the cluster. See Prepare Hadoop for Integration with the Platform in the Planning Guide.
- Start the platform within the container. See Start and Stop the Platform.
After installation is complete, additional configuration is required. You can complete this configuration from within the application.
- Log in to the application. See Login.
- The primary configuration interface is the Admin Settings page. From the left menu, select User menu > Admin console > Admin settings. For more information, see Admin Settings Page in the Admin Guide.
- In the Admin Settings page, you should do the following:
- Configure password criteria. See Configure Password Criteria.
- Change the Admin password. See Change Admin Password.
- Workspace-level configuration can also be applied. From the left menu, select User menu > Admin console > Workspace settings. For more information, see Workspace Settings Page in the Admin Guide.
The Designer Cloud Powered by Trifacta platform requires additional configuration for a successful integration with the datastore. Please review and complete the necessary configuration steps. For more information, see Configure in the Configuration Guide.