This guide steps through the process of acquiring and deploying a Docker image of the Designer Cloud Powered by Trifacta® platform in your Docker environment. Optionally, you can build the Docker image locally, which enables further configuration options.
- Designer Cloud Enterprise Edition deployed into a customer-managed environment: On-premises, AWS, or Azure.
- PostgreSQL 9.6 or MySQL 5.7, installed either on the same machine as the platform or on a remote server.
- Connected to a supported Hadoop cluster.
- Kerberos integration is supported.
NOTE: For Docker installs and upgrades, only the dependencies for the latest supported version of each supported major Hadoop distribution are available for use after upgrade. For more information on the supported versions, see the hadoop-deps directory in the installer. Dependencies for versions other than those available in the installer are not supported.
- You cannot upgrade to a Docker image from a non-Docker deployment.
- You cannot switch an existing installation to a Docker image.
- A supported distribution of Cloudera or Hortonworks.
- The base storage layer of the platform must be HDFS. Base storage of S3 is not supported.
- High availability for the Designer Cloud Powered by Trifacta platform in Docker is not supported.
- SSO integration is not supported.
- Orchestration is supported through Docker Compose only.
- Docker version 17.12 or later. Your Docker version must be compatible with the version of Docker Compose below.
- Docker Compose 1.24.1. This version must be compatible with your version of Docker.
|Requirement||Minimum||Recommended|
|CPU Cores||2 CPU||4 CPU|
|Available RAM||8 GB RAM||10+ GB RAM|
Review the Desktop Requirements in the Planning Guide.
NOTE: Designer Cloud Enterprise Edition requires the installation of a supported browser on each desktop.
Acquire your License Key.
You can acquire the latest Docker image using one of the following methods:
- Acquire from FTP site.
- Build your own Docker image.
Acquire from FTP site
- Download the following files from the FTP site, where x.y.z refers to the version number:
- trifacta-docker-setup-bundle-x.y.z.tar
- trifacta-docker-image-x.y.z.tar
- Extract the setup bundle:

tar xvf trifacta-docker-setup-bundle-x.y.z.tar

Files are extracted into a docker folder. Key files:
|File||Description|
|docker-compose-local-postgres.yaml||Runtime configuration file for the Docker image when PostgreSQL is to be running on the same machine. More information is provided below.|
|docker-compose-local-mysql.yaml||Runtime configuration file for the Docker image when MySQL is to be running on the same machine. More information is provided below.|
|docker-compose-remote-db.yaml||Runtime configuration file for the Docker image when the database is deployed on a remote server. NOTE: You must manage this instance of the database. More information is provided below.|
Instructions for running the Alteryx container
NOTE: These instructions are referenced later in this workflow.
Instructions for building the Alteryx container
NOTE: This file does not apply if you are using the provided Docker image.
Load the Docker image into your local Docker environment:
docker load < trifacta-docker-image-x.y.z.tar
Confirm that the image has been loaded by listing your local Docker images (docker images); the loaded image should appear in the list.
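For example, a quick check might look like the following; the repository name trifacta is an assumption based on the image tag used elsewhere in this guide:

```shell
# List local Docker images and keep only Trifacta entries.
# A non-empty result confirms the image loaded successfully.
docker images | grep -i trifacta
```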
You can now configure the Docker image. Skip the next section and continue with Configure Docker Image.
Build your own Docker image
As needed, you can build your own Docker image.
Docker version 17.12 or later. Your Docker version must be compatible with the version of Docker Compose below.
Docker Compose 1.24.1. This version must be compatible with your version of Docker.
Acquire the RPM file from the FTP site:
NOTE: You must acquire the el7 RPM file for this release.
- In your Docker environment, copy the trifacta-server*.rpm file to the same level as the Dockerfile.
- Verify that the docker-files folder and its contents are present.
Use the following command to build the image:
docker build -t trifacta/server-enterprise:latest .
This process can take about 10 minutes. When it completes, the built image appears in your list of local Docker images.
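To double-check the result, you can list the image by the tag passed to docker build; this is a sketch, so adjust the tag if you chose a different one:

```shell
# Show only the image built above; an entry here confirms the build.
docker images trifacta/server-enterprise
```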
NOTE: To reduce the size of the Docker image, the Dockerfile installs the trifacta-server RPM file in one stage and then copies over the results to the final stage. The RPM is not actually installed in the final stage. All of the files are properly located.
- You can now configure the Docker image.
Configure Docker Image
Before you start the Docker container, you should review the properties for the Docker image. In the provided image, open the docker-compose file appropriate to your database configuration:
|docker-compose-local-postgres.yaml||Database properties in this file are pre-configured to work with the installed instance of PostgreSQL, although you may wish to change some of the properties for security reasons.|
|docker-compose-local-mysql.yaml||Database properties in this file are pre-configured to work with the installed instance of MySQL, although you may wish to change some of the properties for security reasons.|
|docker-compose-remote-db.yaml||The Alteryx databases are to be installed on a remote server that you manage. NOTE: Additional configuration is required.|
NOTE: You may want to create a backup of this file first.
Key general properties:
NOTE: Avoid modifying properties that are not listed below.
|image||This reference must match the name of the image that you have acquired.|
|container_name||Name of container in your Docker environment.|
Defines the listening port for the Designer Cloud application. Default is
NOTE: If you must change the listening port, additional configuration is required after the image is deployed. See Change Listening Port.
These properties pertain to the database installation to which the Designer Cloud application connects.
If set to
NOTE: This step applies only if you are starting the container for the first time, and the databases will be installed locally.
|DB_TYPE||Set this value to |
|DB_HOST_NAME||Hostname of the machine hosting the databases. Leave value as |
(Remote only) Port number to use to connect to the databases. Default is
NOTE: If you are modifying, additional configuration is required after installation is complete. See Change Database Port in the Databases Guide.
Admin username to be used to create DB roles/databases. Modify this value for remote installation.
NOTE: If you are modifying this value, additional configuration is required. Please see the documentation for your database version.
|DB_ADMIN_PASSWORD||Admin password to be used to create DB roles/databases. Modify this value for remote installation.|
If your Hadoop cluster is protected by Kerberos, please review the following properties.
Full path inside of the container where the Kerberos keytab file is located. Default value:
NOTE: The keytab file must be imported and mounted to this location. Configuration details are provided later.
Full path inside of the container where the Kerberos
Hadoop distribution client JARs:
Please enable the appropriate path to the client JAR files for your Hadoop distribution. In the following example, the Cloudera path has been enabled, and the Hortonworks path has been disabled:
# Mount folder from outside for necessary hadoop client jars
# For CDH
- /opt/cloudera:/opt/cloudera
# For HDP
#- /usr/hdp:/usr/hdp
If you are using Hortonworks, enable the HDP path and comment out the CDH path instead.
These properties govern where volumes are mounted in the container.
NOTE: These values should not be modified unless necessary.
Full path in container to the Alteryx configuration directory. Default:
Full path in container to the Alteryx logs directory. Default:
Full path in container to the Alteryx license directory. Default:
Start Server Container
After you have performed the above configuration, execute the following to initialize the Docker container:
docker-compose -f <docker-compose-filename>.yaml run trifacta initfiles
When the above is started for the first time, the following directories are created on the localhost:
Used by the Alteryx container to expose the
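Putting the pieces together, a first-time startup might look like the following sketch. It assumes the local-PostgreSQL compose file; substitute the file that matches your database topology:

```shell
# Choose the compose file for your setup:
#   docker-compose-local-postgres.yaml : PostgreSQL on this machine
#   docker-compose-local-mysql.yaml    : MySQL on this machine
#   docker-compose-remote-db.yaml      : database on a remote server
COMPOSE_FILE="docker-compose-local-postgres.yaml"

# First run only: initialize configuration files on the localhost.
docker-compose -f "$COMPOSE_FILE" run trifacta initfiles

# Bring the container up in the background.
docker-compose -f "$COMPOSE_FILE" up -d
```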
Import Additional Configuration Files
After you have started the new container, additional configuration files must be imported.
Import license key file
The Alteryx license file must be staged for use by the platform. Stage the file in the following location in the container:
NOTE: If you are using a non-default path or filename, you must update the
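One way to stage the license file is docker cp. The container name and in-container path below are hypothetical placeholders; use the container_name from your compose file and the license directory configured for your deployment:

```shell
# Hypothetical values -- substitute your container name and the
# license directory configured for your deployment.
CONTAINER="trifacta"
LICENSE_DIR="/opt/trifacta/license"   # placeholder path

docker cp license.json "${CONTAINER}:${LICENSE_DIR}/license.json"
```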
Import Hadoop distribution libraries
If the container you are creating is on the edge node of your Hadoop cluster, you must provide the Hadoop libraries.
- You must mount the Hadoop distribution libraries into the container. For more information on the libraries, see the documentation for your Hadoop distribution.
- The Docker Compose file must be made aware of these libraries. Details are below.
Import Hadoop cluster configuration files
Some core cluster configuration files from your Hadoop distribution must be provided to the container. These files must be copied into the following directory within the container:
For more information, see Configure for Hadoop in the Configuration Guide.
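As a sketch, the copy can be done with docker cp from the edge node. The container name, source directory, and destination directory below are placeholders; use the directory named in Configure for Hadoop:

```shell
# Hypothetical values -- substitute your container name and the
# destination directory from the Configuration Guide.
CONTAINER="trifacta"
SRC_DIR="/etc/hadoop/conf"                  # typical edge-node location
DEST_DIR="/opt/trifacta/conf/hadoop-site"   # placeholder path

# Core cluster configuration files commonly required for integration.
for f in core-site.xml hdfs-site.xml yarn-site.xml; do
  docker cp "${SRC_DIR}/${f}" "${CONTAINER}:${DEST_DIR}/${f}"
done
```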
Install Kerberos client
If Kerberos is enabled, you must install the Kerberos client and keytab in the container. Copy the keytab file to the following staging location:
See Configure for Kerberos Integration in the Configuration Guide.
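For example, the keytab can be copied into the container with docker cp. The container name and destination path below are placeholders; the destination must match the keytab path property in your compose file:

```shell
# Hypothetical values -- the destination must match the keytab
# location configured in your compose file.
CONTAINER="trifacta"
KEYTAB_DEST="/trifacta/conf/trifacta.keytab"   # placeholder path

docker cp trifacta.keytab "${CONTAINER}:${KEYTAB_DEST}"
```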
Perform configuration changes as necessary
The primary configuration file for the platform is in the following location in the launched container:
NOTE: Unless you are comfortable working with this file, you should avoid direct edits to it. All subsequent configuration can be applied from within the application, which supports some forms of data validation. It is possible to corrupt the file using direct edits.
Configuration topics are covered later.
Start and Stop the Container
Stops the container but does not destroy it.
NOTE: Application and local database data is not destroyed. As long as the .yaml properties point to the correct location of the *-data directories, data is preserved, and you can start new containers that use this data. Do not change ownership on these directories.
docker-compose -f <docker-compose-filename>.yaml stop
Restarts an existing container.
docker-compose -f <docker-compose-filename>.yaml start
Recreates a container using existing local data.
docker-compose -f <docker-compose-filename>.yaml up --force-recreate -d
Stop and destroy the container
Stops the container and destroys it.
The following commands also destroy all application configuration, logs, and database data. You may want to back up these directories first.
docker-compose -f <docker-compose-filename>.yaml down
Local PostgreSQL:

sudo rm -rf trifacta-data/ postgres-data/
Local MySQL or remote database:
sudo rm -rf trifacta-data/
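Before tearing down, a backup can be as simple as archiving the data directories. The local-PostgreSQL layout is shown; adjust the directory list for MySQL or remote-database deployments:

```shell
# Archive application data and local PostgreSQL data before teardown.
# Adjust the directory list for MySQL or remote-database deployments.
STAMP="$(date +%Y%m%d-%H%M%S)"
sudo tar czf "trifacta-backup-${STAMP}.tar.gz" trifacta-data/ postgres-data/
```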
Verify access to the server where the Designer Cloud Powered by Trifacta platform is to be installed.
Cluster Configuration: Additional steps are required to integrate the Designer Cloud Powered by Trifacta platform with the cluster. See Prepare Hadoop for Integration with the Platform in the Planning Guide.
- Start the platform within the container. See Start and Stop the Platform.
After installation is complete, additional configuration is required. You can complete this configuration from within the application.
- Log in to the application. See Login.
- The primary configuration interface is the Admin Settings page. From the left menu, select User menu > Admin console > Admin settings. For more information, see Admin Settings Page in the Admin Guide.
- In the Admin Settings page, you should do the following:
- Configure password criteria. See Configure Password Criteria.
- Change the Admin password. See Change Admin Password.
- Workspace-level configuration can also be applied. From the left menu, select User menu > Admin console > Workspace settings. For more information, see Workspace Settings Page in the Admin Guide.
The Designer Cloud Powered by Trifacta platform requires additional configuration for a successful integration with the datastore. Please review and complete the necessary configuration steps. For more information, see Configure in the Configuration Guide.