This install process applies to installing the Trifacta platform in Microsoft Azure.
Azure Marketplace deployments:
NOTE: Content in this section does not apply to deployments from the Azure Marketplace, which provide fewer deployment and configuration options. For more information, see the Azure Marketplace.
For more information on deployment scenarios, see Supported Deployment Scenarios for Azure.
For more information on the limitations of this deployment scenario, see Product Limitations.
Depending on which of the following Azure components you are deploying, additional prerequisites and limitations may apply. Please review those sections as well.
Before you begin, please verify that you have completed the following:
Deploy the Cluster
Deploy and provision a cluster of one of the supported types. The Trifacta platform integrates with this cluster for job execution.
NOTE: Before you deploy, you should review cluster sizing options. For guidance, please contact your Trifacta representative.
Primary storage of the cluster may be set to an existing Azure Data Lake Store or Blob Storage.
- Any additional storage associated with the cluster is not available through the Trifacta application.
For more information, see Supported Deployment Scenarios for Azure.
Deploy the Trifacta node
In your Azure infrastructure, you must deploy a suitable VM for the installation of the Trifacta platform.
The operating system requirements for the VM vary depending on the type of cluster that you are using for job execution.
|Cluster Type||Supported O/S for VM||Notes|
|Azure Databricks||CentOS and Ubuntu|| |
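Before installing, you can confirm which operating system the VM is running. This is a quick check, not part of the official install steps:

```shell
# Print the distribution name and version of the VM's operating system
# to confirm it matches a supported O/S (CentOS or Ubuntu).
grep -E '^(NAME|VERSION)=' /etc/os-release
```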
- When you configure the platform to integrate with the cluster, you must acquire some information about the cluster resources. For more information on the set of information to collect, see Pre-Install Checklist in the Install Preparation area.
- For more information, see System Requirements in the Install Preparation area.
- A set of ports must be opened on the VM for the platform. For more information, see System Ports in the Install Preparation area.
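As an illustrative sketch only, assuming a CentOS VM with firewalld and assuming the web application listens on port 3005 (confirm the actual port list in System Ports before opening anything):

```shell
# Sketch only: open an assumed platform port (3005/tcp) in firewalld.
# The port number is an assumption -- verify against System Ports.
sudo firewall-cmd --permanent --add-port=3005/tcp
sudo firewall-cmd --reload

# From a client machine, check that the port is reachable.
# (The hostname below is a placeholder.)
nc -zv platform-vm.example.com 3005
```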
For more information on the supported cluster types, see Supported Deployment Scenarios for Azure.
Prepare the cluster
Create the following directories, each of which is specified by a configuration property in the platform.
|Default HDFS path||Platform configuration property|
|/trifacta/dictionaries||hdfs.pathsConfig.dictionaries|
|/trifacta/libraries||hdfs.pathsConfig.libraries|
|/trifacta/queryResults||hdfs.pathsConfig.batchResults|
|/trifacta/tempfiles||hdfs.pathsConfig.tempFiles|
|/trifacta/uploads||hdfs.pathsConfig.fileUpload|
|/trifacta/.datasourceCache||hdfs.pathsConfig.globalDatasourceCache|
Change the ownership of the above directories to trifacta:trifacta or to the corresponding user and group values in your environment.
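The steps above can be sketched as follows, assuming the default /trifacta paths and the trifacta:trifacta ownership; substitute the values for your environment as needed:

```shell
# Sketch only: create the default platform directories in HDFS and set
# ownership. Adjust paths, user, and group to match your environment.
hdfs dfs -mkdir -p /trifacta/dictionaries /trifacta/libraries \
  /trifacta/queryResults /trifacta/tempfiles /trifacta/uploads \
  /trifacta/.datasourceCache
hdfs dfs -chown -R trifacta:trifacta /trifacta
```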
Please complete the following steps in the listed order.
1 - Install Software
Install the Trifacta platform software on the VM that you deployed above.
NOTE: You must follow the instructions provided for Ubuntu installation.
See Install Software.
2 - Install Databases
The platform requires several databases for storing metadata.
NOTE: The software assumes that you are installing the databases on a PostgreSQL server on the same node as the software. If you are not or are changing database names or ports, additional configuration is required as part of this installation process.
For more information, see Install Databases in the Databases Guide.
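As a quick sanity check, assuming a PostgreSQL server running locally on the default port (5432; adjust the host and port if your databases live elsewhere), you can confirm that the server is reachable and list its databases:

```shell
# Hypothetical check: confirm that PostgreSQL is running locally and list
# its databases. Host and port are assumptions for a default local install.
sudo -u postgres psql -h localhost -p 5432 -l
```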
3 - Start the platform
For more information, see Start and Stop the Platform.
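If the platform is registered as a system service, starting it and checking its status might look like the following sketch; the service name is an assumption, so consult Start and Stop the Platform for the exact commands for your release:

```shell
# Sketch only: the service name (trifacta) is an assumption -- see
# Start and Stop the Platform for the authoritative commands.
sudo service trifacta start
sudo service trifacta status
```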
4 - Login to the Application
After software and databases are installed, you can log in to the application to complete configuration. See Login.
As soon as you log in, you should change the password on the admin account. In the left menu bar, select Settings > Admin Settings. Scroll down to Manage Users. For more information, see Change Admin Password.
Tip: At this point, you can access the online documentation through the application. In the left menu bar, select Help menu > Product Docs. All of the following content, plus updates, is available online. See Documentation below.
After you have completed the above topics, you can complete the configuration for your deployment below.
NOTE: The following configuration topics are not part of this installation guide. You should log in to the application and access the links below.
- Configure for Azure: Configure the platform to work with Azure.
- Integrate with cluster: If the application is up and running, you can configure the platform to integrate with the backend cluster for running jobs. Choose one of the following:
- Azure Databricks
- Integrate with backend storage:
- Set base storage layer: The base storage layer must be set at the time of install and cannot be changed. See Set Base Storage Layer.
- Verify operations: At this point, you should be able to run a job. See Verify Operations.
- Create additional connections: Through connections, you can access other sources of data and, optionally, publish job results.
You can access complete product documentation online and in PDF format. From within the product, select Help menu > Product Docs.
The following configuration topics are relevant to Azure deployments. Please review them in order.
|Topic||Description|
|Supported Deployment Scenarios for Azure||Matrix of supported Azure components.|
|Configure for Azure||Top-level configuration topic on integrating the platform with Azure.|
|Configure for HDInsight||Review this section if you are integrating the platform with a pre-existing HDInsight cluster.|
|Configure for Azure Databricks||Review this section if you are integrating with a pre-existing Azure Databricks cluster.|
|Enable ADLS Access||Configuration to enable access to ADLS.|
|Enable WASB Access||Configuration to enable access to WASB.|
|Verify Operations||You should be able to verify platform operations by running a simple job at this time.|
Azure-specific relational connections are also available. To enable them, see Enable Relational Connections.
|Configure SSO for Azure AD||How to integrate the platform with Azure Active Directory for single sign-on (SSO).|