When deployed to Microsoft Azure, the Trifacta platform can be integrated with a pre-existing Microsoft Azure HDInsight (HDI) cluster.

Warning: If you created a new HDI cluster as part of your deployment of the platform from the Azure Marketplace, please skip this section. You may use it as reference in the future.
Supported Versions
This release supports integration with HDI 3.5 and HDI 3.6.
Limitations
For this release, the following limitations apply:
- The Trifacta platform must be installed on Azure.
- HDI does not support the client-server web sockets configuration used by the platform. This limitation results in diminished suggestions prompted by platform activities.
Prerequisites
This section makes the following assumptions:
- You have installed and configured the Trifacta platform onto an edge node of a pre-existing HDI cluster.
- You have installed WebWASB on the platform edge node.
Before You Begin
Create user account on the HDI cluster

The Trifacta platform accesses the cluster through a single user account.

UserID:

If possible, please create the user ID using the platform's default user ID.

- This user must be created on each data node in the cluster.
- This user should belong to the platform's default user group.
User requirements:
- (if integrating with WASB) Access to WASB
- Permission to run YARN jobs on the cluster.
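The following is a minimal sketch of how a cluster administrator might provision this account on a data node. The user name `trifacta` and group `trifactausers` are illustrative assumptions; substitute the IDs used in your environment.

```
# Illustrative only: create the platform group and user on a data node.
# "trifacta" and "trifactausers" are assumed names; use your environment's IDs.
sudo groupadd trifactausers
sudo useradd -g trifactausers trifacta

# Repeat on every data node in the cluster.
```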
Steps:
In the platform configuration, set the cluster paths in the following locations:

```
"hdfs.pathsConfig.fileUpload": "/trifacta/uploads",
"hdfs.pathsConfig.dictionaries": "/trifacta/dictionaries",
"hdfs.pathsConfig.batchResults": "/trifacta/queryResults",
```
Warning: Do not use the `/trifacta/uploads` directory. This directory is used for storing uploads and metadata, which may be used by multiple users. Manipulating files outside of the Trifacta application can destroy other users' data. Please use the tools provided through the interface for managing uploads to WASB.

Individual users can configure the output directory where exported results are stored. See Storage Config Page.
- Save your changes.
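If these paths do not already exist on the base storage layer, they can be created with standard Hadoop tooling. A hedged example, run from a cluster edge node; the ownership values are assumptions, so match them to your platform user and group:

```
# Illustrative only: create the platform directories on the cluster's
# default file system (WASB or ADLS, per fs.defaultFS).
hdfs dfs -mkdir -p /trifacta/uploads /trifacta/dictionaries /trifacta/queryResults

# Assumed user/group names; substitute the IDs used in your environment.
hdfs dfs -chown -R trifacta:trifactausers /trifacta
```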
Acquire cluster configuration files
You must acquire the configuration files from the HDI cluster and apply them to the Trifacta node.

Tip: Later, you configure the platform settings for accessing various components in the cluster. This host, port, and other information is available through these cluster configuration files.
Steps:
On any edge node of the cluster, acquire the .xml files from the following directory:

```
/etc/hadoop/conf
```

NOTE: If you are integrating with an instance of Hive on the cluster, you must also acquire the Hive configuration file: `/etc/hive/conf/hive-site.xml`.

These files must be copied to the following directory on the Trifacta node:

```
/trifacta/conf/hadoop-site
```
- Replace any existing files with these files.
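For example, assuming the Trifacta node can reach a cluster edge node over SSH (the host and user names below are placeholders), the files could be copied as follows:

```
# Run on the Trifacta node. "sshuser" and "edge-node-host" are placeholders.
scp "sshuser@edge-node-host:/etc/hadoop/conf/*.xml" /trifacta/conf/hadoop-site/

# Only if you are integrating with Hive on the cluster:
scp sshuser@edge-node-host:/etc/hive/conf/hive-site.xml /trifacta/conf/hadoop-site/
```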
Acquire version and build number
You must acquire the full version and build number of the underlying Hortonworks distribution. On any of the cluster nodes, navigate to `/usr/hdp`. The version and build number is referenced as a directory in this location, named in the following form:

```
A.B.C.D-X
```
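For example, listing the directory shows the installed version; the output below is illustrative only:

```
$ ls /usr/hdp
2.6.2.2-5  current
```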
For the rest of the configuration, the sample values for HDI 3.6 are referenced. Use the appropriate values for your distribution.
Supported HDInsight Distribution | Short Hortonworks Version | Example Full Hortonworks Version |
---|---|---|
3.5 | 2.5 | 2.5.6.2-9 |
3.6 | 2.6 | 2.6.2.2-5 |
Configure the HDI Cluster
The following configuration sections must be reviewed and completed.
Specify Storage Layer
In the Azure console, you must specify and retain the type of storage to apply. In the Storage tab of the cluster specification, the following storage layers are supported.
NOTE: After the base storage layer has been defined for the platform, it cannot be changed.

NOTE: If possible, you should reserve a dedicated cluster for use by the platform.

Tip: During installation of the platform, you specify the base storage layer, which should match the storage configured for the cluster.
Azure Storage Layer | Description | Platform storage protocol |
---|---|---|
ADLS Gen2 | ADLS Gen2 storage is not supported for HDInsight clusters. | |
ADLS Gen1 | Data Lake Store maps to ADLS Gen1 in the platform. | hdfs |
WASB | Azure storage leverages WASB, an abstraction layer on top of HDFS. | wasbs |
Specify Protocol
In the Ambari console, you must specify the communication protocol to use in the cluster.
NOTE: The cluster protocol must match the protocol in use by the platform.
Steps:
- In the Ambari console, please navigate to the following location: HDFS > Configs > Advanced > Advanced Core Site > fs.defaultFS.
- Set the value according to the following table:

Azure Storage Layer | Protocol (fs.defaultFS) value | Platform config value | Link |
---|---|---|---|
Azure Storage | wasbs://<containername>@<accountname>.blob.core.windows.net | "webapp.storageProtocol": "wasbs", | See Set Base Storage Layer. |
Data Lake Store | adl://home | "webapp.storageProtocol": "hdfs", | See Set Base Storage Layer. |

- Save your changes.
Define Script Action for domain-joined clusters
If you are integrating with a domain-joined cluster, you must specify a script action to set some permissions on cluster directories.
For more information, see https://docs.microsoft.com/en-us/azure/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.
Steps:
- In the Advanced Settings tab of the cluster specification, click Script actions.
In the textbox, insert the following URL:

```
https://raw.githubusercontent.com/trifacta/azure-deploy/master/bin/set-key-permissions.sh
```
- Save your changes.
Configure the Platform
These changes must be applied after the Trifacta platform has been installed.
Perform base configuration for HDI
Configure High Availability
If you are integrating the platform with an HDI cluster that has high availability enabled, please complete the following steps so that the platform is aware of the failover nodes.
Steps:
In the platform configuration, enable the high availability feature for the namenode and resource manager nodes:

```
"feature.highAvailability.namenode": true,
"feature.highAvailability.resourceManager": true,
```
For each YARN resource manager, you must configure its high availability settings. The following are two example node configurations, including the default port numbers for HDI:
Tip: Host and port settings should be available in the cluster configuration files you copied to the Trifacta node, or you can acquire the settings through the cluster's admin console.

```
"yarn": {
  "highAvailability": {
    "resourceManagers": {
      "rm1": {
        "port": <your_cluster_rm1_port>,
        "schedulerPort": <your_cluster_rm1_scheduler_port>,
        "adminPort": <your_cluster_rm1_admin_port>,
        "webappPort": <your_cluster_rm1_webapp_port>
      },
      "rm2": {
        "port": <your_cluster_rm2_port>,
        "schedulerPort": <your_cluster_rm2_scheduler_port>,
        "adminPort": <your_cluster_rm2_admin_port>,
        "webappPort": <your_cluster_rm2_webapp_port>
      }
    }
  }
},
```
Configure the high availability namenodes. The following example configures two namenodes (nn1 and nn2), including the default port numbers for HDI:

Tip: Host and port settings should be available in the cluster configuration files you copied to the Trifacta node, or you can acquire the settings through the cluster's admin console.

```
"hdfs": {
  ...
  "highAvailability": {
    "namenodes": {
      "nn1": {
        "port": <your_cluster_namenode1_port>
      },
      "nn2": {
        "port": <your_cluster_namenode2_port>
      }
    }
  },
  ...
```
- Save your changes.
NOTE: If you are deploying high availability failover, you must use HttpFS, instead of WebHDFS, for communicating with HDI. Additional configuration is required. HA in a Kerberized environment for HDI is not supported. See Enable Integration with Cluster High Availability.
Create Hive connection
Limitations:
- The platform only supports HTTP connections to Hive on HDI. TCP connections are not supported.
- The Hive port must be set to 10001 for HTTP.
For more information, see Create Hive Connections.
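As an illustration of these limitations, a HiveServer2 connection in HTTP mode typically uses a JDBC URL of the following form, which can be verified from a cluster node with the standard beeline client. The host, database, and httpPath values are assumptions; adjust them for your cluster:

```
# Illustrative only: connect to HiveServer2 over HTTP on port 10001.
beeline -u "jdbc:hive2://<hive-host>:10001/default;transportMode=http;httpPath=cliservice"
```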
Hive integration requires additional configuration.
NOTE: Natively, HDI supports high availability for Hive via a Zookeeper quorum.
For more information, see Configure for Hive.
Configure for Spark Profiling
If you are using Spark for profiling, you must add environment properties to your cluster configuration. See Configure for Spark.
Configure for UDFs
If you are using user-defined functions (UDFs) on your HDInsight cluster, additional configuration is required. See Java UDFs.
Configure Storage
Before you begin running jobs, you must specify your base storage layer, which can be WASB or ADLS Gen1. For more information, see Set Base Storage Layer.
Additional configuration is required:

- For WASB, see Enable WASB Access.
- For ADLS Gen1, see Enable ADLS Gen1 Access.
Starting the Platform
NOTE: In an Azure HDI environment, you must perform platform start and stop operations from the Trifacta node.