
D toc

When deployed to Microsoft Azure, the 

D s platform
rtrue
 must be integrated with Microsoft HDInsight, a Hadoop-based platform for data storage and analytics. This section describes the configuration steps required to integrate with a pre-existing HDI cluster and applies only if you have installed the
D s platform
onto a node of a pre-existing HDI cluster.
Warning

If you created a new HDI cluster as part of your deployment of the platform from the Azure Marketplace, please skip this section. You may use it as a reference in the future.

Supported Versions

This release supports integration with HDI 3.5 and HDI 3.6.

Limitations

For this release, the following limitations apply:

  • The 

    D s platform
     must be installed on Azure.

  • HDI does not support the client-server web sockets configuration used by the platform. As a result, suggestions prompted by platform activities are diminished.

Pre-requisites

This section makes the following assumptions:

  1. You have installed and configured the
    D s platform
     onto an edge node of a pre-existing HDI cluster.
  2. You have installed WebWASB on the platform edge node.

Before You Begin

Create 
D s item
itemuser
 account on HDI cluster

The 

D s platform
 interacts with the cluster through a single system user account.  A user for the platform must be added to the cluster.

UserID:

If possible, please create the user ID (

D s defaultuser
Typehdi
) as: 
D s defaultuser
Typehdi
Valuetrue
.

This user must be created on each data node in the cluster.

This user should belong to the group (

D s defaultuser
Typehdi.group
):
D s defaultuser
Typehdi.group
Valuetrue
.

User requirements:

  • (if integrating with WASB) Access to WASB
  • Permission to run YARN jobs on the cluster. 
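A minimal sketch of creating this account on a data node is shown below. The user and group names are placeholders for the values described above, and domain-joined clusters may require different user-management tooling.

Code Block
# Run on each data node. Substitute the platform user and group names.
sudo groupadd <platform_group>
sudo useradd -g <platform_group> <platform_user>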

Steps:

  1. D s config
  2. Set the cluster paths in the following locations:

    Code Block
    "hdfs.pathsConfig.fileUpload": "/trifacta/uploads",
    "hdfs.pathsConfig.dictionaries": "/trifacta/dictionaries",
    "hdfs.pathsConfig.batchResults": "/trifacta/queryResults",
    Warning

    Do not manually modify files in the trifacta/uploads directory. This directory is used for storing uploads and metadata, which may be used by multiple users. Manipulating files outside of the

    D s webapp
    can destroy other users' data. Please use the tools provided through the interface for managing uploads to WASB.

    Individual users can configure the output directory where exported results are stored. See Storage Config Page.

  3. Save your changes.

Acquire cluster configuration files  

You must acquire the configuration files from the HDI cluster and apply them to the 

D s node
.

Tip

Tip: Later, you will configure the platform settings for accessing various components in the cluster. The host, port, and other connection information is available in these cluster configuration files.


Steps:

  1. On any edge node of the cluster, acquire the .XML files from the following directory:

    Code Block
    /etc/hadoop/conf
    Info

    NOTE: If you are integrating with an instance of Hive on the cluster, you must also acquire the Hive configuration file: /etc/hive/conf/hive-site.xml.

  2. These files must be copied to the following directory on the 

    D s node
    :

    Code Block
    /trifacta/conf/hadoop-site
  3. Replace any existing files with these files. A sample copy command is shown below.
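Assuming SSH access from the cluster edge node to the D s node, the copy might look like the following. If the platform is installed on the same edge node, a local copy (cp) of the same files is sufficient.

Code Block
# Run from an edge node of the HDI cluster. <trifacta-node> is a placeholder hostname.
scp /etc/hadoop/conf/*.xml <trifacta-node>:/trifacta/conf/hadoop-site/
# If you are integrating with Hive, also copy the Hive configuration file:
scp /etc/hive/conf/hive-site.xml <trifacta-node>:/trifacta/conf/hadoop-site/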

Acquire build number

You must acquire the full version and build number of the underlying Hortonworks distribution. On any of the cluster nodes, navigate to /usr/hdp. The version and build number are referenced as a directory in this location, named in the following form:

Code Block
A.B.C.D-X
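For example, listing this directory might look like the following. The version shown is illustrative only; use the value present on your cluster.

Code Block
ls /usr/hdp
# Example output (your value will differ):
# 2.6.2.2-5  current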

For the rest of the configuration, the sample values for HDI 3.6 are referenced. Use the appropriate values for your distribution.

Supported HDInsight Distribution | Short Hortonworks Version | Example Full Hortonworks Version
3.5 | 2.5 | 2.5.6.2-9
3.6 | 2.6 | 2.6.2.2-5

Configure the HDI Cluster

The following configuration sections must be reviewed and completed. 

Specify Storage Layer

In the Azure console, you must specify and retain the type of storage to apply. In the Storage tab of the cluster specification, the following storage layers are supported.

Info

NOTE: After the base storage layer has been defined in the

D s platform
, it cannot be changed. Reinstallation is required.

Info

NOTE: If possible, you should reserve a dedicated cluster for the

D s platform
processing. If there is a mismatch between the storage layer of your existing HDI cluster and the required storage for your
D s item
itemdeployment
, you can create a new HDI cluster as part of the installation process. For more information, see Install for Azure.

Tip

Tip: During installation of the

D s platform
, you must define the base storage layer. Retain your selection of the Azure Storage Layer and its mapped base storage layer for the
D s platform
installation.

Azure Storage Layer | Description | Base Storage Layer
Azure Storage (WASB) | Azure storage leverages WASB, an abstraction layer on top of HDFS. | wasbs
Data Lake Store (ADLS) | Data Lake Store maps to ADLS in the D s platform. | hdfs
ADLS Gen2 | ADLS Gen2 storage is not supported for HDInsight clusters. | Not supported

Specify Protocol

In the Ambari console, you must specify the communication protocol to use in the cluster. 

Info

NOTE: The cluster protocol must match the protocol in use by the

D s platform
.

Steps:

  1. In the Ambari console, navigate to the following location: HDFS > Configs > Advanced > Advanced Core Site > fs.defaultFS.
  2. Set the value according to the following table:

    Azure Storage Layer | Protocol (fs.defaultFS) value | D s platform config value | Link
    Azure Storage | wasbs://<containername>@<accountname>.blob.core.windows.net | "webapp.storageProtocol": "wasbs" | See Set Base Storage Layer.
    Data Lake Store | adl://home | "webapp.storageProtocol": "hdfs" | See Set Base Storage Layer.
  3. Save your changes.
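For reference, the platform configuration value listed in the table above is applied later, when you set the base storage layer (see Set Base Storage Layer). A minimal sketch for a cluster that uses Azure Storage (WASB) looks like the following; use the value that matches your storage layer.

Code Block
"webapp.storageProtocol": "wasbs",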

Define Script Action for domain-joined clusters

If you are integrating with a domain-joined cluster, you must specify a script action to set some permissions on cluster directories. 

For more information, see https://docs.microsoft.com/en-us/azure/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.

Steps:

  1. In the Advanced Settings tab of the cluster specification, click Script actions.
  2. In the textbox, insert the following URL:

    Code Block
    https://raw.githubusercontent.com/trifacta/azure-deploy/master/bin/set-key-permissions.sh
  3. Save your changes.

Configure the Platform

These changes must be applied after the  

D s platform
has been installed.

Perform base configuration for HDI

Excerpt Include
Azure Install Base Configure Platform for HDI
Azure Install Base Configure Platform for HDI
nopaneltrue

Configure High Availability

If you are integrating the platform with an HDI cluster that has high availability enabled, please complete the following steps so that the platform is aware of the failover nodes.

Steps:

  1. D s config

  2. Enable the high availability feature on the namenode and resourceManager nodes:

    Code Block
    "feature.highAvailability.namenode": true,
    "feature.highAvailability.resourceManager": true,
  3. For each YARN resource manager, you must configure its high availability settings. The following example configures two resource managers (rm1 and rm2); replace the port placeholders with the values from your cluster:

    Tip

    Tip: Host and port settings should be available in the cluster configuration files you copied to the

    D s node
    . Or you can acquire the settings through the cluster's admin console.

    Code Block
      "yarn": {
        "highAvailability": {
          "resourceManagers": {
            "rm1": {
              "port": <your_cluster_rm1_port>,
              "schedulerPort": <your_cluster_rm1_scheduler_port>,
              "adminPort": <your_cluster_rm1_admin_port>,
              "webappPort": <your_cluster_rm1_webapp_port>
            },
            "rm2": {
              "port": <your_cluster_rm2_port>,
              "schedulerPort": <your_cluster_rm2_scheduler_port>,
              "adminPort": <your_cluster_rm2_admin_port>,
              "webappPort": <your_cluster_rm2_webapp_port>
            }
          }
        }
      },
  4. Configure the high availability namenodes. The following example configures two namenodes (nn1 and nn2); replace the port placeholders with the values from your cluster:

    Tip

    Tip: Host and port settings should be available in the cluster configuration files you copied to the

    D s node
    . Or you can acquire the settings through the cluster's admin console.

    Code Block
      "hdfs": {
        ...
        "highAvailability": {
          "namenodes": {
            "nn1": {
              "port": <your_cluster_namenode1_port>
            },
            "nn2": {
              "port": <your_cluster_namenode2_port>
            }
          }
        ...
  5. Save your changes.
Info

NOTE: If you are deploying high availability failover, you must use HttpFS, instead of WebHDFS, for communicating with HDI. Additional configuration is required. HA in a Kerberized environment for HDI is not supported. See Enable Integration with Cluster High Availability.

Create Hive connection

Limitations:

  1. The platform only supports HTTP connections to Hive on HDI. TCP connections are not supported.
  2. The Hive port must be set to 10001 for HTTP.

For more information, see Create Hive Connections.
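As an illustration, a HiveServer2 connection over HTTP typically uses a JDBC URL of the following general form. The host and database are placeholders, and the transportMode and httpPath options are standard HiveServer2 HTTP-mode parameters, not values taken from this document.

Code Block
jdbc:hive2://<hive-server-host>:10001/default;transportMode=http;httpPath=cliservice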

Hive integration requires additional configuration.

Info

NOTE: Natively, HDI supports high availability for Hive via a Zookeeper quorum.

For more information, see Configure for Hive.

Configure for Spark Profiling

If you are using Spark for profiling, you must add environment properties to your cluster configuration. See Configure for Spark.
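The specific properties and values are listed in Configure for Spark. As an illustration only, such settings take the form of standard Spark properties; the names and values below are generic examples and are not requirements taken from this document.

Code Block
spark.executor.memory=6g
spark.executor.cores=2
spark.driver.memory=4g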

Configure for UDFs

If you are using user-defined functions (UDFs) on your HDInsight cluster, additional configuration is required. See Java UDFs.

Configure Storage

Before you begin running jobs, you must specify your base storage layer, which can be WASB or ADLS. For more information, see Set Base Storage Layer.

Additional configuration is required for your selected base storage layer.

Starting the Platform

Info

NOTE: In an Azure HDI environment, you must perform platform start and stop operations from /opt/trifacta. Running these commands from other directories, such as /root, can cause service issues.
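For example, a typical stop/start sequence might look like the following. The service command shown is an assumption about a standard installation and may differ in your environment.

Code Block
cd /opt/trifacta
sudo service trifacta stop
sudo service trifacta start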