
This section provides additional configuration requirements for integrating the Trifacta platform with the Hortonworks Data Platform.

Info

NOTE: Except as noted, the following configuration items apply to the latest supported version of Hortonworks Data Platform.

Pre-requisites

Before you begin, you should have completed the following tasks:

  1. Successfully installed a supported version of Hortonworks Data Platform into your enterprise infrastructure.
  2. Installed the Trifacta software in your environment. For more information, see Install Process for On-Premises.
  3. Reviewed the mechanics of platform configuration. See Required Platform Configuration.
  4. Configured access to the Trifacta databases. See Configure the Databases.
  5. Performed the basic Hadoop integration configuration. See Configure for Hadoop.
  6. Verified that you have access to platform configuration, either on the Trifacta node or through the Admin Settings page.

Hortonworks Cluster Configuration

The following changes need to be applied to Hortonworks cluster configuration files or to configuration areas inside Ambari.

Tip

Tip: Ambari is the recommended method for configuring your Hortonworks cluster.

Configure for Ranger

Configure Ranger to use Kerberos

If you have deployed Ranger in a Kerberized environment, you must verify and complete the following changes in Ambari.

Steps: 

  1. If you have enabled Ranger, navigate to Configs > Settings.
    1. Set Authorization to Ranger.
    2. Set HiveServer2 Authentication to Kerberos.
  2. If you have enabled Ranger and Hive, navigate to Configs > Advanced > General.
    1. hive.security.authorization.manager: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
  3. If you have enabled Hive, navigate to Configs > Advanced > Advanced hive-site.
    1. hive.security.authenticator.manager: org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
    2. hive.metastore.sasl.enabled: true
    3. hive.conf.restricted.list: hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role,hive.security.authorization.enabled
  4. If you have enabled Hive, navigate to Configs > Advanced > Custom hive-site.
    1. hadoop.proxyuser.trifacta.groups: the Hadoop group assigned to the trifacta user (by default, trifacta)
    2. hadoop.proxyuser.trifacta.hosts: *
    3. hive2.jdbc.url: <your_jdbc_url>
  5. Save your configuration changes.
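
For reference, the properties above are persisted to hive-site.xml on the cluster. The following sketch shows how the key entries would appear there; the values are illustrative, and you should manage these settings through Ambari rather than editing the file directly:

Code Block
<!-- Illustrative hive-site.xml equivalents of the Ambari settings above.
     Manage these through Ambari; do not edit the file by hand. -->
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.proxyuser.trifacta.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.trifacta.groups</name>
  <!-- assumption: trifacta is the default group for the trifacta user -->
  <value>trifacta</value>
</property>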

Configure for Spark Profiling

If you are using Spark for profiling, you must add environment properties to your cluster configuration. See Configure for Spark.

Additional configuration for Spark profiling on S3

If you are using S3 as your datastore and have enabled Spark profiling, you must apply the following configuration, which adds the hadoop-aws JAR and the aws-java-sdk JAR to the extra class path for Spark. 

Steps:

  1. In Ambari, navigate to Spark > Configs > Advanced.
  2. Add a new parameter to Custom spark-defaults.
  3. Set the parameter as follows. This example is specific to HDP 2.5.3.0, build 37; adjust the paths for your build (see the sketch after these steps):

    Code Block
    spark.driver.extraClassPath=/usr/hdp/2.5.3.0-37/hadoop/hadoop-aws-2.7.3.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/aws-java-sdk-s3-1.10.6.jar
  4. Restart Spark from Ambari.
  5. Restart the Trifacta node.
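
The JAR names and paths in step 3 vary by HDP build. A quick way to locate the correct files on a cluster node (this sketch assumes the standard HDP layout with the /usr/hdp/current symlinks):

Code Block
# Locate the hadoop-aws and aws-java-sdk JARs for your HDP build:
ls /usr/hdp/current/hadoop-client/hadoop-aws-*.jar
ls /usr/hdp/current/hadoop-client/lib/aws-java-sdk*.jar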

Set up directory permissions

On all Hortonworks cluster nodes, verify that the yarn user has access to the YARN working directories, applying ownership changes if needed:

Code Block
chown yarn:hadoop /mnt/hadoop/yarn
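
To confirm the change, you can run a quick check as the yarn user (a sanity check only; the path assumes the default YARN local directory shown above):

Code Block
# Verify ownership and confirm that the yarn user can write to the directory:
ls -ld /mnt/hadoop/yarn
sudo -u yarn touch /mnt/hadoop/yarn/.write_test && sudo -u yarn rm /mnt/hadoop/yarn/.write_test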

If you are upgrading from a previous version of Hortonworks, you may need to clear the YARN user cache for the trifacta user:

Code Block
rm -rf /mnt/hadoop/yarn/local/usercache/trifacta

Configure the Trifacta platform

The following changes need to be applied to the Trifacta node.

Except as noted, these changes are applied to the following file in the Trifacta deployment:

/opt/trifacta/conf/trifacta-conf.json

Configure WebHDFS port

  1. You can apply this change through the Admin Settings page or by editing trifacta-conf.json on the Trifacta node.

  2. Verify that the port number for WebHDFS is correct:

    Code Block
    "webhdfs.port": <webhdfs_port_num>,
  3. Save your changes.
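
For a default HDP 2.x cluster, WebHDFS is served on the NameNode HTTP port, so the entry typically looks like the following (50070 is the HDP 2.x default; verify the value against your cluster):

Code Block
"webhdfs.port": 50070,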

Configure Resource Manager port

Hortonworks uses a custom port number for Resource Manager. You must update the setting for the port number used by Resource Manager.

You can apply this change through the Admin Settings page or by editing trifacta-conf.json on the Trifacta node.

Info

NOTE: By default, Hortonworks uses port 8050 for the Resource Manager. Verify the port number in use on your cluster.
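
You can confirm the port in use by inspecting the yarn.resourcemanager.address property on a cluster node (this check assumes the standard HDP client configuration location):

Code Block
# The port is the value after the colon in yarn.resourcemanager.address:
grep -A 1 "yarn.resourcemanager.address" /etc/hadoop/conf/yarn-site.xml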

Code Block
"yarn.resourcemanager.port": 8032,

Save your changes.

Configure location of Hadoop bundle JAR

  1. Set the value for the Hadoop bundle JAR to the appropriate distribution. The following is for Hortonworks 2.6:

    Code Block
    "hadoopBundleJar": "hadoop-deps/hdp-2.6/build/libs/hdp-2.6-bundle.jar"
  2. Save your changes.
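
For other supported HDP versions, the path follows the same pattern. For example, a Hortonworks 2.5 deployment would presumably use the following; substitute the version that matches your cluster:

Code Block
"hadoopBundleJar": "hadoop-deps/hdp-2.5/build/libs/hdp-2.5-bundle.jar"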

Configure Hive Locations

If you are integrating with Hive on the Hadoop cluster, some distribution-specific parameters must be set. For more information, see Configure for Hive.

Restart

To apply your changes, restart the platform. See Start and Stop the Platform.
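
On a typical on-premises installation, the restart is performed with the service command (a sketch; see Start and Stop the Platform for the authoritative procedure for your environment):

Code Block
sudo service trifacta restart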

After restart, you should verify operations. For more information, see Verify Operations.