This section provides additional configuration requirements for integrating the platform with the Hortonworks Data Platform (HDP).
- This section applies only to the versions of HDP that the platform supports. For more information, see Supported Deployment Scenarios for Hortonworks.
NOTE: Except as noted, the following configuration items apply to the latest supported version of Hortonworks Data Platform.
Prerequisites
Before you begin, it is assumed that you have completed the following tasks:
- Successfully installed a supported version of Hortonworks Data Platform into your enterprise infrastructure.
- Installed the platform software in your environment. For more information, see Install Process for On-Premises.
- Reviewed the mechanics of platform configuration. See Required Platform Configuration.
- Configured access to the platform databases. See Configure the Databases.
- Performed the basic Hadoop integration configuration. See Configure for Hadoop.
You have access to platform configuration, either on the platform node or through the Admin Settings page.
Hortonworks Cluster Configuration
The following changes need to be applied to Hortonworks cluster configuration files or to configuration areas inside Ambari.
Tip: Ambari is the recommended method for configuring your Hortonworks cluster.
Configure for Ranger
Configure Ranger to use Kerberos
If you have deployed Ranger in a Kerberized environment, you must verify and complete the following changes in Ambari.
Steps:
- If you have enabled Ranger, navigate to Configs > Settings.
- Choose Authorization: Ranger.
- HiveServer2 Authentication: Kerberos.
- If you have enabled Ranger and Hive, navigate to Configs > Advanced > General.
- hive.security.authorization.manager: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
- If you have enabled Hive, navigate to Configs > Advanced > Advanced hive-site.
- hive.security.authenticator.manager: org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
- hive.metastore.sasl.enabled: true
- hive.conf.restricted.list: hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role,hive.security.authorization.enabled
- If you have enabled Hive, navigate to Configs > Advanced > Custom hive-site.
- hadoop.proxyuser.trifacta.groups: set this value to the Hadoop group of the user that the platform uses to run jobs on the cluster
- hadoop.proxyuser.trifacta.hosts: *
- hive2.jdbc.url: <your_jdbc_url>
- Save your configuration changes.
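If you want to confirm that the settings from the steps above reached the cluster, one option is to inspect the Hive client configuration that Ambari generates after the affected services restart. The following is a minimal sketch, assuming the usual HDP configuration path of /etc/hive/conf on the HiveServer2 node; adjust the path if your deployment differs.
# Confirm the proxyuser and authorization settings landed in the generated hive-site.xml
# (run on the HiveServer2 node after Ambari has restarted Hive):
grep -A1 "hadoop.proxyuser.trifacta" /etc/hive/conf/hive-site.xml
grep -A1 "hive.security.authorization.manager" /etc/hive/conf/hive-site.xml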
Configure for Spark Profiling
If you are using Spark for profiling, you must add environment properties to your cluster configuration. See Configure for Spark.
Additional configuration for Spark profiling on S3
If you are using S3 as your datastore and have enabled Spark profiling, you must apply the following configuration, which adds the hadoop-aws JAR and the aws-java-sdk JAR to the extra class path for Spark.
Steps:
- In Ambari, navigate to Spark > Configs > Advanced.
- Add a new parameter to custom Spark defaults.
Set the parameter as follows, which is specified for HDP 2.5.3.0, build 37 (if your build differs, see the path check after these steps):
spark.driver.extraClassPath=/usr/hdp/2.5.3.0-37/hadoop/hadoop-aws-2.7.3.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/aws-java-sdk-s3-1.10.6.jar
- Restart Spark from Ambari.
- Restart the platform.
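The JAR file names and version strings vary by HDP build. Before setting spark.driver.extraClassPath, you can confirm the actual paths on a cluster node. This is a rough sketch, assuming the standard /usr/hdp layout shown in the example above:
# List the AWS-related JARs that ship with your HDP build, then use these exact paths
# in the spark.driver.extraClassPath value:
ls /usr/hdp/*/hadoop/hadoop-aws-*.jar
ls /usr/hdp/*/hadoop/lib/aws-java-sdk*.jar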
Set up directory permissions
On all Hortonworks cluster nodes, verify that the YARN user has access to the YARN working directories.
If you are upgrading from a previous version of Hortonworks, you may need to clear the YARN user cache for the platform user.
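As a rough guide, the following sketch shows one way to perform these checks on a cluster node. It assumes the common Ambari defaults of /hadoop/yarn/local and /hadoop/yarn/log for the YARN working directories and uses a placeholder for the platform user; confirm the actual directories in Ambari under YARN > Configs before running anything.
# Verify that the yarn user can reach the working directories (paths are the usual Ambari defaults):
sudo -u yarn ls -ld /hadoop/yarn/local /hadoop/yarn/log
# After an upgrade, clear the cached entries for the user that submits jobs from the platform
# (<platform_user> is a placeholder; substitute your actual user):
sudo rm -rf /hadoop/yarn/local/usercache/<platform_user>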
Configure the platform
The following changes need to be applied to the platform.
Except as noted, these changes are applied to the platform configuration file on the platform node.
Configure WebHDFS port
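WebHDFS is served on the NameNode HTTP port. If you need to confirm which port your cluster uses before updating the platform setting, the following is a minimal sketch, assuming the standard /etc/hadoop/conf client configuration directory on a cluster node:
# Print the NameNode HTTP address (host:port) that WebHDFS listens on:
grep -A1 "dfs.namenode.http-address" /etc/hadoop/conf/hdfs-site.xml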
Configure Resource Manager port
Hortonworks uses a custom port number for Resource Manager. You must update the setting for the port number used by Resource Manager.
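If you are unsure which port your cluster uses, Ambari-managed HDP clusters commonly expose the Resource Manager on port 8050 rather than the Hadoop default of 8032. The following sketch confirms the value from the generated client configuration on a cluster node:
# Print the Resource Manager address (host:port) configured for the cluster:
grep -A1 "yarn.resourcemanager.address" /etc/hadoop/conf/yarn-site.xml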
Save your changes.
Configure location of Hadoop bundle JAR
Configure Hive Locations
If you are enabling an integration with Hive on the Hadoop cluster, there are some distribution-specific parameters that must be set. For more information, see Configure for Hive.
Restart
To apply your changes, restart the platform. See Start and Stop the Platform.
After restart, you should verify operations. For more information, see Verify Operations.