This section describes how to enable the platform to read sources in Hive and write results back to Hive.
- A Hive source is a single table in a selected Hive database.
- Apache Hive is a data warehouse system for managing queries against large datasets distributed across a Hadoop cluster. Queries are managed using HiveQL, a SQL-like querying language. See https://hive.apache.org/.
- The platform can publish results to Hive as part of any normal job or on an ad-hoc basis for supported output formats.
- Hive is also used by the platform to publish metadata results. This capability shares the same configuration described below.
Supported Versions:

| Hive Version | Master namenode | Notes |
|---|---|---|
| Hive 1.x | HiveServer2 | All supported Hadoop deployments |
| Hive 2.x | HiveServer2 (Interactive) | Supported on HDP 2.6 only. |
Pre-requisites

- HiveServer2 and your Hive databases must already be installed in your Hadoop cluster. NOTE: For JDBC interactions, the platform supports HiveServer2 only.
- You have verified that Hive is working correctly (a connectivity sketch follows this list).
- You have acquired and deployed the hive-site.xml configuration file. See Configure for Hadoop.
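The check below is a minimal connectivity sketch, not part of platform configuration. It assumes the pyhive Python package is installed and that HiveServer2 listens on its default port (10000); the hostname and username are placeholders for your environment.

```python
# Minimal HiveServer2 connectivity check (assumes the pyhive package).
# Hostname, port, and username below are placeholders for your cluster.
from pyhive import hive

conn = hive.connect(host="hiveserver2.example.com", port=10000, username="hive_user")
cursor = conn.cursor()

# List the databases visible to the connecting user.
cursor.execute("SHOW DATABASES")
for (database,) in cursor.fetchall():
    print(database)

cursor.close()
conn.close()
```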
Limitations

- Only one global connection to Hive is supported.
- Changes to the underlying Hive schema are not picked up by the platform and will break the source and any datasets that use it.
- During import, the JDBC data types from Hive are converted to the platform's data types. When data is written back to Hive, the original Hive data types may not be preserved. For more information, see Type Conversions.
- Publishing to unmanaged tables in Hive is supported, except for the following actions:
  - Create table
  - Drop & load table
- Publishing to partitioned tables in Hive is supported.
  - The schema of the results and the schema of the partitioned table must be the same. If they do not match, you may see a SchemaMismatched exception error in the UI. You can try a drop and republish action on the data; however, the newly generated table does not have partitions. A sketch for comparing schemas before republishing appears after the note below.
  - For errors publishing to partitioned columns, additional information may be available in the logs.
NOTE: Running user-defined functions for an external service, such as Hive, is not supported from within a recipe step. As a workaround, you may be able to execute recipes containing such external UDFs on the Photon running environment. Performance issues should be expected on larger datasets.
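The following sketch is one way to compare the target partitioned table's schema with the schema of your results before republishing. It assumes pyhive connectivity as in the earlier sketch; the table name and expected columns are placeholders.

```python
# Compare the Hive table schema against the expected result schema
# (assumes pyhive; table and column names are placeholders).
from pyhive import hive

conn = hive.connect(host="hiveserver2.example.com", port=10000, username="hive_user")
cursor = conn.cursor()

# DESCRIBE returns (col_name, data_type, comment) rows; for a partitioned
# table, a "# Partition Information" section follows the column list.
cursor.execute("DESCRIBE sales_partitioned")
table_columns = []
for col_name, data_type, _ in cursor.fetchall():
    if not col_name or col_name.startswith("#"):
        break  # stop at the partition-information section
    table_columns.append((col_name.strip(), data_type.strip()))

# Schema of the results you intend to publish (placeholder values).
expected_columns = [("order_id", "bigint"), ("amount", "double"), ("region", "string")]
if table_columns != expected_columns:
    print("Schema mismatch:", table_columns, "vs", expected_columns)
```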
Configure for Hive

Hive user

The user with which Hive connects to read from the backend datastore should be a member of the user group or whatever group is used to access storage from the platform. Verify that the Unix or LDAP group has read access to the Hive warehouse directory. A permission check on the warehouse directory is sketched below.

Hive user for Spark: NOTE: If you are executing jobs in the Spark running environment, additional permissions may be required. If the Hive source is a reference or references to files stored elsewhere in backend storage, the Hive user or its group must have read and execute permissions on the source directories or files.
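As a quick way to confirm group read access, you can list the warehouse directory from a node where the hdfs client is available. The sketch below assumes the default warehouse location (/user/hive/warehouse); adjust the path if your cluster uses a different one.

```python
# List the Hive warehouse directory to confirm ownership and group permissions
# (assumes the hdfs CLI is available; the path is the default warehouse location).
import subprocess

warehouse_dir = "/user/hive/warehouse"  # adjust for your cluster

result = subprocess.run(
    ["hdfs", "dfs", "-ls", warehouse_dir],
    capture_output=True, text=True, check=True,
)
# The group shown in the listing must have read access for the Hive user.
print(result.stdout)
```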
Enable Data Service

In platform configuration, the data service must be enabled. Please verify the following:

"data-service.enabled": true,
Locate the Hive JDBC JAR

In platform configuration, you must verify that the following parameter points to the proper location for the Hive JDBC JAR file. The example below identifies the location for Cloudera 5.10. NOTE: This parameter varies for each supported distribution and version.

"data-service.hiveJdbcJar": "hadoop-deps/cdh-5.10/build/libs/cdh-5.10-hive-jdbc.jar",
Enable Hive Support for Spark Job Service

If you are using the Spark running environment for execution and profiling jobs, you must enable Hive support within the Spark Job Service configuration block. NOTE: The Spark running environment is the default running environment. When this change is made, the platform requires that a valid hive-site.xml cluster configuration file be installed (see Configure for Hadoop). An illustrative Spark-side sketch follows the steps below.
Steps:

- Locate the following setting and verify that it is set to true:

  "spark-job-service.enableHiveSupport" : true,

- Modify the following parameter to point to the location where Hive dependencies are installed. This example points to the location for Cloudera 5.10. NOTE: This parameter value is distribution-specific. Please update based on your specific distribution.

  "spark-job-service.hiveDependenciesLocation":"%(topOfTree)s/hadoop-deps/cdh-5.10/build/libs",
- Save your changes.
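For reference, the enableHiveSupport setting corresponds to Hive support on a Spark session. The sketch below is illustrative only and is not platform code; it assumes a client machine where PySpark is installed and hive-site.xml is visible to Spark.

```python
# Illustration of Spark-side Hive support (not platform code).
# Assumes PySpark is installed and hive-site.xml is visible to Spark.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-support-check")
    .enableHiveSupport()  # lets Spark SQL read tables registered in the Hive metastore
    .getOrCreate()
)

# If Hive support is wired up correctly, Hive databases are visible here.
spark.sql("SHOW DATABASES").show()
```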
Enable Hive Database Access for Spark Job Service

The Spark Job Service requires read access to the Hive databases. Please verify that the Spark user can access the required Hive databases and tables. For more information, please contact your Hive administrator.

Configure managed table format

The platform publishes to Hive using managed tables. When writing to Hive, the platform pushes results to an external staging table. Then, from this staging table, the platform selects and inserts the data into a managed table. By default, the platform publishes to managed tables in Parquet format. As needed, you can modify platform configuration to change the format to which the platform writes when publishing a managed table. A sketch of this staging pattern follows these steps.

Steps:

- Locate the following parameter and modify it to use the desired format value:

  "data-service.hiveManagedTableFormat": "PARQUET",
- Save your changes and restart the platform.
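The sketch below illustrates the staging pattern described above: results land in an external staging table and are then selected into a managed table stored as Parquet. It is only an illustration of the pattern, not the platform's internal implementation; it assumes pyhive connectivity, and the table names, columns, and location are placeholders.

```python
# Staging pattern: external staging table -> managed (Parquet) table.
# Assumes pyhive; names, columns, and locations are placeholders.
from pyhive import hive

conn = hive.connect(host="hiveserver2.example.com", port=10000, username="hive_user")
cursor = conn.cursor()

# 1. External staging table over files already written to the cluster.
cursor.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS results_staging (id BIGINT, name STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/tmp/results_staging'
""")

# 2. Managed table in the configured format (Parquet by default).
cursor.execute("""
    CREATE TABLE IF NOT EXISTS results_managed (id BIGINT, name STRING)
    STORED AS PARQUET
""")

# 3. Select from the staging table and insert into the managed table.
cursor.execute("INSERT INTO TABLE results_managed SELECT id, name FROM results_staging")
```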
Additional configuration for HDP 3.x

If you are integrating with an HDP 3.x cluster, add the following value to the Spark Job Service classpath:

%(topOfTree)s/etc/hive/conf/

Example (No LLAP or Hive Warehouse):

"classpath": "%(topOfTree)s/etc/hive/conf:%(topOfTree)s/services/spark-job-server/server/build/libs/spark-job-server-bundle.jar:%(sparkBundleJar)s:/etc/hadoop/conf/:/etc/hive/conf/:%(topOfTree)s/%(hadoopBundleJar)s"
- Save your changes. Before restarting the platform, please review the following section.
Additional configuration for Hive 3.0 on HDP 3.x

NOTE: Hive 3.0 is supported only on Hortonworks HDP 3.x using the Hive Warehouse Connector to read from Hive.
Tables in Hive 3.0 are ACID-compliant, transactional tables. Since Spark cannot natively read transactional tables, the platform must utilize the Hive Warehouse Connector to query the Hive 3.0 datastore for tabular data. NOTE: If Ranger is deployed on the cluster, Spark respects any column- or row-level security that Ranger enforces on the Hive tables. Queries for unauthorized data in a table fail in the platform.
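For illustration, the sketch below shows what reading a Hive 3 table through the Hive Warehouse Connector looks like at the Spark API level; the steps that follow configure the platform's Spark Job Service to do the equivalent. It assumes an HDP 3.x Spark client with the HWC JAR and the pyspark_llap module available, plus the HiveServer2 Interactive and LLAP settings shown in the spark.props step below. The database and table names are placeholders.

```python
# Illustrative HWC read from Spark (not platform code). Requires the HWC JAR
# and pyspark_llap on the Spark classpath/PYTHONPATH, plus the LLAP and
# HiveServer2 Interactive settings shown in the steps below.
from pyspark.sql import SparkSession
from pyspark_llap import HiveWarehouseSession

spark = SparkSession.builder.appName("hwc-read-check").getOrCreate()

# Queries go through HiveServer2 Interactive (LLAP), so Ranger column- and
# row-level policies on the Hive tables are enforced.
hive = HiveWarehouseSession.session(spark).build()

# Read from a transactional Hive 3 table (placeholder name).
df = hive.executeQuery("SELECT * FROM sales_db.orders LIMIT 10")
df.show()
```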
Please complete the following steps to integrate the platform with Hive 3.0 through HDP 3.x and LLAP. NOTE: Before you begin, please verify that you have performed the extra configuration for using Spark on HDP 3.x. For more information, see Configure for Spark.
Steps: 
Enable use of the Hive Warehouse Connector:

"spark-job-service.useHiveWarehouseConnector": true
Add the Hive Warehouse Connector to the Spark Job Service classpath. NOTE: If you have already configured for HDP 3.x, then the %(sparkBundleJar)s update below may have already been added. Example:

"classpath": "%(topOfTree)s/services/spark-job-server/server/build/libs/spark-job-server-bundle.jar:%(sparkBundleJar)s:/etc/hadoop/conf/:/etc/hive/conf/:%(topOfTree)s/%(hadoopBundleJar)s:/usr/hdp/current/hive_warehouse_connector/*"
The following properties and values must be inserted in the spark.props section. NOTE: These properties must be added to the configuration. They cannot be read from Ambari.

"spark.datasource.hive.warehouse.load.staging.dir": "/tmp",
"spark.datasource.hive.warehouse.metastoreUri": "thrift://hdp30.example:9083",
"spark.driver.extraLibraryPath": "/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64",
"spark.executor.extraJavaOptions": "-XX:+UseNUMA",
"spark.executor.extraLibraryPath": "/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64",
"spark.hadoop.hive.llap.daemon.service.hosts": "@llap0",
"spark.hadoop.hive.zookeeper.quorum": "hdp30.example:2181",
"spark.sql.hive.hiveserver2.jdbc.url": "jdbc:hive2://hdp30.example:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive",
"spark.sql.hive.hiveserver2.jdbc.url.principal": "hive/_HOST@HORTONWORKS",
"spark.yarn.security.credentials.hiveserver2.enabled": "true",
"spark.yarn.jars": "local:/usr/hdp/current/spark2-client/jars/*",
"spark.driver.extraClassPath": "/usr/hdp/current/spark2-client/jars/guava-14.0.1.jar",
"spark.executor.extraClassPath": "/usr/hdp/current/spark2-client/jars/guava-14.0.1.jar"
The properties listed below require information from your HDP cluster. For the other properties, please use the listed values, unless otherwise required.

| Property | Description |
|---|---|
| "spark.datasource.hive.warehouse.metastoreUri" | URI for the Hive metastore. Copy the value from hive.metastore.uris. Example value: thrift://mycluster-1.com:9083 |
| "spark.hadoop.hive.zookeeper.quorum" | A list of Zookeeper hosts used by LLAP. Copy the value from Advanced hive-site in Ambari: hive.zookeeper.quorum |
| "spark.sql.hive.hiveserver2.jdbc.url" | The URL for HiveServer2 Interactive. In Ambari, copy the value from the following: Services > Hive > Summary > HIVESERVER2 INTERACTIVE JDBC URL. |
| "spark.sql.hive.hiveserver2.jdbc.url.principal" | This property must be equal to hive.server2.authentication.kerberos.principal. In Ambari, copy the value from the hive.server2.authentication.kerberos.principal property. |
For more information on these properties, see https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/integrating-hive/content/hive_configure_a_spark_hive_connection.html.

- Save your changes and restart the platform.
Create Hive Connection

NOTE: High availability for Hive is supported through configuration of the Hive connection.
For more information, see Create Hive Connections.

Optional Configuration

Depending on your Hadoop environment, you may need to perform additional configuration to enable connectivity with your Hadoop cluster.

Additional Configuration for Secure Environments