The platform supports integration with a number of Hadoop distributions, using a range of components within each distribution. This page describes the configuration tasks that you must complete to integrate the platform with your Hadoop environment.
Base storage layer: You must configure one storage platform to be the base storage layer. Details are described later.
NOTE: Some deployments require that you select a specific base storage layer. After you have defined the base storage layer, it cannot be changed. Please review your Storage Deployment Options carefully. The required configuration is described later.
After the platform and its databases have been installed, you can perform platform configuration.
NOTE: Some platform configuration is required, regardless of your deployment. See Required Platform Configuration.
NOTE: Where possible, you should define or select a user with a userID value greater than 1000. In some environments, lower userID values can result in failures when running jobs on Hadoop.
Set the Hadoop username for the platform to use when executing jobs:

"hdfs.username": "[hadoop.user]",
If the platform is installed in a Kerberos environment, additional steps are required, which are described later.
The sections below pose a series of questions about the Hadoop environment with which the platform is integrating. Based on your answers, additional configuration may be required.
The platform supports access to the following Hadoop storage layers: HDFS and S3.
At this time, you should define the base storage layer from the platform. See Set Base Storage Layer.
Required configuration for each type of storage is described below.
If output files are to be written to an HDFS environment, you must configure the platform to interact with HDFS.
Below, replace the value for [hadoop.user] with the value appropriate for your environment.
"hdfs": { "username": "[hadoop.user]", ... "namenode": { "host": "hdfs.example.com", "port": 8080 }, }, |
Parameter | Description
---|---
username | Username in the Hadoop cluster to be used by the platform.
namenode.host | Hostname of the namenode in the Hadoop cluster. You may reference multiple namenodes.
namenode.port | Port to use to access the namenode. You may reference multiple namenodes.
Individual users can configure the HDFS directory where exported results are stored.
NOTE: Multiple users cannot share the same home directory.
See Storage Config Page.
Access to HDFS is supported over one of the following protocols:
WebHDFS
If you are using HDFS, it is assumed that WebHDFS has been enabled on the cluster. Apache WebHDFS enables access to an HDFS instance over HTTP REST APIs. For more information, see https://hadoop.apache.org/docs/r1.0.4/webhdfs.html.
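For reference, you can confirm that WebHDFS is reachable with a simple REST call. The following is a sketch only: substitute your own namenode host, WebHDFS port, and HDFS path, and add any authentication parameters your cluster requires.

# List the contents of an HDFS directory over the WebHDFS REST API.
curl -i "http://hdfs.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS"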
The following properties can be modified:
"webhdfs": { ... "version": "/webhdfs/v1", "host": "", "port": 50070, "httpfs": false }, |
Parameter | Description
---|---
version | Path to the locally installed version of WebHDFS.
host | Hostname of the WebHDFS service.
port | Port number for WebHDFS. The default value is 50070.
httpfs | To use HttpFS instead of WebHDFS, set this value to true. The port number must be changed. See HttpFS below.
Steps:
1. Set webhdfs.host to be the hostname of the node that hosts WebHDFS.
2. Set webhdfs.port to be the port number over which WebHDFS communicates. The default value is 50070. For SSL, the default value is 50470.
3. Set webhdfs.httpfs to false.
4. In hdfs.namenodes, you must set the host and port values to point to the active namenode for WebHDFS.

HttpFS
You can configure the platform to use the HttpFS service to communicate with HDFS, in addition to WebHDFS.
NOTE: HttpFS serves as a proxy to WebHDFS. When HttpFS is enabled, both services are required.
In some cases, HttpFS is required. For example, if you are integrating with the Hadoop cluster's high availability configuration, you must use HttpFS instead of WebHDFS, as noted later on this page. If your environment requires HttpFS, you must enable it. For more information, see Enable HttpFS.
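For example, a minimal sketch of the webhdfs block with HttpFS enabled. The port shown assumes the common HttpFS default of 14000; confirm the correct port for your cluster in Enable HttpFS.

"webhdfs": {
  ...
  "port": 14000,
  "httpfs": true
},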
S3
The platform can integrate with an S3 bucket. See Enable S3 Access.
Configure the following properties to point to the resource manager for your Hadoop cluster:

"yarn.resourcemanager.host": "hadoop",
"yarn.resourcemanager.port": 8032,
NOTE: Do not modify the other host/port settings unless you have specific information requiring the modifications.
For more information, see System Ports.
For smaller datasets, the platform recommends using the running environment.
For larger datasets, or if size information is unavailable, the platform by default recommends running the job on the Hadoop cluster. For these jobs, the default publishing action is set to run on the Hadoop cluster, generating output in the format defined by this parameter. Publishing actions, including output format, can always be changed as part of the job specification.
As needed, you can change this default format.
"webapp.defaultHadoopFileFormat": "csv",
Accepted values: csv, json, avro, pqt
For more information, see Run Job Page.
The platform ships with client bundles supporting a number of major Hadoop distributions. You must configure the jarfile for the distribution to use. These distributions are stored in the following directory:

/opt/trifacta/hadoop-deps

Configure the bundle distribution property (hadoopBundleJar) in platform configuration. Examples:
Hadoop Distribution | hadoopBundleJar property value
---|---
Cloudera | "hadoop-deps/cdh-x.y/build/libs/cdh-x.y-bundle.jar"
Hortonworks | "hadoop-deps/hdp-x.y/build/libs/hdp-x.y-bundle.jar"
where x.y is the major-minor build number (e.g., 5.4).

NOTE: The path must be specified relative to the install directory.
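For example, a minimal sketch of the property as it might appear in platform configuration, assuming a Hortonworks 2.6 deployment (substitute the bundle jarfile that matches your distribution and version):

"hadoopBundleJar": "hadoop-deps/hdp-2.6/build/libs/hdp-2.6-bundle.jar",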
The platform supports integration with Kerberos security and can utilize Kerberos secure impersonation to broker interactions with the Hadoop environment.
See Configure for Kerberos Integration.
See Configure for Secure Impersonation.
The platform can integrate with your SSO solution to manage authentication to the platform. See Configure SSO for AD-LDAP.
If you are using Hadoop KMS to encrypt data transfers to and from the Hadoop cluster, additional configuration is required. See Configure for KMS.
Tip: If there is no bundle for the distribution you need, you can try the one that is the closest match in terms of the Apache Hadoop baseline. For example, CDH 5 is based on Apache Hadoop 2.3.0, so that client bundle is likely to run correctly against a vanilla Apache Hadoop 2.3.0 installation.
For Cloudera deployments, some additional configuration is required. See Configure for Cloudera.
After install, integration with the Hortonworks Data Platform requires additional configuration. See Configure for Hortonworks.
Apache Hive is a data warehouse service for querying and managing large datasets in a Hadoop environment using a SQL-like querying language. For more information, see https://hive.apache.org/.
See Configure for Hive.
You can integrate the platform with the Hadoop cluster's high availability configuration, so that the platform can match the failover configuration of the cluster.
NOTE: If you are deploying high availability failover, you must use HttpFS, instead of WebHDFS, for communicating with HDFS, as described in a previous section.
For more information, see Enable Integration with Cluster High Availability.
To enable the platform to use your YARN installation, you must provide a set of client *-site.xml files:
core-site.xml
hdfs-site.xml
httpfs-site.xml
mapred-site.xml
yarn-site.xml
hive-site.xml
NOTE: The hive-site.xml file is required if you are integrating with Hive, using the Spark running environment, or both. For more information, see Configure for Hadoop.
NOTE: If these configuration files change in the Hadoop cluster, the versions installed on the platform node must be updated to match.
For CDH 5:
In Cloudera Manager, select Actions > Download Client Configuration.
Configuration files are also available in /etc/hadoop/conf
on any cluster edge node.
For HDP 2:
Client configuration files can be retrieved from an existing client node. Acquire *-site.xml files from /etc/hadoop/conf.
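For example, a minimal sketch for staging the files, assuming a reachable edge node named cluster-edge and a temporary staging directory on the platform node (both hypothetical):

# Stage the Hadoop client configuration files on the platform node.
mkdir -p /tmp/hadoop-conf
scp 'cluster-edge:/etc/hadoop/conf/*-site.xml' /tmp/hadoop-conf/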
If you are using Hortonworks, you must complete the following modification to the site configuration file that is hosted on the platform node.
NOTE: Before you begin, you must acquire the full version and build number of your Hortonworks distribution (in the form A.B.C.D-XXXX). On any of the Hadoop nodes, you can find it in the names of the versioned Hortonworks installation directories.
On the platform node, edit the following file:
/opt/trifacta/conf/hadoop-site/mapred-site.xml
Perform the following global search and replace:
Search:
${hdp.version}
Replace with your hard-coded version and build number:
A.B.C.D-XXXX
Save the file.
Restart the platform.
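As an alternative to performing the search and replace manually, a sed command along these lines could make the same change before the restart. This is a sketch only; substitute your actual version and build number for A.B.C.D-XXXX.

# Replace every occurrence of ${hdp.version} with the hard-coded version string.
sed -i 's/\${hdp\.version}/A.B.C.D-XXXX/g' /opt/trifacta/conf/hadoop-site/mapred-site.xml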
For YARN:
YARN maintains its site configuration files in a similar location. These XML files should be retrieved, too.
After you've collected the Hadoop client configuration, copy all *-site.xml
files to the following:
<installation root>/conf/hadoop-site/
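For example, a minimal sketch of the copy step, assuming the files were staged in /tmp/hadoop-conf (hypothetical) and that the install root is /opt/trifacta:

# Copy all Hadoop client configuration files into the platform's hadoop-site directory.
cp /tmp/hadoop-conf/*-site.xml /opt/trifacta/conf/hadoop-site/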
Restart services. See Start and Stop the Platform.
If you are publishing using Snappy compression, you may need to perform the following additional configuration.
Steps:
Verify that the snappy and snappy-devel packages have been installed on the platform node. For more information, see https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/NativeLibraries.html.
From the platform node, execute the following command:
hadoop checknative
In the output of this command, locate the reference to the libsnappy.so file. Verify that this file has been installed on all nodes of the cluster, including the platform node.
Locate the spark.props configuration block. Insert the following properties and values inside the block:
"spark.driver.extraLibraryPath": "/path/to/file",
"spark.executor.extraLibraryPath": "/path/to/file",
Verify on the platform node that the following locations are available:
NOTE: The asterisk below is a wildcard. Be sure to include the entire path for both values.
/hadoop-client/lib/snappy*.jar
/hadoop-client/lib/native/
Locate the spark.props
configuration block. Insert the following properties and values inside the block:
"spark.driver.extraLibraryPath": "/hadoop-client/lib/snappy*.jar;/hadoop-client/lib/native/",
"spark.executor.extraLibraryPath": "/hadoop-client/lib/snappy*.jar;/hadoop-client/lib/native/",
Verify that the /tmp directory has the proper permissions for publication. For more information, see Supported File Formats.

You can review system services and download log files through the platform.