...
Restart services. See Start and Stop the Platform.
Configure Snappy publication
If you are publishing using Snappy compression, you may need to perform the following additional configuration.
Steps:
- Verify that the snappy and snappy-devel packages have been installed on the platform node. For more information, see https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/NativeLibraries.html.
- From the platform node, execute the following command:
    hadoop checknative
  The above command identifies where the native libraries are located on the platform node.
- Cloudera:
  - On the cluster, locate the libsnappy.so file. Verify that this file has been installed on all nodes of the cluster, including the platform node. Retain the path to the file on the platform node.
  - In platform configuration, locate the spark.props configuration block. Insert the following properties and values inside the block:
      "spark.driver.extraLibraryPath": "/path/to/file",
      "spark.executor.extraLibraryPath": "/path/to/file",
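As a rough illustration of the locate step, the snappy entry in the `hadoop checknative` report can be extracted programmatically. The sample output and the helper below are hypothetical; the exact report format varies by Hadoop version, so adjust the pattern to what your cluster actually prints.

```python
import re

# Hypothetical sample of `hadoop checknative` output; the exact
# format may differ between Hadoop versions.
SAMPLE_OUTPUT = """\
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4:     true revision:99
bzip2:   false
"""

def snappy_library_path(checknative_output):
    """Return the libsnappy.so path reported by `hadoop checknative`,
    or None if Snappy support is not detected."""
    for line in checknative_output.splitlines():
        m = re.match(r"snappy:\s+true\s+(\S+)", line)
        if m:
            return m.group(1)
    return None

print(snappy_library_path(SAMPLE_OUTPUT))  # /usr/lib64/libsnappy.so.1
```

The returned path is the value to retain for the extraLibraryPath properties in the next step.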
- Hortonworks:
  - Verify on the platform node that the following locations are available:
    NOTE: The asterisk below is a wildcard. Please collect the entire path of both values.
      /hadoop-client/lib/snappy*.jar
      /hadoop-client/lib/native/
  - In platform configuration, locate the spark.props configuration block. Insert the following properties and values inside the block:
      "spark.driver.extraLibraryPath": "/hadoop-client/lib/snappy*.jar;/hadoop-client/lib/native/",
      "spark.executor.extraLibraryPath": "/hadoop-client/lib/snappy*.jar;/hadoop-client/lib/native/",
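The spark.props edit for either distribution can be sketched as follows, assuming for illustration that the platform configuration is a JSON document containing a spark.props object; the surrounding structure, file layout, and existing values shown here are examples, not the actual product configuration.

```python
import json

# Illustrative library path (Hortonworks form from the steps above).
LIB_PATH = "/hadoop-client/lib/snappy*.jar;/hadoop-client/lib/native/"

def add_snappy_library_paths(config, lib_path):
    """Insert the Snappy library-path properties into the spark.props
    block, creating the block if it does not yet exist. Existing
    properties in the block are preserved."""
    props = config.setdefault("spark.props", {})
    props["spark.driver.extraLibraryPath"] = lib_path
    props["spark.executor.extraLibraryPath"] = lib_path
    return config

# Hypothetical in-memory configuration with a pre-existing property.
config = {"spark.props": {"spark.executor.memory": "6GB"}}
add_snappy_library_paths(config, LIB_PATH)
print(json.dumps(config["spark.props"], indent=2))
```

For a Cloudera deployment, substitute the retained libsnappy.so path for the library path value.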
- Save your changes and restart the platform.
- Verify that the /tmp directory has the proper permissions for publication. For more information, see Supported File Formats.
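A quick way to inspect the permission bits on /tmp is sketched below. The required mode depends on your deployment; a world-writable directory with the sticky bit (mode 1777) is a common Linux default, but confirm the actual requirement against Supported File Formats.

```python
import os
import stat

def dir_mode(path="/tmp"):
    """Return the permission bits of a directory as an octal string,
    e.g. '0o1777' for a world-writable directory with the sticky bit."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

print(dir_mode("/tmp"))
```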
Debugging
You can review system services and download log files through the web application.
...