
  1. D s config

  2. Verify that the Spark master property is set as follows:

    Code Block
    "spark.master": "yarn",


  3. Review and set the following parameter based on your Hadoop distribution:

    Hadoop Distribution | Parameter Value | Value is required?
    CDH 6.x | "spark.useVendorSparkLibraries": true, | Yes. Additional configuration is in the next section.
    HDP 3.x | "spark.useVendorSparkLibraries": true, | Yes. Additional configuration is in the next section.
    CDH 5.x and earlier | "spark.useVendorSparkLibraries": false, | No
    HDP 2.x and earlier | "spark.useVendorSparkLibraries": false, | No


  4. Locate the following setting:

     

    Code Block
    "spark.version"


  5. Set the above value based on your Hadoop distribution:

    Hadoop Distribution | spark.version | Notes
    CDH 6.1 - CDH 6.3 | 2.4.4 | Platform must use native Hadoop libraries from the cluster. Additional configuration is required. See below.
    CDH 5.16.x | A supported version of Spark |
    HDP 3.1 | 2.3.0 | Platform must use native Hadoop libraries from the cluster. Additional configuration is required. See below.
    HDP 3.0 | 2.3.0 | Platform must use native Hadoop libraries from the cluster. Additional configuration is required. See below.
    HDP 2.6.x | A supported version of Spark |
    HDP 2.5.x | A supported version of Spark |
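
For example, a deployment on CDH 6.3 that runs Spark over YARN would combine the values from the preceding steps along these lines. This is an illustrative sketch using the flat key style shown in the snippets above; substitute the values for your own distribution:

Code Block
"spark.master": "yarn",
"spark.useVendorSparkLibraries": true,
"spark.version": "2.4.4",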


Acquire native libraries from the cluster

Info

NOTE: If the

D s node
is installed on an edge node of the cluster, you may skip this section.

You must acquire native Hadoop libraries from the cluster if you are using any of the following versions:

Hadoop version | Library location on cluster | D s node
location
Cloudera 6.0 or later | /opt/cloudera/parcels/CDH | See section below.
Hortonworks 3.0 or later | /usr/hdp/current | See section below.

Info

NOTE: Whenever the Hadoop distribution is upgraded on the cluster, the new versions of these libraries must be recopied to the following locations on the

D s node
. This maintenance task is not required if the
D s node
is an edge node of the cluster.

For more information on acquiring these libraries, please see the documentation provided with your Hadoop distribution.

Use native libraries on Cloudera 6.0 and later

To integrate with CDH 6.x, the platform must use the native Spark libraries. Please add the following properties to your configuration. 

Steps:

  1. D s config
  2. Set sparkBundleJar to the following:

    Code Block
    "sparkBundleJar":"/opt/cloudera/parcels/CDH/lib/spark/jars/*:/opt/cloudera/parcels/CDH/lib/spark/hive/*"


  3. For the Spark Job Service, the Spark bundle JAR must be added to the classpath:

    Info

    NOTE: The key modification is to remove the topOfTree element from the sparkBundleJar entry.


    Code Block
    "spark-job-service": {
        "classpath": "%(topOfTree)s/services/spark-job-server/server/build/libsinstall/spark-job-server-bundle.jarserver/lib/*:%(topOfTree)s/conf/hadoop-site/:/usr/lib/hdinsight-datalake/*:%(sparkBundleJar)s:%(topOfTree)s/%(hadoopBundleJar)s"
      },


  4. In the spark.props section, add the following property:

    Code Block
    "spark.yarn.jars":"local:/opt/cloudera/parcels/CDH/lib/spark/jars/*,local:/opt/cloudera/parcels/CDH/lib/spark/hive/*",


  5. Save your changes.
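
Taken together, the CDH 6.x settings in this section might look like the following. This is an illustrative sketch assembled from the snippets above; the nesting of the spark.props section shown here is an assumption, so adjust the layout to match your actual configuration file:

Code Block
"spark.useVendorSparkLibraries": true,
"sparkBundleJar": "/opt/cloudera/parcels/CDH/lib/spark/jars/*:/opt/cloudera/parcels/CDH/lib/spark/hive/*",
"spark-job-service": {
    "classpath": "%(topOfTree)s/services/spark-job-server/server/build/install/server/lib/*:%(topOfTree)s/conf/hadoop-site/:/usr/lib/hdinsight-datalake/*:%(sparkBundleJar)s:%(topOfTree)s/%(hadoopBundleJar)s"
  },
"spark.props": {
    "spark.yarn.jars": "local:/opt/cloudera/parcels/CDH/lib/spark/jars/*,local:/opt/cloudera/parcels/CDH/lib/spark/hive/*"
  },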

Use native libraries on Hortonworks 3.0 and later

To integrate with HDP 3.x, the platform must use the native Spark libraries. Please add the following properties to your configuration. 

Steps:

  1. Set the Hadoop bundle JAR to point to the one provided with your distribution. The example below points to HDP 3.0:

    Code Block
     "hadoopBundleJar": "hadoop-deps/hdp-3.0/build/libs/hdp-3.0-bundle.jar"


  2. Enable use of native libraries:

     

    Code Block
    "spark.useVendorSparkLibraries": true,


  3. Set the path to the Spark bundle JAR:

    Code Block
    "sparkBundleJar": "/usr/hdp/current/spark2-client/jars/*"


  4. Add the Spark bundle JAR to the Spark Job Service classpath (spark-job-service.classpath). Example:

    Code Block
    classpath: "%(topOfTree)s/services/spark-job-server/server/build/install/server/lib/*:%(topOfTree)s/conf/hadoop-site/:%(topOfTree)s/%(sparkBundleJar)s:%(topOfTree)s/%(hadoopBundleJar)s"


  5. The following property needs to be added or updated in spark.props. If there are other values in this property, the following value must appear first in the list:

    Code Block
    "spark.executor.extraClassPath": "/usr/hdp/current/spark2-client/jars/guava-14.0.1.jar"


  6. The following property needs to be added or updated in spark.props. It does not need to appear in a specific order relative to other properties:

    Code Block
    "spark.yarn.jars": "local:/usr/hdp/current/spark2-client/jars/*"


  7. Save your changes and restart the platform.
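
Assembled together, the HDP 3.x settings in this section might look like the following (using the HDP 3.0 bundle path). This is an illustrative sketch built from the snippets above; the nesting of the spark.props section is an assumption, so adjust the layout to match your actual configuration file. Note that spark.executor.extraClassPath is listed first within spark.props, per the ordering requirement above:

Code Block
"hadoopBundleJar": "hadoop-deps/hdp-3.0/build/libs/hdp-3.0-bundle.jar",
"spark.useVendorSparkLibraries": true,
"sparkBundleJar": "/usr/hdp/current/spark2-client/jars/*",
"spark-job-service.classpath": "%(topOfTree)s/services/spark-job-server/server/build/install/server/lib/*:%(topOfTree)s/conf/hadoop-site/:%(topOfTree)s/%(sparkBundleJar)s:%(topOfTree)s/%(hadoopBundleJar)s",
"spark.props": {
    "spark.executor.extraClassPath": "/usr/hdp/current/spark2-client/jars/guava-14.0.1.jar",
    "spark.yarn.jars": "local:/usr/hdp/current/spark2-client/jars/*"
  },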

Modify Spark version and Java JDK version

The

D s platform
defaults to using Spark 2.3.0. Depending on the version of your Hadoop distribution, you may need to modify the version of Spark that is used by the platform.

In the following table, you can review the Spark/Java version requirements for the

D s node
 hosting
D s product
.

To change the version of Spark in use by the

D s platform
, you change the value of the spark.version property, as listed below. No additional installation is required.

Additional requirements:

  • The supported cluster must use Java JDK 1.8.


  • If the platform is connected to an EMR cluster, you must set the local version of Spark (spark.version property) to match the version of Spark that is used on the EMR cluster. 
D s config


Spark 2.2.0

Info

NOTE: This version of Spark is required for CDH 6.0, which is deprecated. CDH 6.1 and later can use other versions.


Required Java JDK Version: Java JDK 1.8

Spark for

D s product


Code Block
"spark.version": "2.2.0",


 

With EMR:

Supported version(s) of EMR: EMR 5.8, EMR 5.9, and EMR 5.10

Spark for

D s product
with EMR

Spark version must match the Spark version on the EMR cluster:

Code Block
"spark.version": "2.2.0",



Spark 2.2.1

Required Java JDK Version: Java JDK 1.8

Spark for

D s product

Not supported.



With EMR:

Supported version(s) of EMR: EMR 5.11 and EMR 5.12

Spark for

D s product
with EMR

Spark version must match the Spark version on the EMR cluster:

Code Block
"spark.version": "2.2.1",



Info

NOTE: The Spark Job Service fails to start when spark.version is set to 2.2.1. Since the Spark Job Service is not required for EMR, the fix is to disable it by setting "spark-job-service.enabled" to false, as shown below.
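
A minimal sketch of that setting, using the flat key style shown elsewhere on this page:

Code Block
"spark-job-service.enabled": false,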


Spark 2.3.0

Tip

Tip: This version of Spark is required for HDP 3.x.


Required Java JDK Version: Java JDK 1.8

Spark for

D s product


Code Block
"spark.version": "2.3.0",




With EMR:

Supported version(s) of EMR: EMR 5.13 - EMR 5.19

Spark for

D s product
with EMR

Spark version must match the Spark version on the EMR cluster:

Code Block
"spark.version": "2.3.0",



Spark 2.4.4


Info

NOTE: For Azure Databricks, you must provide the specific version of Spark through a different configuration parameter. See Configure for Azure Databricks.



Required Java JDK Version: Java JDK 1.8

Spark for

D s product


Code Block
"spark.version": "2.4.4",


Info

NOTE: For Spark 2.4.0 and later, please verify that the following is set:

Code Block
"spark.useVendorSparkLibraries": true,




With EMR:

Supported version(s) of EMR: EMR 5.20 - EMR 5.27

Spark for

D s product
with EMR

Spark version must match the Spark version on the EMR cluster.
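
For example, if your EMR cluster runs Spark 2.4.4, the local setting would be:

Code Block
"spark.version": "2.4.4",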


Additional Configuration

Restart Platform

You can restart the platform now. See Start and Stop the Platform.

Verify Operations

At this point, you should be able to run a job in the platform, which launches a Spark execution job and a profiling job. Results appear normally in the

D s webapp
.

Steps:

To verify that the Spark running environment is working:

  1. After you have applied changes to your configuration, you must restart services. See Start and Stop the Platform.
  2. Through the application, run a simple job, including visual profiling. Be sure to select Spark as the running environment.
  3. The job should appear as normal in the Job Status page.
  4. To verify that it ran on Spark, open the following file:
    /opt/trifacta/logs/batch-job-runner.log
  5. Search the log file for a SPARK JOB INFO block with a timestamp corresponding to your job execution.
  6. See below for information on how to check the job-specific logs.
  7. Review any errors.

For more information, see Verify Operations.

Logs

Service logs:

Logs for the Spark Job Service are written to the following location:

/opt/trifacta/logs/spark-job-service.log

Additional log information on the launching of profile jobs is located here:

/opt/trifacta/logs/batch-job-runner.log

Job logs:

When profiling jobs fail, additional log information is written to the following:

/opt/trifacta/logs/jobs/<job-id>/spark-job.log

Troubleshooting

Below is a list of common errors in the log files and their likely causes.

Problem - Spark job succeeds on the cluster but is reported as failed in the application; Spark Job Service crashes repeatedly

Whenever a Spark job is executed, it is reported back as having failed. On the cluster, the job appears to have succeeded. However, in the Spark Job Service logs, the Spark Job Service cannot find any of the applications that it has submitted to the resource manager.

In this case, the root problem is that Spark is unable to delete temporary files after the job has completed execution. During job execution, a set of ephemeral files may be written to the designated temporary directory on the cluster, which is typically /trifacta/tempfiles. In most cases, these files are removed transparently to the user.

  • This location is defined in the hdfs.pathsConfig.tempFiles parameter in 
    D s triconf
    .

In some cases, those files may be left behind. To account for this accumulation in the directory, the 

D s platform
 performs a periodic cleanup operation to remove temp files that are over a specified age. 

  • The age in days is defined in the job.tempfiles.cleanup.age parameter in 
    D s triconf
    .
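
For reference, the temporary-files location and cleanup-age parameters described above might appear along the following lines, using the flat dotted-key notation used on this page. The path shown is the typical default noted above, and the age value here is only an illustration:

Code Block
"hdfs.pathsConfig.tempFiles": "/trifacta/tempfiles",
"job.tempfiles.cleanup.age": 7,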

This cleanup operation can fail if HDFS is configured to send Trash to an encrypted zone. The HDFS API does not support the skipTrash option, which is available through the HDFS CLI. In this scenario, the temp files are not successfully removed, and the files continue to accumulate without limit in the temporary directory. Eventually, this accumulation of files can cause the Spark Job Service to crash with Out of Memory errors.

Solution

The following are possible solutions:

  1. Solution 1: Configure HDFS to use an unencrypted zone for Trash files.
  2. Solution 2: 
    1. Disable temp file cleanup in 

      D s triconf
      :

      Code Block
      "job.tempfiles.cleanup.age": 0,


    2. Clean up the tempfiles directory using an external process.

Problem - Spark Job Service fails to start with an "Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.5.3" error

The Spark Job Service fails to start, with an error similar to the following in the spark-job-service.log file:

Code Block
Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.5.3

Solution

Some versions of the hadoopBundleJar contain older versions of the Jackson dependencies, which break the spark-job-service.

To ensure that the spark-job-service is provided the correct Jackson dependency versions, the sparkBundleJar must be listed before the hadoopBundleJar in the spark-job-service.classpath, which is inserted as a parameter in

D s triconf
. Example:

Code Block
"spark-job-service.classpath": "%(topOfTree)s/services/spark-job-server/server/build/install/server/lib/*:%(topOfTree)s/conf/hadoop-site/:%(topOfTree)s/%(sparkBundleJar)s:%(topOfTree)s/%(hadoopBundleJar)s"

Problem - Spark jobs fail with "Unknown message type: -22" error

Spark jobs may fail with the following error in the YARN application logs:

Code Block
ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Unable to create executor due to Unable to register with external shuffle server due to : java.lang.IllegalArgumentException: Unknown message type: -22
at org.apache.spark.network.shuffle.protocol.BlockTransferMessage$Decoder.fromByteBuffer(BlockTransferMessage.java:67)

Solution

This problem may occur if Spark authentication is disabled on the Hadoop cluster but enabled in the 

D s platform
. Spark authentication must match on the cluster and the platform. 

Steps:

  1. D s config
  2. Locate the spark.props entry. 
  3. Insert the following property and value:

    Code Block
    "spark.authenticate": "false"


  4. Save your changes and restart the platform.

Problem - Spark jobs fail when Spark Authentication is enabled on the Hadoop cluster

When Spark authentication is enabled on the Hadoop cluster, Spark jobs can fail. The YARN log file message looks something like the following:

Code Block
17/09/22 16:55:42 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, example.com, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Unable to create executor due to Unable to register with external shuffle server due to : java.lang.IllegalStateException: Expected SaslMessage, received something else (maybe your client does not have SASL enabled?)
at org.apache.spark.network.sasl.SaslMessage.decode(SaslMessage.java:69)

Solution

When Spark authentication is enabled on the Hadoop cluster, the 

D s platform
 must also be configured with Spark authentication enabled.

  1. D s config
  2. Inside the spark.props entry, insert the following property value:

    Code Block
    "spark.authenticate": "true"


  3. Save your changes and restart the platform.

Problem - Job fails with "Required executor memory MB is above the max threshold MB of this cluster" error

When executing a job through Spark, the job may fail with the following error in the spark-job-service.log:

Code Block
Required executor memory (6144+614 MB) is above the max threshold (1615 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

Solution

The per-container memory allocation in Spark (spark.executor.memory and spark.driver.memory) must not exceed the YARN thresholds. See the Spark tuning properties described above.
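
For example, to fit under a YARN maximum allocation of roughly 6 GB per container, you might set values like the following in the spark.props section. These numbers are purely illustrative; choose values that, together with the Spark memory overhead, stay below your cluster's yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb:

Code Block
"spark.executor.memory": "4g",
"spark.driver.memory": "4g",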

Problem - Job fails with ask timed out error

When executing a job through Spark, the job may fail with the following error in the spark-job-service.log file:

Code Block
Job submission failed
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://SparkJobServer/user/ProfileLauncher#1213485950]] after [20000 ms]

There is a 20-second timeout on the attempt to submit a profiling job to YARN. If the initial upload of the Spark libraries to the cluster takes longer than 20 seconds, the spark-job-service times out and returns an error to the UI. However, the libraries do finish uploading successfully to the cluster.

The library upload is a one-time operation for each install/upgrade. Despite the error, the libraries are uploaded successfully the first time. This error does not affect subsequent profiler job runs.

Solution:

Try running the job again.

Problem - Spark fails with ClassNotFound error in Spark job service log

When executing a job through Spark, the job may fail with the following error in the spark-job-service.log file:

Code Block
java.lang.ClassNotFoundException: com.trifacta.jobserver.profiler.Profiler

By default, the Spark job service attempts to optimize the distribution of the Spark JAR files across the cluster. This optimization involves a one-time upload of the spark-assembly and profiler-bundle JAR files to HDFS. Then, YARN distributes these JARs to the worker nodes of the cluster, where they are cached for future use.

In some cases, the localized JAR files can get corrupted on the worker nodes, causing this ClassNotFound error to occur.

Solution:

The solution is to disable this optimization through platform configuration.

Steps:

  1. D s config
  2. Locate the spark-job-service configuration node.
  3. Set the following property to false:

    Code Block
    "spark-job-service.optimizeLocalization" : true


  4. Save your changes and restart the platform.

Problem - Spark fails with PermGen OutOfMemory error in the Spark job service log

When executing a job through Spark, the job may fail with the following error in the spark-job-service.log file:

 

Code Block
Exception in thread "LeaseRenewer:trifacta@nameservice1" java.lang.OutOfMemoryError: PermGen space

Solution:

The solution is to configure the PermGen space for the Spark driver:

  1. D s config
  2. Locate the spark configuration node.
  3. Set the following property to the given value:

    Code Block
    "spark.props.spark.driver.extraJavaOptions" : "-XX:MaxPermSize=1024m -XX:PermSize=256m",


  4. Save your changes and restart the platform.

Problem - Spark fails with "token (HDFS_DELEGATION_TOKEN token) can't be found in cache" error in the Spark job service log on a Kerberized cluster when Impersonation is enabled

When executing a job through Spark, the job may fail with the following error in the spark-job-service.log file:

 

Code Block
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token x for trifacta) can't be found in cache

 

Solution:

The solution is to set Spark impersonation to true:

  1. D s config
  2. Locate the spark-job-service configuration node.
  3. Set the following property to the given value:

    Code Block
    "spark-job-service.sparkImpersonationOn" : true,


  4. Save your changes and restart the platform.

Problem - Spark fails with "job aborted due to stage failure" error

Issue:

Spark fails with an error similar to the following in the spark-job-service.log:

Code Block
"Job aborted due to stage failure: Total size of serialized results of 208 tasks (1025.4 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)"

Explanation:

The spark.driver.maxResultSize parameter determines the limit of the total size of serialized results of all partitions for each Spark action (e.g. collect). If the total size of the serialized results exceeds this limit, the job is aborted.

To enable serialized results of unlimited size, set this parameter to zero (0).

Solution:


  1. D s config
  2. In the spark.props section of the file, remove the size limit by setting this value to zero:

    Code Block
    "spark.driver.maxResultSize": "0"


  3. Save your changes and restart the platform.

Problem - Spark job fails with "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState'" error

Issue:

Spark job fails with an error similar to the following in either the spark-job.log or the yarn-app.log file:

Code Block
"java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState'"

Explanation:

By default, the Spark running environment attempts to connect to Hive when it creates the Spark Context. This connection attempt may fail if Hive connectivity (in conf/hadoop-site/hive-site.xml) is not configured correctly on the

D s node
.

Solution:

This issue can be fixed by configuring Hive connectivity on the edge node.

If Hive connectivity is not required, the Spark running environment's default behavior can be changed as follows:

  1. D s config
  2. In the spark-job-service section of the file, disable Hive connectivity by setting this value to false:

    Code Block
    "spark-job-service.enableHiveSupport": false


  3. Save your changes and restart the platform.

Problem - Spark job fails with "No Avro files found. Hadoop option "avro.mapred.ignore.inputs.without.extension" is set to true. Do all input files have ".avro" extension?" error

Issue:

Spark job fails with an error similar to the following in either the spark-job.log or the yarn-app.log file:

Code Block
java.io.FileNotFoundException: No Avro files found. Hadoop option "avro.mapred.ignore.inputs.without.extension" is set to true. Do all input files have ".avro" extension?

Explanation:

By default, Spark-Avro requires all Avro files to have the .avro extension, which includes all part files in a source directory. Spark-Avro ignores any files that do not have the .avro extension.

If a directory contains part files without an extension (e.g. part-00001, part-00002), Spark-Avro ignores these files and throws the "No Avro files found" error.

Solution:

This issue can be fixed by setting the spark.hadoop.avro.mapred.ignore.inputs.without.extension property to false:

  1. D s config
  2. To the spark.props section of the file, add the following setting if it does not already exist. Set its value to false:

    Code Block
    "spark.hadoop.avro.mapred.ignore.inputs.without.extension": "false"


  3. Save your changes and restart the platform.

Problem - Spark job fails in the platform but successfully executes on Spark

Issue:

After you have submitted a job to be executed on the Spark cluster, the job may fail in the

D s platform
after 30 minutes. However, on a busy cluster, the job remains enqueued and is eventually picked up and executed. Since the job was canceled in the platform, results are not returned.

Explanation:

This issue is caused by a timeout setting for Batch Job Runner, which cancels management of jobs after a predefined number of seconds. Since these jobs are already queued on the cluster, they may be executed independent of the platform.

Solution:

This issue can be fixed by increasing the Batch Job Runner Spark timeout setting:

  1. D s config
  2. Locate the following property. By default, it is set to 172800, which is 48 hours:

    Code Block
    "batchserver.spark.progressTimeoutSeconds": 172800,


  3. If your value is lower than the default, increase it to a value high enough for your job to succeed.

  4. Save your changes and restart the platform.
  5. Re-run the job.