Job Types

The following types of jobs can be executed in :

For more information, see Run Job Page.

Tip: Information on failed plan executions is contained in the orchestration-service.log file, which can be acquired in the support bundle. For more information, see Support Bundle Contents.

Identify Job Failures

When a job fails to execute, a failure message appears in the following locations:

The following is an example from the Jobs page:

Publish job failed

In the above example, the Transform and Profile jobs completed, but the Publish job failed. In this case, the results still exist and, if the source of the problem can be diagnosed, they can be published separately. From the job's context menu, select Download Logs to review the job logs for the cause of the failure. See Review Logs below.

Invalid file paths

When your job uses files as inputs or outputs, you may receive invalid file path errors. Depending on the backend datastore, these can be one of the following:

Jobs that Hang

In some cases, a job may remain in a pending state indefinitely. Typically, these errors are related to a failure of the job tracking service. You can try the following:

Spark Job Error Messages

The following error messages may appear in the  when a Spark job fails to execute.

"Aggregate too many columns" error

Your job could not be completed because one or more Pivot, Window, or other Aggregation recipe steps contains too many aggregate functions in the Values parameter.

Solution: Please split these aggregates across multiple Aggregation steps.
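The workaround above can be sketched in plain Python. This is a hypothetical illustration, not product API: it simply splits a long list of aggregate functions into smaller batches, one batch per Aggregation step.

```python
# Hypothetical sketch: split a long list of aggregate functions into
# batches, so each batch can become its own Aggregation step.
def batch_aggregates(funcs, max_per_step=10):
    """Return the aggregate functions grouped into lists of at most max_per_step."""
    return [funcs[i:i + max_per_step] for i in range(0, len(funcs), max_per_step)]

# Example: twelve aggregates split into steps of at most five.
steps = batch_aggregates(["sum_a", "avg_b", "min_c", "max_d", "count_e",
                          "sum_f", "avg_g", "min_h", "max_i", "count_j",
                          "sum_k", "avg_l"], max_per_step=5)
for step in steps:
    print(step)
```

The batch size of 10 is an arbitrary placeholder; the safe limit depends on your environment.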

"Binary sort" error

Sorting a nested column such as an array or map is not supported.

"Codegen" error

Your job could not be completed due to the complexity of your recipe.


"Colon in path" error

Your job references one or more invalid file paths. File and folder names cannot contain the colon character.

"Invalid input path" error

Your job references one or more invalid file paths. File names cannot begin with characters like dot or underscore.
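The two path rules above can be checked before running a job. The sketch below is a hypothetical pre-flight validator, not product code: it flags paths containing a colon and file names beginning with a dot or underscore.

```python
import os

# Hypothetical pre-flight check mirroring the two path rules above:
# no colon characters anywhere in the path, and no file names that
# begin with a dot or an underscore.
def find_invalid_paths(paths):
    """Return (path, reason) tuples for paths a job would likely reject."""
    problems = []
    for path in paths:
        name = os.path.basename(path)
        if ":" in path:
            problems.append((path, "contains a colon character"))
        elif name.startswith((".", "_")):
            problems.append((path, "file name begins with a dot or underscore"))
    return problems

# Example: the second and third paths violate the rules.
for path, reason in find_invalid_paths(
        ["/data/in/sales.csv", "/data/in/_tmp.csv", "/data/out/run:1.csv"]):
    print(f"{path}: {reason}")
```

Note that the colon rule applies to backend datastore paths such as HDFS; local Windows paths like `C:\` would need separate handling.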

"Invalid union" error

Union operations can only be performed on tables with compatible column types.

Tip: Edit the union in question. Verify that the columns are properly aligned and have consistent data types. For more information, see Union Page.
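The compatibility rule can be illustrated with a small sketch. The schemas below are simple (column, type) listings invented for illustration, not the product's schema format: a union is valid only when the two tables have the same number of columns and matching types position by position.

```python
# Hypothetical illustration of the invalid-union condition: columns must
# line up by position and hold compatible types. Schemas are plain
# (column_name, type_name) tuples, not product API.
def union_compatible(schema_a, schema_b):
    """Return True if two schemas can be unioned (same width, matching types)."""
    if len(schema_a) != len(schema_b):
        return False
    return all(ta == tb for (_, ta), (_, tb) in zip(schema_a, schema_b))

orders  = [("id", "int"), ("total", "decimal")]
refunds = [("id", "int"), ("total", "string")]  # mismatched type on 'total'

print(union_compatible(orders, orders))   # same schema: compatible
print(union_compatible(orders, refunds))  # type mismatch: would raise the error
```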

"Job service unreachable" error

There was an error communicating with the Spark Job Service.

Tip: An administrator can review the contents of the spark-job-service.log file for details. See System Services and Logs.

"Oom" error

When you encounter out of memory errors related to job execution, you should review the following general items related to your flow.

General Tips:

"Path not found during execution" error

One or more datasources referenced by your job no longer exist.

Tip: Review your flow and all of its upstream dependencies to locate the broken datasource. Reference errors for upstream dependencies may be visible in downstream recipes.

"Too many columns" error

Your job could not be completed due to one or more datasets containing a large number of columns.

Tip: A general rule of thumb is to avoid over 1000 columns in your dataset. Depending on your environment, you may experience performance issues and job failures on narrower datasets.
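One way to catch this before submitting a job is to count the columns in the source file. The snippet below is a hypothetical pre-flight check for CSV inputs, assuming the first row is a header; the 1000-column threshold is the rough guideline mentioned above.

```python
import csv
import io

# Hypothetical pre-flight check for wide datasets: count the header
# columns of a CSV and flag anything over the rough 1000-column
# guideline mentioned above.
def column_count(csv_text):
    """Return the number of columns in the first (header) row of a CSV."""
    reader = csv.reader(io.StringIO(csv_text))
    return len(next(reader))

sample = "id,name,total\n1,widget,9.99\n"
n = column_count(sample)
if n > 1000:
    print(f"Warning: {n} columns may cause performance issues or job failures")
else:
    print(f"{n} columns: within the general guideline")
```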

"Version mismatch" error

The version of Spark installed on your Hadoop cluster does not match the version of Spark that the product is configured to use.

Tip: For more information on the appropriate version to configure for the product, see Configure for Spark.

Databricks Job Errors

The following error messages are specific to Spark errors encountered when running jobs on Databricks.

NOTE: When a Databricks job fails, the failure is immediately reported in the . Collection of the job log files from Databricks occurs afterward in the background.

Tip: A platform administrator may be able to download additional logs for help in diagnosing job errors.

"Runtime cluster" error

There was an error running your job.

"Staging cluster" error

There was an error launching your job.

Try Other Job Options

You can try to re-execute the job using different options.


Tip: The  running environment is not suitable for jobs on large datasets. You should accept the running environment recommended in the Run Job page.

Review Logs

Job logs

In the listing for the job on the Jobs page, click Download Logs to send the job-related logs to your local desktop.

NOTE: If encryption has been enabled for log downloads, you must be an administrator to see a clear-text version of the logs listed below. For more information, see Configure Support Bundling.

When you unzip the ZIP file, you should see a folder named with the internal identifier for your job. If you executed a transform job and a profile job, the ZIP contains two numbered folders, with the lower number representing the transform job.
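The folder layout above can be inspected programmatically. The helper below is a hypothetical sketch, not a product utility: it lists the numbered top-level folders inside a downloaded log ZIP and sorts them numerically, so the lowest number (the transform job, per the note above) comes first.

```python
import io
import re
import zipfile

# Hypothetical helper: list the numbered job folders inside a downloaded
# log ZIP, sorted so the lowest number (the transform job) comes first.
def job_folders(zip_source):
    """Return the numbered top-level folder names in a job-log ZIP, lowest first."""
    with zipfile.ZipFile(zip_source) as zf:
        tops = {name.split("/")[0] for name in zf.namelist() if "/" in name}
    return sorted((t for t in tops if re.fullmatch(r"\d+", t)), key=int)
```

`zip_source` may be a file path or any file-like object, as `zipfile.ZipFile` accepts both.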

job.log. Review this log file for information on how the job was handled by the application.

Tip: Search this log file for the term error.
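The tip above amounts to a simple case-insensitive scan. A minimal sketch, assuming the log has already been read into a string:

```python
# Minimal sketch: scan the contents of job.log for lines mentioning
# "error" (case-insensitive), usually the fastest way to locate the
# failure reason. The sample text below is invented for illustration.
def error_lines(log_text):
    """Return the lines of a log that mention 'error', in order."""
    return [line for line in log_text.splitlines() if "error" in line.lower()]

sample = "INFO  job started\nERROR stage 2 failed: path not found\nINFO  cleanup\n"
for line in error_lines(sample):
    print(line)
```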

Support bundle: If support bundling has been enabled in your environment, the support-bundle folder contains a set of configuration and log files that can be useful for debugging job failures.

Tip: Please include this bundle with any request for assistance to .

For more information on configuring the support bundle, see Configure Support Bundling.

For more information on the bundle contents, see Support Bundle Contents.

Support logs

For support use, the most meaningful logs and configuration files can be downloaded from the application. Select Help menu > Download logs.

NOTE: If you are submitting an issue to , please download these files through the application.

For more information, see Download Logs Dialog.

The admin version of this dialog enables downloading logs by timeframe, job ID, or session ID. For more information, see Admin Download Logs Dialog.


NOTE: You must be an administrator to access these logs. These logs are included when an administrator downloads logs for a failed job. See above.

On the , these logs are located in the following directory:


This directory contains the following logs:

Hadoop logs

In addition to these logs, you can also use the Hadoop job logs to troubleshoot job failures.

Contact Support

If you are unable to diagnose your job failure, please contact .

NOTE: When you contact support about a job failure, please be sure to download and include the entire zip file, your recipe, and (if possible) your dataset.