This section describes how to configure the platform to integrate with Databricks hosted in Azure.

NOTE: You cannot integrate with existing Azure Databricks clusters.



NOTE: If you are using Azure AD to integrate with an Azure Databricks cluster, the Azure AD secret value stored in azure.secret must begin with an alphanumeric character. This is a known issue.

Job counts

By default, the number of jobs permitted on an Azure Databricks cluster is set to 1000.

NOTE: To enable retrieval and auditing of job information after a job has been completed, the platform does not delete jobs from the cluster. As a result, jobs can accumulate over time and exceed the number of jobs permitted on the cluster. You should periodically delete jobs on your Azure Databricks cluster to avoid reaching this limit and receiving a Quota for number of jobs has been reached error.
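For example, completed jobs can be removed through the Databricks Jobs API. A minimal sketch of the request body for the POST /api/2.0/jobs/delete endpoint on your Databricks service URL (the job_id value is illustrative):

    {
      "job_id": 123
    }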

Create Cluster

NOTE: Integration with pre-existing Azure Databricks clusters is not supported.

When a user first requests access to Azure Databricks, a new Azure Databricks cluster is created for the user. Access can include a request to run a job or to browse Databricks Tables. Cluster creation may take a few minutes.

A user's cluster automatically terminates after a configurable idle period. A new cluster is automatically created when the user next requests access to Azure Databricks, such as launching a job or browsing Databricks Tables. See "Configure Platform" below.


To enable Azure Databricks, please perform the following configuration changes. 


  1. Locate the following parameters. Set them to the values listed below, which enable the Trifacta Server running environment (smaller jobs) and the Azure Databricks running environment (small to extra-large jobs):

    "webapp.runInTrifactaServer": true,
    "webapp.runInDatabricks": true,
    "webapp.runWithSparkSubmit": false,
    "webapp.runinEMR": false,
    "webapp.runInDataflow": false,

  2. Do not save your changes until you have completed the following configuration section.


Configure Platform

Please review and modify the following configuration settings.

NOTE: When you have finished modifying these settings, save them and restart the platform to apply.

feature.parameterization.maxNumberOfFilesForExecution.databricksSpark
Maximum number of parameterized source files that are permitted to be executed as part of an Azure Databricks job.

feature.parameterization.matchLimitOnSampling.databricksSpark
Maximum number of parameterized source files that are permitted for matching in a single dataset with parameters.

databricks.workerNodeType
Type of node to use for the Azure Databricks Workers/Executors. There are one or more Worker nodes per cluster.

Default: Standard_D3_v2

NOTE: This property is unused when instance pooling is enabled. For more information, see Configure instance pooling below.

For more information, see the sizing guide for Azure Databricks.

databricks.sparkVersion
Azure Databricks cluster version, which also includes the Spark version.

NOTE: Please verify that this value is set to the following: 5.5.x-scala2.11.

Please do not use other values. For more information, see Configure for Spark.

databricks.serviceUrl
URL to the Azure Databricks Service where Spark jobs will be run.

databricks.minWorkers
Initial number of Worker nodes in the cluster, and also the minimum number of Worker nodes that the cluster can scale down to during auto-scale-down.

Minimum value: 1

Increasing this value can increase compute costs.

databricks.maxWorkers
Maximum number of Worker nodes the cluster can create during auto-scaling.

Minimum value: Not less than databricks.minWorkers.

Increasing this value can increase compute costs.


databricks.poolId
If you have enabled instance pooling in Azure Databricks, you can specify the pool identifier here. For more information, see Configure instance pooling below.


databricks.driverNodeType
Type of node to use for the Azure Databricks Driver. There is only one Driver node per cluster.

Default: Standard_D3_v2

For more information, see the sizing guide for Databricks.

NOTE: This property is unused when instance pooling is enabled. For more information, see Configure instance pooling below.

databricks.logsDestination
DBFS location to which cluster logs are sent every 5 minutes. Leave this value as /trifacta/logs.

databricks.enableAutotermination
Set to true to enable auto-termination of a user cluster after N minutes of idle time, where N is the value of the autoterminationMinutes property. Unless otherwise required, leave this value as true.

databricks.clusterStatePollerDelayInSeconds
Number of seconds to wait between polls for Azure Databricks cluster status when a cluster is starting up.

databricks.clusterStartupWaitTimeInMinutes
Maximum time in minutes to wait for a cluster to reach the Running state before aborting and failing an Azure Databricks job.

databricks.clusterLogSyncWaitTimeInMinutes
Maximum time in minutes to wait for a cluster to complete syncing its logs to DBFS before giving up on pulling the cluster logs to the platform. Set this to 0 to disable cluster log pulls.

databricks.clusterLogSyncPollerDelayInSeconds
Number of seconds to wait between polls for a Databricks cluster to sync its logs to DBFS after job completion.

databricks.autoterminationMinutes
Idle time in minutes before a user cluster will auto-terminate. Do not set this value to less than the cluster startup wait time value.

spark.useVendorSparkLibraries
When true, the platform bypasses shipping its installed Spark libraries to the cluster with each job's execution. Default is false.

NOTE: Set this value to true.
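For reference, a minimal sketch of how several of these settings might appear together in configuration (the service URL is a placeholder, and the maxWorkers and autoterminationMinutes values are illustrative only):

    "databricks.workerNodeType": "Standard_D3_v2",
    "databricks.sparkVersion": "5.5.x-scala2.11",
    "databricks.serviceUrl": "<your Azure Databricks service URL>",
    "databricks.minWorkers": 1,
    "databricks.maxWorkers": 4,
    "databricks.logsDestination": "/trifacta/logs",
    "databricks.enableAutotermination": true,
    "databricks.autoterminationMinutes": 60,

Remember that databricks.autoterminationMinutes should not be set lower than databricks.clusterStartupWaitTimeInMinutes.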

Configure instance pooling

Instance pooling reduces cluster node spin-up time by maintaining a set of idle, ready instances. The platform can be configured to leverage instance pooling on the Azure Databricks cluster.


To enable:

  1. Acquire your pool identifier from Azure Databricks.
  2. Set the following parameter to the Azure Databricks pool identifier:

    "databricks.poolId": "<my_pool_id>",

  3. Save your changes and restart the platform.

NOTE: When instance pooling is enabled, the following parameters are not used:

databricks.driverNodeType
databricks.workerNodeType

Configure personal access token

Each user must provide a Databricks Personal Access Token to access Databricks resources. For more information, see Databricks Personal Access Token Page.

Additional Configuration

Enable SSO for Azure Databricks

To enable SSO authentication with Azure Databricks, you enable SSO integration with Azure AD. For more information, see Configure SSO for Azure AD.

Enable Azure Managed Identity access

For enhanced security, you can configure the platform to use an Azure Managed Identity. When this feature is enabled, the platform queries the Key Vault for the secret holding the applicationId and secret of the service principal that provides access to the Azure services.

NOTE: This feature is supported for Azure Databricks only.

NOTE: Your Azure Key Vault must already be configured, and the applicationId and secret must be available in the Key Vault. See Configure for Azure.

To enable this feature, specify the following parameters for the platform.

azure.managedIdentities.enabled
Set to true to enable use of Azure managed identities.

azure.managedIdentities.keyVaultApplicationidSecretName
Specify the name of the Azure Key Vault secret that holds the service principal Application Id.

azure.managedIdentities.keyVaultApplicationSecretSecretName
Specify the name of the Key Vault secret that holds the service principal secret.
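For example, the settings might look as follows (the secret names are placeholders for the names you assigned in your Azure Key Vault):

    "azure.managedIdentities.enabled": true,
    "azure.managedIdentities.keyVaultApplicationidSecretName": "<applicationId secret name>",
    "azure.managedIdentities.keyVaultApplicationSecretSecretName": "<service principal secret name>",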

Save your changes.

Pass additional Spark properties

As needed, you can pass additional properties to the Spark running environment through the spark.props configuration area.

NOTE: These properties are passed to Spark for all jobs.


  1. Search for the following property: spark.props.
  2. Insert new Spark properties. For example, you can specify the spark.props.spark.executor.memory property, which changes the memory allocated to the Spark executor on each node by using the following in the spark.props area:

    "spark": {
      "props": {
        "spark.executor.memory": "6GB"

  3. Save your changes and restart the platform.

For more information on modifying these settings, see Configure for Spark.


Run job from application

When the above configuration has been completed, you can select the running environment through the application. See Run Job Page.

Run job via API

You can use API calls to execute jobs.

Please make sure that the request body contains the following:

    "execution": "databricksSpark",

For more information, see API JobGroups Create v4.
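For illustration, a sketch of a complete request body (the wrangledDataset field and its id value are assumptions for this example; see API JobGroups Create v4 for the authoritative schema):

    {
      "wrangledDataset": {
        "id": 7
      },
      "execution": "databricksSpark"
    }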


Troubleshooting

Spark job on Azure Databricks fails with "Invalid spark version" error

When running a job using Spark on Azure Databricks, the job may fail with the above invalid version error. In this case, the Databricks version of Spark has been deprecated.


Since an Azure Databricks cluster is created for each user, the solution is to identify the cluster version to use, configure the platform to use it, and then restart the platform.

  1. Acquire the value for databricks.sparkVersion.
  2. In Azure Databricks, compare your value to the list of supported Azure Databricks versions. If your version is unsupported, identify a new version to use.

    NOTE: Please make note of the version of Spark supported for the version of Azure Databricks that you have chosen.

  3. In the platform configuration:
    1. Set databricks.sparkVersion to the new version to use.
    2. Set spark.version to the appropriate version of Spark to use, as sketched below.
  4. Restart the platform. After the restart, a new Azure Databricks cluster is created for each user, using the specified values, when the user next runs a job.
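For illustration, the two settings might look as follows after the update (the version strings are placeholders; substitute values from the supported versions list):

    "databricks.sparkVersion": "<supported Azure Databricks runtime version>",
    "spark.version": "<matching Spark version>",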