...

The job clusters automatically terminate after the job is completed. A new cluster is automatically created when the user next requests access to AWS Databricks. 

Cluster Mode | Description

USER

When a user submits a job,

D s product
 creates a new cluster and persists the cluster ID in 
D s product
 metadata for the user if the cluster does not exist or is invalid. If the user already has a valid interactive cluster, then the existing cluster is reused when submitting the job.

Reset to JOB mode to run jobs in AWS Databricks.

JOB

When a user submits a job,

D s product
 provides all the cluster specifications in the Databricks API. Databricks creates a cluster only for this job and terminates it as soon as the job completes.

Default cluster mode to run jobs in AWS Databricks.
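The two modes can be sketched in terms of the payload sent to the Databricks Jobs API. The function and field values below are illustrative, not the platform's actual internals; the cluster spec uses the default values from the parameter tables later in this page.

```python
# Hypothetical sketch of how the two cluster modes map onto a Databricks
# job-submission payload. Names and values are illustrative only.

def build_job_request(cluster_mode, user_cluster_id=None):
    """Return the job-submission payload for the given cluster mode."""
    if cluster_mode == "USER":
        # Reuse the user's persisted interactive cluster if it is still valid.
        return {"existing_cluster_id": user_cluster_id}
    elif cluster_mode == "JOB":
        # Provide a full cluster spec; Databricks creates a cluster for this
        # job only and terminates it when the job completes.
        return {
            "new_cluster": {
                "spark_version": "7.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            }
        }
    raise ValueError(f"Unknown cluster mode: {cluster_mode}")
```

In JOB mode the payload always carries a fresh cluster spec, which is why no idle cluster is left behind after the job completes.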


Configure Instance Profiles in AWS Databricks

...

Info

NOTE: For AWS Databricks, you can configure the instance profile value in databricks.awsAttributes.instanceProfileArn only when aws.credentialProvider is set to instance or temporary.


aws.credentialProvider | AWS Databricks permissions
instance

D s platform
 or Databricks jobs get all permissions directly from the instance profile.

temporary

D s platform
 or Databricks jobs use temporary credentials that are issued based on system or user IAM roles.

Info

NOTE: The instance profile must have policies that allow  

D s platform
 or Databricks to assume those roles.


default | n/a


Info

NOTE: If the aws.credentialProvider is set to temporary or instance while using AWS Databricks:

  • databricks.awsAttributes.instanceProfileArn must be set to a valid value for Databricks jobs to run successfully.
  • The aws.ec2InstanceRoleForAssumeRole flag is ignored for Databricks jobs.

...

For more information, see https://docs.databricks.com/clusters/instance-pools/configure.html.

Instance pooling for worker nodes

...

Following is the list of parameters that must be set to integrate AWS Databricks with 

D s platform
:

Required Parameters

Parameter | Description | Value

databricks.serviceUrl

URL to the AWS Databricks service where Spark jobs will be run. | -
metadata.cloud

Must be set to aws and should not be changed to any other value while using AWS Databricks.

Default: aws
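As a rough illustration, the two required settings can be represented as a configuration fragment. The service URL shown is a hypothetical placeholder; the actual file location and storage format depend on your deployment.

```python
# Illustrative only: the key names come from the table above, but the
# service URL is a hypothetical placeholder.
required_settings = {
    "databricks.serviceUrl": "https://dbc-XXXXXXXX.cloud.databricks.com",
    "metadata.cloud": "aws",  # must stay "aws" for AWS Databricks
}
```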

Following is the list of parameters that can be reviewed or modified based on your requirements:

Optional Parameters

Parameter | Description | Value

databricks.awsAttributes.firstOnDemandInstances

Number of initial cluster nodes to be placed on on-demand instances. The remainder are placed on instances of the configured availability type.

Default: 1

databricks.awsAttributes.availability

Availability type used for all subsequent nodes past the firstOnDemandInstances.

Default: SPOT_WITH_FALLBACK

databricks.awsAttributes.availabilityZone

Identifier for the availability zone/datacenter in which the cluster resides. The provided availability zone must be in the same region as the Databricks deployment.


databricks.awsAttributes.spotBidPricePercent

The max price for AWS spot instances, as a percentage of the corresponding instance type's on-demand price. When spot instances are requested for this cluster, only spot instances whose max price percentage matches this field will be considered.

Default: 100

databricks.awsAttributes.ebsVolume

The type of EBS volumes that will be launched with this cluster.

Default: None

databricks.awsAttributes.instanceProfileArn

EC2 instance profile ARN for the cluster nodes. This is only used when AWS credential provider is set to temporary/instance. The instance profile must have previously been added to the Databricks environment by an account administrator.

For more information, see Configure for AWS Authentication.
databricks.clusterMode

Determines the cluster mode for running a Databricks job.

Default: JOB
feature.parameterization.matchLimitOnSampling.databricksSpark | Maximum number of parameterized source files that are permitted for matching in a single dataset with parameters. | Default: 0
databricks.workerNodeType | Type of node to use for the AWS Databricks Workers/Executors. There are 1 or more Worker nodes per cluster.

Default: i3.xlarge



databricks.sparkVersion | AWS Databricks runtime version, which also references the appropriate version of Spark.

Depending on your version of AWS Databricks, set this property as follows:

  • AWS Databricks 7.3: 7.3.x-scala2.12

    Info

    NOTE: Except for the above version, AWS Databricks 7.x is not supported.


  • AWS Databricks 5.5 LTS: 5.5.x-scala2.11

Do not use other values.

databricks.minWorkers | Initial number of Worker nodes in the cluster, and also the minimum number of Worker nodes to which the cluster can scale down during auto-scale-down.

Minimum value: 1

Increasing this value can increase compute costs.

databricks.maxWorkers | Maximum number of Worker nodes the cluster can create during auto-scaling.

Minimum value: Not less than databricks.minWorkers.

Increasing this value can increase compute costs.

databricks.poolId

If you have enabled instance pooling in AWS Databricks, you can specify the pool identifier here.


Info

NOTE: If both poolId and poolName are specified, poolId is used first. If that fails to find a matching identifier, then the poolName value is checked.


databricks.poolName | If you have enabled instance pooling in AWS Databricks, you can specify the pool name here.

See previous.

Tip

Tip: If you specify only a poolName value, then instance pools with the same poolName across multiple Databricks workspaces can be used when you create a new cluster.


databricks.driverNodeType

Type of node to use for the AWS Databricks Driver. There is only one Driver node per cluster.

Default: i3.xlarge

For more information, see the sizing guide for Databricks.

Info

NOTE: This property is unused when instance pooling is enabled. For more information, see Configure instance pooling below.


databricks.driverPoolId | If you have enabled instance pooling in AWS Databricks, you can specify the driver node pool identifier here. For more information, see Configure instance pooling below.


Info

NOTE: If both driverPoolId and driverPoolName are specified, driverPoolId is used first. If that fails to find a matching identifier, then the driverPoolName value is checked.


databricks.driverPoolName | If you have enabled instance pooling in AWS Databricks, you can specify the driver node pool name here. For more information, see Configure instance pooling below.

See previous.

Tip

Tip: If you specify only a driverPoolName value, then instance pools with the same driverPoolName across multiple Databricks workspaces can be used when you create a new cluster.


databricks.logsDestination | DBFS location to which cluster logs are sent every 5 minutes. | Leave this value as /trifacta/logs.
databricks.enableAutotermination | Set to true to enable auto-termination of a user cluster after N minutes of idle time, where N is the value of the autoterminationMinutes property. | Unless otherwise required, leave this value as true.
databricks.clusterStatePollerDelayInSeconds | Number of seconds to wait between polls for AWS Databricks cluster status while a cluster is starting up.
databricks.clusterStartupWaitTimeInMinutes | Maximum time in minutes to wait for a cluster to reach the Running state before aborting and failing an AWS Databricks job. | Default: 60
databricks.clusterLogSyncWaitTimeInMinutes

Maximum time in minutes to wait for a Cluster to complete syncing its logs to DBFS before giving up on pulling the cluster logs to the

D s node
.

Set this to 0 to disable cluster log pulls.
databricks.clusterLogSyncPollerDelayInSeconds | Number of seconds to wait between polls for a Databricks cluster to sync its logs to DBFS after job completion. | Default: 20
databricks.autoterminationMinutes | Idle time in minutes before a user cluster auto-terminates. | Do not set this value to less than the cluster startup wait time value.
databricks.maxAPICallRetries | Maximum number of retries to perform when a request fails with a 429 error code response. | Default: 5. For more information, see the Configure Maximum Retries for REST API section below.
databricks.enableLocalDiskEncryption

Enables encryption of data, such as shuffle data, that is temporarily stored on the cluster's local disk.

-
databricks.patCacheTTLInMinutes | Lifespan in minutes for the Databricks personal access token in-memory cache. | Default: 10
spark.useVendorSparkLibraries

When true, the platform bypasses shipping its installed Spark libraries to the cluster with each job's execution.


Info

NOTE: This setting is ignored. The vendor Spark libraries are always used for AWS Databricks.
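The databricks.awsAttributes.* settings above roughly correspond to the aws_attributes block of a Databricks cluster specification. The following sketch shows one plausible mapping; the helper function is illustrative, not the platform's actual code, and the defaults mirror the table above.

```python
# Hedged sketch: map the databricks.awsAttributes.* settings onto the
# aws_attributes block of a Databricks cluster spec. Defaults follow the
# table above; a missing availabilityZone or instanceProfileArn is left as
# None rather than guessed.
def to_aws_attributes(settings):
    return {
        "first_on_demand": settings.get("firstOnDemandInstances", 1),
        "availability": settings.get("availability", "SPOT_WITH_FALLBACK"),
        "zone_id": settings.get("availabilityZone"),
        "spot_bid_price_percent": settings.get("spotBidPricePercent", 100),
        "instance_profile_arn": settings.get("instanceProfileArn"),
    }
```

Passing an empty settings dictionary yields the documented defaults, which makes the table's default column easy to verify.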


Configure Databricks Job Management

...

  1. D s config
    methodws
  2. Locate the following property and set it to one of the values listed below:

    Code Block
    Databricks Job Management


    Property Value | Description
    Never Delete | (default) Job definitions are never deleted from the AWS Databricks cluster.
    Always Delete | The AWS Databricks job definition is deleted during the clean-up phase, which occurs after a job completes.
    Delete Successful Only | When a job completes successfully, the AWS Databricks job definition is deleted during the clean-up phase. Failed or canceled jobs are not deleted, which allows you to debug as needed.
    Skip Job Creation

    For jobs that are to be executed only one time, the

    D s platform
    can be configured to use a different mechanism for submitting the job. When this option is enabled, the
    D s platform
    submits jobs using the run-submit API instead of the run-now API. The run-submit API does not create an AWS Databricks job definition. Therefore, the submitted job does not count toward the enforced job limit.

    Default | Inherits the default system-wide setting.


  3. When this feature is enabled, the platform falls back to the runs/submit API when the job limit for the Databricks workspace has been reached:

    Code Block
    Databricks Job Runs Submit Fallback


  4. Save your changes and restart the platform.
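The run-now versus runs/submit distinction above can be sketched as follows. The endpoint paths are the public Databricks Jobs API 2.0 paths; the helper function and token handling are illustrative, not the platform's implementation.

```python
# Illustrative sketch of building a request against either Jobs API
# endpoint. Only the request is constructed here; nothing is sent.
import json
import urllib.request

def build_run_request(service_url, token, payload, one_time=False):
    """Build a POST request against runs/submit (no job definition is
    created, so the run does not count toward the job limit) or run-now."""
    endpoint = "runs/submit" if one_time else "run-now"
    return urllib.request.Request(
        f"{service_url}/api/2.0/jobs/{endpoint}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

With one_time=True the request targets runs/submit, which is the behavior Skip Job Creation selects; otherwise run-now is used against a previously created job definition.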

...

  1. D s config
  2. In the 
    D s webapp
    , select User menu > Admin console > Admin settings.
  3. Locate the following settings and set their values accordingly:

    Setting | Description

    databricks.userClusterThrottling.enabled

    When set to true, job throttling per Databricks cluster is enabled. Specify the following settings.

    databricks.userClusterthrottling.maxTokensAllottedPerUserCluster

    Set this value to the maximum number of concurrent jobs that can run on one user cluster. Default value is 20.

    databricks.userClusterthrottling.tokenExpiryInMinutes

    The time in minutes after which tokens reserved by a job are revoked, regardless of job status. If a job is still in progress when this limit is reached, its Databricks token is expired and revoked under the assumption that it is stale. Default value is 120 (2 hours).

    Tip

    Tip: Set this value to 0 to prevent token expiration. However, this setting is not recommended, as jobs can remain in the queue indefinitely.


    jobMonitoring.queuedJobTimeoutMinutes | The maximum time in minutes that a job is permitted to remain in the queue for a slot on the Databricks cluster. If this limit is reached, the job is marked as failed.
    batch-job-runner.cleanup.enabled

    When set to true, the Batch Job Runner service is permitted to clean up throttling tokens and job-level personal access tokens.

    Tip

    Tip: Unless you have reason to do otherwise, leave this setting set to true.



  4. Save your changes and restart the platform.
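A minimal model of the throttling behavior these settings describe, assuming a simple in-memory token pool. The class and method names are illustrative, not the platform's internals; the defaults mirror maxTokensAllottedPerUserCluster (20) and tokenExpiryInMinutes (120).

```python
# Illustrative model: at most max_tokens jobs hold a throttling token per
# cluster, and tokens expire after token_expiry_minutes even if the job is
# still running (assumed stale).
import time

class ClusterThrottle:
    def __init__(self, max_tokens=20, token_expiry_minutes=120):
        self.max_tokens = max_tokens
        self.expiry_seconds = token_expiry_minutes * 60
        self.tokens = {}  # job_id -> acquisition timestamp

    def _evict_expired(self, now):
        # Revoke tokens older than the expiry window, irrespective of status.
        self.tokens = {j: t for j, t in self.tokens.items()
                       if now - t < self.expiry_seconds}

    def try_acquire(self, job_id, now=None):
        """Return True if the job may run now; False means it stays queued."""
        now = time.time() if now is None else now
        self._evict_expired(now)
        if len(self.tokens) >= self.max_tokens:
            return False
        self.tokens[job_id] = now
        return True
```

A job that cannot acquire a token waits in the queue; jobMonitoring.queuedJobTimeoutMinutes bounds how long it may wait before being marked as failed.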

...

If you want to change the number of retries, change the value of the databricks.maxAPICallRetries flag.

Value | Description
5

(default) When a request is submitted through the AWS Databricks REST APIs, up to 5 retries can be performed in the case of failures.

  • The waiting period increases exponentially with every retry. For example, the wait is 10 seconds for the first retry, 20 seconds for the second, 40 seconds for the third, and so on.
  • Set this value based on how long you want failed requests to be retried.
0

When an API call fails, the request fails. As the number of concurrent jobs increases, more jobs may fail.

Info

NOTE: This setting is not recommended.


5+ | Increasing this setting above the default value may allow more requests to eventually be processed. However, higher values may consume additional system resources in a high-concurrency environment, and jobs may take longer to run due to the exponentially increasing wait times.
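The retry behavior described above can be sketched as a simple exponential backoff loop. The helper is illustrative; the wait times follow the 10s/20s/40s progression from the table, and the API call itself is a stand-in.

```python
# Illustrative retry loop: up to max_retries additional attempts on HTTP
# 429, with exponentially growing waits (10s, 20s, 40s, ...). The sleep
# function is injectable so the behavior can be tested without waiting.
import time

def call_with_retries(api_call, max_retries=5, base_wait=10, sleep=time.sleep):
    for attempt in range(max_retries + 1):
        status, body = api_call()
        if status != 429:
            return status, body
        if attempt < max_retries:
            sleep(base_wait * (2 ** attempt))  # 10s, then 20s, then 40s, ...
    raise RuntimeError("Request failed: retry limit reached")
```

Setting max_retries=0 reproduces the "fail immediately" row above: the first 429 response raises without any waiting.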

Use

Run Job From Application

...