...
This section provides high-level information on how to configure the platform to integrate with AWS Databricks.
AWS Databricks is a unified data analytics platform that has been optimized for use on the AWS infrastructure.
- For more information, see https://databricks.com/aws.
- For documentation on AWS Databricks, see https://databricks.com/documentation.
Additional Databricks features supported by the platform:
- Credential passthrough (AWS Databricks only): https://docs.databricks.com/security/credential-passthrough/iam-passthrough.html
- Table access control: https://docs.databricks.com/security/access-control/table-acls/object-privileges.html
Prerequisites
- The platform must be installed in a customer-managed AWS environment.
- The base storage layer must be set to S3. For more information, see Set Base Storage Layer.
- AWS Secrets Manager is required for AWS Databricks use. For more information, see Configure for AWS Secrets Manager.
...
Through the Workspace Settings Page, locate the following parameter, which enables Photon for smaller job execution. Set it to Enabled:
Code Block
Photon execution
- You do not need to save to enable the above configuration change.
Locate the following parameters. Set them to the values listed below, which enable the AWS Databricks (small to extra-large jobs) running environment:
Code Block
"webapp.runInDatabricks": true,
"webapp.runWithSparkSubmit": false,
"webapp.runInDataflow": false,
- Do not save your changes until you have completed the following configuration section.
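The three webapp.run* flags above select mutually exclusive run modes, so it can help to check them together before saving. A minimal sketch (the helper name and config fragment are illustrative, not part of the product):

```python
# Expected values for the AWS Databricks running environment.
REQUIRED_FLAGS = {
    "webapp.runInDatabricks": True,
    "webapp.runWithSparkSubmit": False,
    "webapp.runInDataflow": False,
}

def check_run_flags(conf: dict) -> list:
    """Return the flags whose values differ from the Databricks settings."""
    return [key for key, expected in REQUIRED_FLAGS.items()
            if conf.get(key) is not expected]

# Example: a config fragment where one flag is still set for Spark-submit.
conf = {
    "webapp.runInDatabricks": True,
    "webapp.runWithSparkSubmit": True,
    "webapp.runInDataflow": False,
}
print(check_run_flags(conf))  # -> ['webapp.runWithSparkSubmit']
```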
Configure
Configure cluster mode
When a user submits a job, the platform creates a job cluster in AWS Databricks to execute it.
For more information on job clusters, see https://docs.databricks.com/clusters/configure.html.
The job cluster automatically terminates after the job is completed. A new cluster is automatically created when the user next requests access to AWS Databricks.
Cluster Mode | Description
---|---
USER | When a user submits a job, it runs on the cluster assigned to that user. Reset to JOB mode to run jobs in AWS Databricks.
JOB | When a user submits a job, a new job cluster is created to execute it and terminates when the job completes.
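The job-cluster lifecycle described above implies a polling loop: the platform waits for the cluster to reach the Running state before submitting work (see the clusterStatePollerDelayInSeconds and clusterStartupWaitTimeInMinutes parameters below). A minimal sketch of such a loop; fetch_state is a stand-in for a call that wraps the Databricks GET /api/2.0/clusters/get endpoint:

```python
import time

# States from which a cluster will never reach RUNNING.
TERMINAL_STATES = {"TERMINATED", "ERROR", "UNKNOWN"}

def wait_for_running(fetch_state, poll_delay=10, max_polls=360):
    """Poll the cluster state until it is RUNNING, reaches a terminal
    state, or the poll budget is exhausted."""
    for _ in range(max_polls):
        state = fetch_state()
        if state == "RUNNING":
            return True
        if state in TERMINAL_STATES:
            return False
        time.sleep(poll_delay)
    return False

# Example with a stubbed state sequence instead of a live API call.
states = iter(["PENDING", "PENDING", "RUNNING"])
print(wait_for_running(lambda: next(states), poll_delay=0))  # -> True
```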
...
Optionally, you can configure the platform to apply a Databricks cluster policy to the job clusters that it creates. For example:
Code Block
{
  "autoscale.max_workers": { "type": "fixed", "value": 3, "hidden": true },
  "autoscale.min_workers": { "type": "fixed", "value": 1, "hidden": true },
  "aws_attributes.instance_profile_arn": { "type": "fixed", "value": "arn:aws:iam::9999999999:instance-profile/SOME_POLICY", "hidden": false },
  "enable_local_disk_encryption": { "type": "fixed", "value": false },
  "instance_pool_id": { "type": "fixed", "value": "SOME_POOL", "hidden": true },
  "driver_instance_pool_id": { "type": "fixed", "value": "SOME_POOL", "hidden": true },
  "autotermination_minutes": { "type": "fixed", "value": 10, "hidden": true }
}
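For illustration, a policy's "fixed" rules pin attributes to the stated values regardless of what a cluster request asks for, and "hidden" controls only UI visibility. A simplified sketch of that semantics (real Databricks policies support additional rule types, such as range and allowlist):

```python
def apply_policy(policy: dict, requested: dict) -> dict:
    """Apply Databricks cluster-policy 'fixed' rules to a requested
    cluster spec: a fixed attribute always takes the policy's value."""
    spec = dict(requested)
    for path, rule in policy.items():
        if rule.get("type") == "fixed":
            spec[path] = rule["value"]
    return spec

policy = {
    "autoscale.max_workers": {"type": "fixed", "value": 3, "hidden": True},
    "autotermination_minutes": {"type": "fixed", "value": 10, "hidden": True},
}
# A request for 50 workers is clamped by the policy.
spec = apply_policy(policy, {"autoscale.max_workers": 50})
print(spec)  # -> {'autoscale.max_workers': 3, 'autotermination_minutes': 10}
```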
Configure Instance Profiles in AWS Databricks
EC2 instances can be configured with permissions to access AWS resources, such as S3, by attaching an IAM instance profile. Similarly, instance profiles can be attached to EC2 instances for use with AWS Databricks clusters.
Info |
---|
NOTE: You must register the instance profiles in the Databricks workspace; otherwise, your Databricks clusters reject the instance profile ARNs and display an error. For more information, see https://docs.databricks.com/administration-guide/cloud-configurations/aws/instance-profiles.html#step-5-add-the-instance-profile-to-databricks. |
To configure the instance profile for AWS Databricks, you must provide an IAM instance profile ARN in the databricks.awsAttributes.instanceProfileArn parameter.
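Registering an instance profile with a workspace can also be done programmatically through the Databricks REST API (POST /api/2.0/instance-profiles/add). A minimal sketch using only the standard library; the workspace URL, token, and ARN below are placeholders:

```python
import json
import urllib.request

def add_instance_profile_request(workspace_url: str, token: str, arn: str):
    """Build the request that registers an instance profile with a
    Databricks workspace (POST /api/2.0/instance-profiles/add)."""
    body = json.dumps({"instance_profile_arn": arn}).encode("utf-8")
    return urllib.request.Request(
        url=f"{workspace_url}/api/2.0/instance-profiles/add",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical values for illustration only.
req = add_instance_profile_request(
    "https://example.cloud.databricks.com",
    "dapiXXXX",
    "arn:aws:iam::123456789012:instance-profile/my-profile",
)
# urllib.request.urlopen(req) would submit it to the workspace.
print(req.full_url)
```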
Info |
---|
NOTE: For AWS Databricks, the instance profile permissions that apply depend on the aws.credentialProvider setting: |
aws.credentialProvider | AWS Databricks permissions
---|---
instance | Permissions are provided through the IAM instance profile attached to the cluster nodes.
temporary | Permissions are provided through temporary credentials generated for a per-user IAM role.
default | n/a
Info |
---|
NOTE: If the
|
For more information, see Configure for AWS Authentication.
Configure instance pooling
...
Instance pooling for worker nodes
Prerequisites:
- All cluster nodes used by the platform are taken from the pool. If the pool has an insufficient number of nodes, cluster creation fails.
- Each user must have access to the pool and must have at least the ATTACH_TO permission.
- Each user must have a personal access token from the same AWS Databricks workspace. See Configure personal access token below.
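Because cluster creation fails outright when the pool cannot supply enough nodes, it can be useful to check pool headroom before launching a job. A simplified sketch of that capacity check (the argument names are illustrative, not the exact shape of the Databricks instance-pools API response):

```python
def pool_can_fit(max_capacity, used_count, nodes_needed):
    """Return True if an instance pool with the given capacity cap and
    current usage can supply all nodes for a new cluster. If it cannot,
    Databricks fails cluster creation rather than falling back to
    on-demand instances."""
    if max_capacity is None:  # pool configured without a capacity cap
        return True
    return used_count + nodes_needed <= max_capacity

# Example: a 10-node pool with 8 nodes in use cannot fit a 3-node cluster.
print(pool_can_fit(10, 8, 3))  # -> False
print(pool_can_fit(10, 8, 2))  # -> True
```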
...
Following is the list of parameters that can be reviewed or modified based on your requirements:
Optional Parameters
Parameter | Description | Value
---|---|---
databricks.awsAttributes.firstOnDemandInstances | Number of initial cluster nodes to be placed on on-demand instances. The remainder are placed according to the availability setting. | Default: 1
databricks.awsAttributes.availability | Availability type used for all subsequent nodes past the firstOnDemandInstances. | Default: SPOT_WITH_FALLBACK
databricks.awsAttributes.availabilityZone | Identifier for the availability zone/datacenter in which the cluster resides. The provided availability zone must be in the same region as the Databricks deployment. |
databricks.awsAttributes.spotBidPricePercent | The maximum price for AWS spot instances, as a percentage of the corresponding instance type's on-demand price. When spot instances are requested for this cluster, only spot instances whose maximum price percentage matches this field are considered. | Default: 100
databricks.awsAttributes.ebsVolume | The type of EBS volumes that are launched with this cluster. | Default: None
databricks.awsAttributes.instanceProfileArn | EC2 instance profile ARN for the cluster nodes. This is only used when the AWS credential provider is set to temporary or instance. The instance profile must have previously been added to the Databricks environment by an account administrator. | For more information, see Configure for AWS Authentication.
databricks.clusterMode | Determines the cluster mode for running a Databricks job. | Default: JOB
feature.parameterization.matchLimitOnSampling.databricksSpark | Maximum number of parameterized source files that are permitted for matching in a single dataset with parameters. | Default: 0
databricks.workerNodeType | Type of node to use for the AWS Databricks workers/executors. There are one or more worker nodes per cluster. | Default:
databricks.sparkVersion | AWS Databricks runtime version, which also references the appropriate version of Spark. | Set this property according to your version of AWS Databricks. Do not use other values.
databricks.minWorkers | Initial number of worker nodes in the cluster, and also the minimum number of worker nodes that the cluster can scale down to during auto-scale-down. | Minimum value: Increasing this value can increase compute costs.
databricks.maxWorkers | Maximum number of worker nodes the cluster can create during auto-scaling. | Minimum value: not less than databricks.minWorkers. Increasing this value can increase compute costs.
databricks.poolId | If you have enabled instance pooling in AWS Databricks, you can specify the pool identifier here. |
databricks.poolName | If you have enabled instance pooling in AWS Databricks, you can specify the pool name here. | See previous.
databricks.driverNodeType | Type of node to use for the AWS Databricks driver. There is only one driver node per cluster. | Default: For more information, see the sizing guide for Databricks.
databricks.driverPoolId | If you have enabled instance pooling in AWS Databricks, you can specify the driver node pool identifier here. For more information, see Configure instance pooling below. |
databricks.driverPoolName | If you have enabled instance pooling in AWS Databricks, you can specify the driver node pool name here. For more information, see Configure instance pooling below. | See previous.
databricks.logsDestination | DBFS location to which cluster logs are sent every five minutes. | Leave this value as /trifacta/logs.
databricks.enableAutotermination | Set to true to enable auto-termination of a user cluster after N minutes of idle time, where N is the value of the autoterminationMinutes property. | Unless otherwise required, leave this value as true.
databricks.clusterStatePollerDelayInSeconds | Number of seconds to wait between polls for AWS Databricks cluster status when a cluster is starting up. |
databricks.clusterStartupWaitTimeInMinutes | Maximum time in minutes to wait for a cluster to reach the Running state before aborting and failing an AWS Databricks job. | Default: 60
databricks.clusterLogSyncWaitTimeInMinutes | Maximum time in minutes to wait for a cluster to complete syncing its logs to DBFS before giving up on pulling the cluster logs to the platform. | Set this to 0 to disable cluster log pulls.
databricks.clusterLogSyncPollerDelayInSeconds | Number of seconds to wait between polls for a Databricks cluster to sync its logs to DBFS after job completion. | Default: 20
databricks.autoterminationMinutes | Idle time in minutes before a user cluster auto-terminates. | Do not set this value to less than the cluster startup wait time value.
databricks.maxAPICallRetries | Maximum number of retries to perform in case of a 429 error code response. | Default: 5. For more information, see the Configure maximum retries for REST API section below.
databricks.enableLocalDiskEncryption | Enables encryption of data, such as shuffle data, that is temporarily stored on the cluster's local disk. | -
databricks.patCacheTTLInMinutes | Lifespan in minutes for the Databricks personal access token in-memory cache. | Default: 10
spark.useVendorSparkLibraries | When set to true, the platform uses the Spark libraries provided by the Databricks cluster. |
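Several of these parameters constrain one another (for example, minWorkers versus maxWorkers, and autoterminationMinutes versus the cluster startup wait time). A minimal sketch that cross-checks a configuration fragment; the validation rules shown are only the ones stated in the table above:

```python
def validate_databricks_settings(conf: dict) -> list:
    """Cross-check Databricks parameters that interact. Returns a list
    of problem descriptions (empty if the settings are consistent)."""
    problems = []
    if conf["databricks.minWorkers"] > conf["databricks.maxWorkers"]:
        problems.append("minWorkers must not exceed maxWorkers")
    if (conf["databricks.enableAutotermination"]
            and conf["databricks.autoterminationMinutes"]
                < conf["databricks.clusterStartupWaitTimeInMinutes"]):
        problems.append("autoterminationMinutes should not be less than "
                        "clusterStartupWaitTimeInMinutes")
    return problems

# Example fragment that violates both constraints.
conf = {
    "databricks.minWorkers": 4,
    "databricks.maxWorkers": 2,
    "databricks.enableAutotermination": True,
    "databricks.autoterminationMinutes": 10,
    "databricks.clusterStartupWaitTimeInMinutes": 60,
}
print(validate_databricks_settings(conf))
```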
Configure Databricks Job Management
...
Optionally, you can provide a shared cluster to the platform.
Prerequisites:
Info |
---|
NOTE: Any shared cluster must be maintained by the customer. |
...
Configure AWS Databricks workspace overrides
A single AWS Databricks account can have access to multiple Databricks workspaces. You can create more than one workspace by using the Account API if your account is on the E2 version of the platform or on a custom plan that allows multiple workspaces per account.
For more information, see https://docs.databricks.com/administration-guide/account-api/new-workspace.html.
Each workspace has a unique deployment name associated with it, which defines the workspace URL. For example: https://<deployment-name>.cloud.databricks.com.
Info | |
---|---|
NOTE: For more information, see Databricks Settings Page and the Configure Platform section above. |
...
Individual users can specify the name of the cluster that they are permitted to use to access Databricks Tables. This cluster can also be shared among users. For more information, see Databricks Settings Page.
Configure maximum retries for REST API
There is a limit of 30 requests per second per workspace on the Databricks REST APIs. If this limit is reached, an HTTP status code 429 error is returned, indicating that rate limiting is being applied by the server. By default, the platform retries the request up to 5 times and then fails the job if the request is not accepted. If you want to change the number of retries, change the value of the databricks.maxAPICallRetries flag.
Value | Description
---|---
5 | (Default) When a request is submitted through the AWS Databricks REST APIs, up to 5 retries are performed if the request is rejected with a 429 response.
0 | When an API call fails, the request fails. As the number of concurrent jobs increases, more jobs may fail.
5+ | Increasing this setting above the default value may result in more requests eventually getting processed. However, increasing the value may consume additional system resources in a high-concurrency environment, and jobs might take longer to run due to exponentially increasing waiting times.
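The retry behavior described above amounts to retrying on HTTP 429 with exponentially increasing waits. A minimal sketch, assuming send_request is a stand-in for the actual REST call and returns an HTTP status code:

```python
import time

def call_with_retries(send_request, max_retries=5, base_delay=1.0):
    """Retry a REST call on HTTP 429 responses with exponential backoff,
    mirroring the databricks.maxAPICallRetries behavior."""
    for attempt in range(max_retries + 1):
        status = send_request()
        if status != 429:
            return status
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("Rate limited: request not accepted after retries")

# Example with a stub that is throttled twice before succeeding.
responses = iter([429, 429, 200])
print(call_with_retries(lambda: next(responses), base_delay=0))  # -> 200
```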
...