Problem
You’re working with an all-purpose compute resource that uses dedicated (formerly single user) access mode, and the compute is assigned to a service principal on which you have the Service Principal User role. When you attempt to run an interactive workload using this compute, you receive the following error message.
Single-user check failed: user '<your-user>' attempted to run a command on single-user cluster <cluster-id>, but the single user of this cluster is '<service-principal-application-id>'
Cause
An all-purpose compute resource with dedicated access mode assigned to a service principal can only be used by workflows (jobs, tasks, or pipelines) configured to run as that service principal. It does not support interactive workloads, such as commands you run against the compute from a notebook.
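You can confirm this from the compute’s configuration. The following is a minimal sketch using the Databricks Python SDK; <cluster-id> is the cluster ID from the error message.

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Fetch the cluster configuration; <cluster-id> comes from the error message
cluster = w.clusters.get(cluster_id="<cluster-id>")

# For dedicated (single user) access mode, data_security_mode is SINGLE_USER and
# single_user_name holds the assigned principal. If single_user_name is a service
# principal application ID rather than your username, interactive commands from
# your user fail the single-user check.
print(cluster.data_security_mode)
print(cluster.single_user_name)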
Solution
Important
Databricks no longer supports using an all-purpose compute with dedicated (formerly single user) access mode assigned to a service principal.
Databricks recommends migrating your all-purpose compute to standard (formerly shared) access mode. Alternatively, you can migrate to dedicated access mode assigned to a user or group. The right choice depends on your workload requirements. Refer to the Compute access mode limitations for Unity Catalog (AWS | Azure | GCP) documentation for more information.
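As an illustration, the following sketch switches an existing cluster to standard access mode with the Python SDK. Note that clusters.edit replaces the entire cluster specification, so a real migration must carry over all of your cluster’s settings (autoscaling, tags, Spark configuration, and so on); only a few representative fields are shown here.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import DataSecurityMode

w = WorkspaceClient()

# Read the current configuration so the edit can carry it over
existing = w.clusters.get(cluster_id="<cluster-id>")

# USER_ISOLATION corresponds to standard (formerly shared) access mode.
# single_user_name is intentionally omitted because standard mode has no
# single assigned principal.
w.clusters.edit(
    cluster_id=existing.cluster_id,
    cluster_name=existing.cluster_name,
    spark_version=existing.spark_version,
    node_type_id=existing.node_type_id,
    num_workers=existing.num_workers,
    data_security_mode=DataSecurityMode.USER_ISOLATION,
)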
Note
If you need compute to be available quickly for a job, Databricks recommends using serverless compute instead. For more information, refer to the Run your Databricks job with serverless compute for workflows (AWS | Azure | GCP) documentation.
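As a brief illustration, a job task created without any compute specification runs on serverless compute for workflows, assuming serverless is enabled in your workspace. The job name, task key, and notebook path below are placeholders.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

# With no existing_cluster_id or new_cluster on the task, the job runs on
# serverless compute for workflows (requires serverless to be enabled)
created = w.jobs.create(
    name="<your-job-name>",
    tasks=[
        jobs.Task(
            task_key="<your-task-key-name>",
            notebook_task=jobs.NotebookTask(notebook_path="<your-notebook-path>"),
        )
    ],
)
print(created.job_id)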
When you need to use an all-purpose compute resource assigned to a service principal:
- Create a job workflow with Job details > Run as set to this service principal. (A programmatic sketch follows this list.)
- Use the API or Python SDK to set the task’s compute to the all-purpose compute assigned to the service principal.
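For the first step, you can also set the run-as identity programmatically. The following is a minimal sketch using the Python SDK; the job ID and service principal application ID are placeholders.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

# Set the job's run-as identity to the service principal by its application ID
w.jobs.update(
    job_id="<your-job-id>",
    new_settings=jobs.JobSettings(
        run_as=jobs.JobRunAs(
            service_principal_name="<service-principal-application-id>"
        )
    ),
)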
The following sections provide example code for the second step, using the REST API and the Python SDK respectively.
For more information, refer to the Orchestration using Databricks Jobs (AWS | Azure | GCP) documentation.
API
curl -X POST https://<your-workspace-host>.cloud.databricks.com/api/2.2/jobs/update \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "<your-job-id>",
    "new_settings": {
      "tasks": [
        {
          "task_key": "<your-task-key-name>",
          "existing_cluster_id": "<service-principal-cluster-id>"
        }
      ]
    }
  }'
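Note that jobs/update completely replaces any top-level field you supply in new_settings, including the tasks array, so include all of the job’s tasks in the request, not just the one whose compute you are changing.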
Python SDK
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

# Create a workspace client
w = WorkspaceClient()

# Define the ID of the job to update
job_id = "<your-job-id>"

# Define the updated settings for the job, including the compute assignment
updated_settings = jobs.JobSettings(
    tasks=[
        jobs.Task(
            task_key="<your-task-key-name>",
            existing_cluster_id="<service-principal-cluster-id>",  # The compute assigned to the service principal
        )
    ]
)

# Update the job with the new settings
w.jobs.update(job_id=job_id, new_settings=updated_settings)
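You can then confirm the update took effect, for example:

# Confirm the task now references the service principal's compute
job = w.jobs.get(job_id=job_id)
for task in job.settings.tasks:
    print(task.task_key, task.existing_cluster_id)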