Unpin cluster configurations using the API
Normally, cluster configurations are automatically deleted 30 days after the cluster was last terminated. If you want to keep specific cluster configurations, you can pin them. Up to 100 clusters can be pinned. If you no longer need a pinned cluster, you can unpin it. If you have pinned 100 clusters, you must unpin a cluster before you can pin anot...
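A minimal sketch of the unpin call using the Clusters API (`POST /api/2.0/clusters/unpin`), assuming a workspace URL and a personal access token; the `host`, `token`, and `cluster_id` values are placeholders:

```python
import requests  # third-party HTTP client


def build_unpin_request(host: str, cluster_id: str):
    """Build the URL and JSON body for the Clusters API unpin call."""
    return f"{host.rstrip('/')}/api/2.0/clusters/unpin", {"cluster_id": cluster_id}


def unpin_cluster(host: str, token: str, cluster_id: str) -> None:
    """Unpin one cluster so its configuration is again eligible for auto-deletion."""
    url, body = build_unpin_request(host, cluster_id)
    resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
    resp.raise_for_status()


# Example (placeholder values):
# unpin_cluster("https://<workspace-url>", "<personal-access-token>", "1234-567890-abcd123")
```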
2 min reading time

Generate a list of all workspace admins
Workspace administrators have full privileges to manage a workspace. This includes adding and removing users, as well as managing all of the data resources (jobs, libraries, notebooks, repos, etc.) in the workspace. Info You must be a workspace administrator to perform the steps detailed in this article. If you are a workspace admin, you can view o...
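One way to list workspace admins programmatically is to query the built-in `admins` group through the SCIM API (`GET /api/2.0/preview/scim/v2/Groups`); the exact filter syntax and response shape below are assumptions based on the SCIM 2.0 spec, so treat this as a sketch:

```python
import requests  # third-party HTTP client


def extract_member_names(scim_response: dict) -> list:
    """Pull member display names out of a SCIM Groups response."""
    names = []
    for group in scim_response.get("Resources", []):
        for member in group.get("members", []):
            names.append(member.get("display", member.get("value")))
    return names


def list_admins(host: str, token: str) -> list:
    """Return the members of the workspace 'admins' group via the SCIM API."""
    resp = requests.get(
        f"{host.rstrip('/')}/api/2.0/preview/scim/v2/Groups",
        headers={"Authorization": f"Bearer {token}"},
        params={"filter": 'displayName eq "admins"'},
    )
    resp.raise_for_status()
    return extract_member_names(resp.json())
```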
1 min reading time

Bulk update workflow permissions for a group
This article explains how you can use the Databricks Jobs API to grant a single group permission to access all the jobs in your workspace. Info You must be a workspace administrator to perform the steps detailed in this article. Instructions Use the following sample code to give a specific group of users permission for all the jobs in your worksp...
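A rough sketch of the bulk-grant loop: list jobs with the Jobs API, then `PATCH` each job's ACL through the Permissions API so the new group entry is added without replacing existing entries. The pagination parameters and the default permission level here are assumptions, not the article's exact code:

```python
import requests  # third-party HTTP client


def job_acl_patch(group: str, level: str) -> dict:
    """PATCH body adding one group ACL entry without replacing existing entries."""
    return {"access_control_list": [{"group_name": group, "permission_level": level}]}


def grant_group_on_all_jobs(host: str, token: str, group: str,
                            level: str = "CAN_MANAGE_RUN") -> None:
    """Give one group the same permission level on every job in the workspace."""
    base = host.rstrip("/")
    headers = {"Authorization": f"Bearer {token}"}
    offset = 0
    while True:
        page = requests.get(f"{base}/api/2.1/jobs/list",
                            headers=headers, params={"offset": offset, "limit": 25})
        page.raise_for_status()
        jobs = page.json().get("jobs", [])
        if not jobs:
            break
        for job in jobs:
            requests.patch(
                f"{base}/api/2.0/permissions/jobs/{job['job_id']}",
                headers=headers,
                json=job_acl_patch(group, level),
            ).raise_for_status()
        offset += len(jobs)
```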
0 min reading time

Stop all scheduled jobs
Under normal conditions, jobs run periodically and auto-terminate once their task is completed. In some cases, you may want to stop all scheduled jobs. For more information on scheduled jobs, please review the Create, run, and manage Databricks Jobs (AWS | Azure | GCP) documentation. This article provides sample code that you can use to stop all of ...
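One common approach, sketched here under the assumption that "stop" means pausing each job's cron schedule: copy the job's existing schedule, set `pause_status` to `PAUSED`, and send it back through `POST /api/2.1/jobs/update`. This is not the article's exact code:

```python
import requests  # third-party HTTP client


def paused_schedule(settings: dict):
    """Copy a job's schedule with pause_status forced to PAUSED; None if no schedule."""
    schedule = settings.get("schedule")
    if schedule is None:
        return None
    return {**schedule, "pause_status": "PAUSED"}


def pause_all_scheduled_jobs(host: str, token: str) -> None:
    """Pause the cron schedule of every job that has one."""
    base = host.rstrip("/")
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{base}/api/2.1/jobs/list", headers=headers)
    resp.raise_for_status()
    for job in resp.json().get("jobs", []):
        schedule = paused_schedule(job.get("settings", {}))
        if schedule is None:
            continue  # job has no cron schedule to pause
        requests.post(
            f"{base}/api/2.1/jobs/update",
            headers=headers,
            json={"job_id": job["job_id"], "new_settings": {"schedule": schedule}},
        ).raise_for_status()
```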
0 min reading time

Cluster fails with Fatal uncaught exception error. Failed to bind.
Problem Clusters running Databricks Runtime 11.3 LTS or above terminate with a Failed to bind error message. Fatal uncaught exception. Terminating driver. java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:6062 Cause This can happen if multiple processes attempt to use the same port. Databricks Runtime 11.3 LTS and above use the IPython kernel (...
0 min reading time

Pin cluster configurations using the API
Normally, cluster configurations are automatically deleted 30 days after the cluster was last terminated. If you want to keep specific cluster configurations, you can pin them. Up to 100 clusters can be pinned. Pinned clusters are not automatically deleted; however, they can be manually deleted. Info You must be a Databricks administrator to pin a cl...
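A minimal sketch of the pin call using the Clusters API (`POST /api/2.0/clusters/pin`); `host`, `token`, and `cluster_id` are placeholder values, not the article's exact code:

```python
import requests  # third-party HTTP client


def build_pin_request(host: str, cluster_id: str):
    """Build the URL and JSON body for the Clusters API pin call."""
    return f"{host.rstrip('/')}/api/2.0/clusters/pin", {"cluster_id": cluster_id}


def pin_cluster(host: str, token: str, cluster_id: str) -> None:
    """Pin a cluster so its configuration survives the 30-day cleanup."""
    url, body = build_pin_request(host, cluster_id)
    resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
    resp.raise_for_status()
```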
2 min reading time

INVALID_PARAMETER_VALUE.LOCATION_OVERLAP: overlaps with managed storage error
Problem You are using dbutils to access an external location (AWS | Azure | GCP) that is mounted on managed tables in a shared cluster. When you try to list the path to the location, it fails with an INVALID_PARAMETER_VALUE.LOCATION_OVERLAP error message. The error says the given path overlaps with managed storage. dbutils.fs.ls("<storage-blob&g...
0 min reading time

Update the Databricks SQL warehouse owner
Whoever creates a SQL warehouse is defined as the owner by default. There may be times when you want to transfer ownership of the SQL warehouse to another user. This can be done by transferring ownership of Databricks SQL objects (AWS | Azure | GCP) via the UI or the Permissions REST API. Instructions Info The service principal cannot be changed to ...
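A sketch of the REST route for the ownership transfer, assuming the Permissions API accepts an `IS_OWNER` entry for SQL warehouses at the `sql/warehouses` object path (older workspaces may use a different path, and the UI route described in the article works as well):

```python
import requests  # third-party HTTP client


def owner_acl(new_owner: str) -> dict:
    """Request body assigning IS_OWNER to a user (must be a user, not a service principal)."""
    return {"access_control_list": [
        {"user_name": new_owner, "permission_level": "IS_OWNER"},
    ]}


def transfer_warehouse_owner(host: str, token: str,
                             warehouse_id: str, new_owner: str) -> None:
    """Transfer SQL warehouse ownership via the Permissions API (assumed object path)."""
    resp = requests.patch(
        f"{host.rstrip('/')}/api/2.0/permissions/sql/warehouses/{warehouse_id}",
        headers={"Authorization": f"Bearer {token}"},
        json=owner_acl(new_owner),
    )
    resp.raise_for_status()
```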
2 min reading time