Updated December 15th, 2022 by akash.bhat

Object ownership changes when dropping and recreating tables

Problem: Ownership of SQL objects changes after dropping and recreating them. This can result in job failures due to permission issues. Cause: In Databricks Runtime 7.3 LTS, when jobs are run with table ACLs turned off, any action that drops and recreates tables or views preserves the table ACLs that were set the last time the job was run with table AC...
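If a job drops and recreates a table, one way to avoid permission surprises is to re-assert ownership and grants explicitly after the recreate, rather than relying on what was preserved from a previous run. A minimal sketch assuming a Databricks notebook; the table, principal, and group names below are illustrative placeholders, not taken from the article:

from pyspark.sql import SparkSession

# `spark` is predefined in Databricks notebooks; created here for self-containment.
spark = SparkSession.builder.getOrCreate()

table = "reporting.daily_sales"          # hypothetical table name
owner = "`etl-service@example.com`"      # hypothetical owning principal
readers = "`analysts`"                   # hypothetical group that needs SELECT

# The drop-and-recreate step that can change ownership.
spark.sql(f"DROP TABLE IF EXISTS {table}")
spark.sql(f"CREATE TABLE {table} (sale_date DATE, amount DECIMAL(10, 2)) USING DELTA")

# Re-assert ownership and grants so downstream jobs keep their permissions.
spark.sql(f"ALTER TABLE {table} OWNER TO {owner}")
spark.sql(f"GRANT SELECT ON TABLE {table} TO {readers}")

Running the ALTER TABLE ... OWNER TO and GRANT statements requires sufficient privileges on the securable, so they typically belong in the same job identity that created the table.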

Updated September 12th, 2024 by akash.bhat

Sync fails with [UPGRADE_NOT_SUPPORTED.HIVE_SERDE] Table is not eligible for upgrade from Hive Metastore to Unity Catalog

Problem: While trying to upgrade a table from Hive metastore to Unity Catalog, you encounter the following error. [UPGRADE_NOT_SUPPORTED.HIVE_SERDE] Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason: Hive SerDe table. SQLSTATE: 0AKUC. Cause: The error occurs because the Unity Catalog SYNC command cannot process tables cr...
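Because SYNC only handles eligible source formats, a common workaround for a Hive SerDe table is to copy its data into a new Delta table in Unity Catalog instead of upgrading it in place. A rough sketch with hypothetical catalog, schema, and table names:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

hms_table = "hive_metastore.legacy_db.events_serde"  # hypothetical Hive SerDe table
uc_table = "main.analytics.events"                   # hypothetical Unity Catalog target

# Inspect the source first; a SerDe table shows its SerDe library and a
# non-Delta provider in the extended description.
spark.sql(f"DESCRIBE TABLE EXTENDED {hms_table}").show(truncate=False)

# Instead of SYNC, materialize the data as a managed Delta table in Unity Catalog.
spark.sql(f"CREATE TABLE {uc_table} AS SELECT * FROM {hms_table}")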

Updated September 12th, 2024 by akash.bhat

No way to restore dropped managed volumes

Problem: When performing actions such as moving resources between schemas, you risk being unable to restore an inadvertently dropped managed volume. Note: Data retention policies are distinct from issues around dropped volumes. Cause: Restoring a managed volume requires altering the backend database, which is not permitted. Solution: I...
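Because a dropped managed volume cannot be brought back, it can be worth copying its contents elsewhere before a risky operation such as moving objects between schemas. A minimal sketch using the /Volumes FUSE paths; the catalog, schema, and volume names are placeholders, and the backup volume must already exist since filesystem calls cannot create volumes:

import shutil
from pathlib import Path

# Managed volumes are exposed under /Volumes/<catalog>/<schema>/<volume>.
source_volume = Path("/Volumes/main/old_schema/raw_files")     # hypothetical source volume
backup_dir = Path("/Volumes/main/ops/backups/raw_files_copy")  # folder inside an existing backup volume

# Copy the files out before dropping or relocating anything, since a dropped
# managed volume cannot be restored afterwards.
shutil.copytree(source_volume, backup_dir, dirs_exist_ok=True)
print(f"Backed up {source_volume} -> {backup_dir}")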

Updated July 8th, 2024 by akash.bhat

Too many execution contexts are open right now

Problem: You encounter the following error message when you try to attach a notebook to a cluster or when a job fails. Run result unavailable: job failed with error message Too many execution contexts are open right now.(Limit set currently to 150) Cause: Databricks creates an execution context when you attach a notebook to a cluster. The execution cont...
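When a cluster is driven through the REST API rather than the notebook UI, one way to stay under the limit is to destroy each execution context as soon as it is no longer needed. A rough sketch against the legacy 1.2 command execution endpoints; the host, token, and cluster ID are placeholders, and the exact routes and response fields should be verified against the API reference for your workspace:

import requests

HOST = "https://<workspace-url>"    # placeholder workspace URL
TOKEN = "<personal-access-token>"   # placeholder token
CLUSTER_ID = "<cluster-id>"         # placeholder cluster ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Creating an execution context counts toward the per-cluster limit.
ctx = requests.post(
    f"{HOST}/api/1.2/contexts/create",
    headers=HEADERS,
    json={"clusterId": CLUSTER_ID, "language": "python"},
).json()
context_id = ctx["id"]

try:
    pass  # ... run commands against the context here ...
finally:
    # Destroy the context when finished so idle contexts do not accumulate
    # toward the "Too many execution contexts are open right now" limit.
    requests.post(
        f"{HOST}/api/1.2/contexts/destroy",
        headers=HEADERS,
        json={"clusterId": CLUSTER_ID, "contextId": context_id},
    )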
