Delta external table metadata does not match the Catalog Explorer view

Run the MSCK REPAIR TABLE command with the SYNC METADATA clause.

Written by saikumar.divvela

Last published at: December 17th, 2025

Problem

After you create a new table, alter an existing one, or modify columns within certain schemas, you notice that the table and column metadata are not synchronized in Catalog Explorer.


You can fully query your tables and columns, but the column metadata does not appear in Catalog Explorer. You may also notice that column changes (such as renames or additions) are visible in query results but not reflected in the Catalog Explorer view.
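For example, a minimal reproduction, using a hypothetical external table main.sales.orders, might look like the following. The added column is queryable and appears in DESCRIBE TABLE output, yet Catalog Explorer may still show the previous schema.

-- Add a column to a hypothetical external table.
ALTER TABLE main.sales.orders ADD COLUMN order_status STRING;

-- The new column appears in the table's own metadata...
DESCRIBE TABLE main.sales.orders;

-- ...and is fully queryable, but Catalog Explorer may not show it.
SELECT order_status FROM main.sales.orders LIMIT 10;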


Cause

The catalog metadata does not automatically update to reflect the latest table schema because the Apache Spark configuration property spark.databricks.delta.catalog.update.enabled is set to false.


When this property is disabled, metadata changes (such as adding, renaming, or dropping columns) are not automatically propagated to the Databricks Catalog Service.


As a result, schema updates performed on the table are not reflected in Catalog Explorer.
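To confirm the current value on the affected compute, you can run a SET statement from a notebook session; a result of false indicates that automatic catalog updates are disabled.

-- Returns the session's current value of the property.
SET spark.databricks.delta.catalog.update.enabled;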


Solution

For tables where metadata is not properly reflected in the catalog, manually synchronize the metadata by running the following SQL command. This command refreshes the catalog metadata and aligns it with the table’s current structure.

MSCK REPAIR TABLE <catalog>.<schema>.<table-name> SYNC METADATA;
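For example, with the hypothetical table from the Problem section:

-- Resynchronize the catalog metadata for a hypothetical table.
MSCK REPAIR TABLE main.sales.orders SYNC METADATA;

After the command completes, the columns shown in Catalog Explorer should match the DESCRIBE TABLE output.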


To avoid similar issues in the future, enable automatic metadata synchronization by setting the Spark property to true. You can set it for the current notebook session as shown below, or at the compute level so it persists across sessions.

spark.conf.set("spark.databricks.delta.catalog.update.enabled", "true")
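To apply the setting at the compute level instead, add the equivalent key-value entry to the cluster’s Spark config so it takes effect for every session on that compute.

spark.databricks.delta.catalog.update.enabled true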


For details on how to apply Spark configs, refer to the “Spark configuration” section of the Compute configuration reference (AWS | Azure | GCP) documentation.