Problem
While working with a Databricks Runtime version below 16.4 LTS, you notice data skipping is not working as expected on your timestamp columns in Delta Lake tables.
Despite adding the timestamp column to the table's data skipping statistics columns and recomputing statistics, your Databricks SQL queries filtering on this column still result in full table scans, with the files pruned metric showing 0.
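The following is a minimal sketch of the setup described above, assuming a hypothetical table named events with a timestamp column event_time. The exact commands you ran may differ.

# Add the timestamp column to the table's data skipping statistics columns.
spark.sql("""
    ALTER TABLE events
    SET TBLPROPERTIES ('delta.dataSkippingStatsColumns' = 'event_time')
""")

# Recompute per-file statistics so existing files include the new column.
spark.sql("ANALYZE TABLE events COMPUTE DELTA STATISTICS")

# On runtimes below 16.4 LTS, a query filtering on the column still scans
# every file, and the files pruned metric shows 0.
spark.sql("""
    SELECT count(*)
    FROM events
    WHERE event_time >= TIMESTAMP '2024-01-01 00:00:00'
""").show()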
Cause
In Databricks Runtime versions below 16.4 LTS, Delta Lake does not apply metadata optimizations to string or timestamp data types by default. Statistics collection for these data types is truncated to the first 32 characters, so the recorded minimum and maximum values may not be accurate. As a result, the optimizer skips string and timestamp columns when applying metadata-based query optimizations, leading to full table scans.
For more information, refer to the Data skipping for Delta Lake (AWS | Azure | GCP) documentation.
For timestamp columns specifically, the issue arises from this same truncation: because the optimizer cannot rely on the collected statistics, data skipping is not applied, resulting in less-than-optimal query performance.
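If you want to see the statistics the optimizer consults, you can read the add actions from the Delta transaction log, provided you have direct access to the table's storage path. The sketch below uses a hypothetical path; the stats field is a JSON string containing each data file's minValues and maxValues, with string-typed values truncated to 32 characters.

from pyspark.sql import functions as F

# Read the commit files and keep only the add actions (one per data file).
log = (
    spark.read.json("dbfs:/path/to/events/_delta_log/*.json")
    .where(F.col("add").isNotNull())
    .select(F.col("add.path").alias("file"), F.col("add.stats").alias("stats"))
)

log.show(truncate=False)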
Solution
Upgrade your Databricks Runtime version to 16.4 LTS or above.
If you are not able to upgrade, enable the following Apache Spark configuration to allow metadata query optimizations for timestamp data types. You can set the config either during compute creation or from a notebook.
During compute creation
Add the following Spark setting under the compute’s advanced options.
spark.databricks.delta.optimizeMetadataQuery.enableTimestampDataType true
For details on how to apply Spark configs, refer to the “Spark configuration” section of the Compute configuration reference (AWS | Azure | GCP) documentation.
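Once the compute is running, you can confirm from an attached notebook that the setting took effect.

# Returns "true" when the configuration from the compute's advanced
# options has been applied.
print(spark.conf.get(
    "spark.databricks.delta.optimizeMetadataQuery.enableTimestampDataType"
))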
From a notebook
Run the following command.
spark.conf.set("spark.databricks.delta.optimizeMetadataQuery.enableTimestampDataType", "true")
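After setting the configuration, re-run the query that filters on the timestamp column. Continuing the hypothetical events example from earlier, the files pruned metric in the query profile should now be greater than 0 whenever the predicate excludes some files.

spark.conf.set(
    "spark.databricks.delta.optimizeMetadataQuery.enableTimestampDataType",
    "true",
)

# Re-run the filtered query and check the query profile (or the Spark UI
# SQL tab) to confirm files are being pruned.
spark.sql("""
    SELECT count(*)
    FROM events
    WHERE event_time >= TIMESTAMP '2024-01-01 00:00:00'
""").show()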