Performing count on Delta table using dedicated vs standard compute

With dedicated compute, you can get the expected output when performing a count on a Delta table version whose associated data files have been removed.

Written by shubham.bhusate

Last published at: June 18th, 2025

Problem

When you use standard (formerly shared) compute to perform a count on a Delta table version whose data files have been removed, the count fails with the following error.

Error while reading file <cloud-provider>://<bucket-name>/<folder-1>/<folder-2>/<folder-3>/<folder-4>/<file-name>.  [DELTA_FILE_NOT_FOUND_DETAILED] File
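
As a minimal reproduction sketch, assuming a hypothetical Delta table named events and a hypothetical version 5 whose data files have since been removed (both are placeholders, not names from this article), either of the following time-travel reads produces the error above on standard compute:

# Databricks notebook sketch; `spark` is the notebook's SparkSession.
# `events` and version 5 are hypothetical placeholders.

# SQL time travel to a specific table version.
spark.sql("SELECT COUNT(*) FROM events VERSION AS OF 5").show()

# Equivalent DataFrame API read using the versionAsOf option.
print(spark.read.option("versionAsOf", 5).table("events").count())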


Cause

Standard compute reads data directly from a table's Parquet data files. When data files are removed from a table version, standard compute can no longer find the files it needs to perform the count.


By contrast, dedicated (formerly single-user) compute reads from the transaction log JSON files inside the _delta_log directory, which record every change made to a Delta table. This gives it access to the change history required to perform the count on the table version.
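
As a hedged illustration of what the transaction log contains, the following sketch reads the commit JSON files directly. The storage path is a placeholder, and the add/remove fields shown are the standard Delta log actions that record files added to and removed from the table:

# Placeholder path; point it at your table's _delta_log directory.
log_path = "<cloud-provider>://<bucket-name>/<table-path>/_delta_log/*.json"

# Each JSON line in a commit file holds one action, such as add, remove, metaData, protocol, or commitInfo.
log_df = spark.read.json(log_path)

# Show which data files each commit added or removed.
log_df.select("add.path", "remove.path").where("add IS NOT NULL OR remove IS NOT NULL").show(truncate=False)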


Solution

You can safely switch to dedicated compute and rerun the count operation on the Delta table version with removed files.
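
A minimal sketch of the rerun, reusing the hypothetical events table and version 5 from the earlier example; on dedicated compute the same time-travel count returns the expected result:

# Run on dedicated (formerly single-user) compute.
# `events` and version 5 are hypothetical placeholders.
spark.sql("DESCRIBE HISTORY events").show()  # confirm the target version exists
spark.sql("SELECT COUNT(*) FROM events VERSION AS OF 5").show()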