Job fails, but Apache Spark tasks finish

Your job fails, but all of the Apache Spark tasks complete successfully. This happens when you call spark.stop() or System.exit(0) in your code.

Written by harikrishnan.kunhumveettil

Last published at: May 10th, 2022

Problem

Your Databricks job reports a failed status, but all Spark jobs and tasks have successfully completed.

Cause

You have explicitly called spark.stop() or System.exit(0) in your code.

If either of these is called, the Spark context stops, but the graceful shutdown and handshake with the Databricks job service does not happen, so the run is reported as failed.
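The following sketch illustrates the anti-pattern with a hypothetical Scala JAR job (the class name and logic are examples, not from the original article): all tasks finish, but the explicit shutdown at the end prevents the handshake with the job service.

    import org.apache.spark.sql.SparkSession

    object ExampleJob {
      def main(args: Array[String]): Unit = {
        // On Databricks, a Spark session already exists; getOrCreate() returns it.
        val spark = SparkSession.builder().getOrCreate()

        // All Spark tasks here complete successfully.
        val count = spark.range(0, 1000).count()
        println(s"Row count: $count")

        // Anti-pattern: stopping the context (or exiting the JVM) yourself.
        // The Databricks job service never gets its graceful-shutdown handshake,
        // so the run is reported as failed even though every task succeeded.
        spark.stop()          // or: System.exit(0)
      }
    }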

Solution

Do not call spark.stop() or System.exit(0) in Spark code running on a Databricks cluster. Let the job exit normally so the Databricks job service can shut the Spark context down gracefully.
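A corrected sketch of the same hypothetical job: remove the explicit shutdown calls and simply let the method return.

    import org.apache.spark.sql.SparkSession

    object ExampleJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().getOrCreate()

        val count = spark.range(0, 1000).count()
        println(s"Row count: $count")

        // No spark.stop() and no System.exit(0): just return and let the
        // Databricks job service handle shutdown, so the run is marked successful.
      }
    }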