0 votes

Using Spark's overwrite mode to write a dataset deletes the old files in the partitions and writes the new data. Is this process atomic? If the job fails while overwriting the data, will Spark restore the old files that were present in the partitions?
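
For example, the kind of write I mean (the data, partition column, and path here are only placeholders):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object OverwriteExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("overwrite-example").getOrCreate()
    import spark.implicits._

    // Hypothetical data and output path, used only for illustration.
    val ds = Seq((1, "2024-01-01"), (2, "2024-01-02")).toDF("id", "dt")

    // Overwrite mode: the existing files for the target (or the matching
    // partitions) are removed and the new data is written in their place.
    ds.write
      .mode(SaveMode.Overwrite)
      .partitionBy("dt")
      .parquet("/data/events") // placeholder path

    spark.stop()
  }
}
```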


2 Answers

1 vote

According to this Databricks post, it is not (emphasis mine):

It is sometimes useful to atomically overwrite a set of existing files. Today, Spark implements overwrite by first deleting the dataset, then executing the job producing the new data. This interrupts all current readers and is not fault-tolerant. With transactional commit, it is possible to “logically delete” files atomically by marking them as deleted at commit time
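
Conceptually, the default behavior amounts to something like the following. This is only a rough sketch using the Hadoop FileSystem API to make the failure window visible, not Spark's actual internal code:

```scala
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.{SaveMode, SparkSession}

object NonAtomicOverwriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("non-atomic-overwrite").getOrCreate()
    import spark.implicits._

    val target = new Path("/data/events") // placeholder path
    val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)

    // Step 1: the existing output is deleted up front ...
    fs.delete(target, true)

    // Step 2: ... and only then is the new data produced. If the job dies
    // between these two steps, or partway through this write, the old files
    // are already gone and are not restored.
    val ds = Seq((1, "2024-01-01"), (2, "2024-01-02")).toDF("id", "dt")
    ds.write.mode(SaveMode.ErrorIfExists).parquet(target.toString)

    spark.stop()
  }
}
```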

But they also offer an alternative to achieve an atomic overwrite.

0 votes

The overwrite operation is not atomic by default: Spark first deletes the old dataset and then produces the new data, so you can lose data if the job or any of its tasks fails in between. Any other job reading the dataset at that moment will also fail, because the files have already been deleted. With transactional commit, it is possible to "logically delete" files atomically by marking them as deleted at commit time.

Atomic overwrite can be toggled by setting "spark.databricks.io.directoryCommit.enableLogicalDelete" to true or false.
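
A minimal sketch of toggling that setting from a session, assuming a Databricks runtime where this configuration is available (data and path are placeholders):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object LogicalDeleteOverwrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("logical-delete-overwrite").getOrCreate()
    import spark.implicits._

    // Databricks-specific setting: mark old files as logically deleted at
    // commit time instead of physically deleting them before the new write.
    spark.conf.set("spark.databricks.io.directoryCommit.enableLogicalDelete", "true")

    // Placeholder data and output path, for illustration only.
    val ds = Seq((1, "a"), (2, "b")).toDF("id", "value")
    ds.write.mode(SaveMode.Overwrite).parquet("/data/events")

    spark.stop()
  }
}
```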