Unfortunately, zip files are not supported out of the box in Databricks, because Hadoop does not include zip as a compression codec. While text files in GZip, BZip2, and other supported compression formats are automatically decompressed by Spark as long as they have the right file extension, you must perform additional steps to read zip files. The sample in the Databricks documentation unzips the files on the driver node using the OS-level unzip utility (Ubuntu).
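As a sketch of the driver-node approach, you can also use Python's standard zipfile module instead of shelling out to unzip. The function and paths below are hypothetical examples, not from the Databricks documentation; on Databricks you would point them at a /dbfs/... path and then read the extracted files with Spark.

```python
import zipfile


def unzip_to_dir(zip_path: str, dest_dir: str) -> list:
    """Extract all members of the zip archive into dest_dir.

    Runs on the driver node only; returns the list of extracted
    member names so they can be passed on to spark.read.
    """
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
        return zf.namelist()
```

After extraction to a DBFS-backed path, the plain text files can be read with Spark as usual, e.g. `spark.read.csv("dbfs:/tmp/extracted/")` (path hypothetical).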
If your data source can't provide the data in a compression codec supported by Spark, the best approach is to use an Azure Data Factory copy activity. Azure Data Factory supports additional compression codecs, including zip (ZipDeflate).
Type property definition for the source would look like this:
"typeProperties": {
"compression": {
"type": "ZipDeflate",
"level": "Optimal"
},
You can also use Azure Data Factory to orchestrate your Databricks pipelines with the Databricks activities.