
I've successfully mounted my blob storage to Databricks, and can see the defined mount point when running dbutils.fs.ls("/mnt/"). The mount point is listed with size=0 - it's not clear whether that's expected.

When I try to run dbutils.fs.ls("/mnt/<mount-name>"), I get this error: java.io.FileNotFoundException: / is not found

When I try to write a simple file to my mounted blob with dbutils.fs.put("/mnt/<mount-name>/1.txt", "Hello, World!", True), I get the following error (shortened for readability):

ExecutionError: An error occurred while calling z:com.databricks.backend.daemon.dbutils.FSUtils.put. : shaded.databricks.org.apache.hadoop.fs.azure.AzureException: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
...
Caused by: com.microsoft.azure.storage.StorageException: The specified resource does not exist.

All the data is in the root of the Blob container, so I have not defined any folder structures in the dbutils.fs.mount code.


1 Answer


The solution here is making sure you use the correct part of your Shared Access Signature (SAS). When the SAS is generated, it has several distinct parts, and it's likely sent to you as one long connection string, e.g.:

BlobEndpoint=https://<storage-account>.blob.core.windows.net/;QueueEndpoint=https://<storage-account>.queue.core.windows.net/;FileEndpoint=https://<storage-account>.file.core.windows.net/;TableEndpoint=https://<storage-account>.table.core.windows.net/;SharedAccessSignature=sv=<date>&ss=nwrt&srt=sco&sp=rsdgrtp&se=<datetime>&st=<datetime>&spr=https&sig=<long-string>

When you define your mount point, use only the value of the SharedAccessSignature key, e.g.:

sv=<date>&ss=nwrt&srt=sco&sp=rsdgrtp&se=<datetime>&st=<datetime>&spr=https&sig=<long-string>
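To illustrate, here is a minimal sketch that pulls the SharedAccessSignature value out of a connection string of the shape shown above. The helper name, the sample account name, and the sample token values are all placeholders I've made up for the example; the commented-out mount call at the end shows roughly where the token would be used, with container/account names left as placeholders.

```python
def extract_sas_token(connection_string: str) -> str:
    """Return the value of the SharedAccessSignature key from an
    Azure Storage connection string (hypothetical helper)."""
    for part in connection_string.split(";"):
        if part.startswith("SharedAccessSignature="):
            # Split only on the first '=' - the token itself contains '=' signs.
            return part.split("=", 1)[1]
    raise ValueError("No SharedAccessSignature key found in connection string")

# Sample connection string with made-up values, matching the shape above:
conn = (
    "BlobEndpoint=https://myaccount.blob.core.windows.net/;"
    "QueueEndpoint=https://myaccount.queue.core.windows.net/;"
    "SharedAccessSignature=sv=2021-06-08&ss=b&srt=sco&sp=rwdl&sig=abc123"
)
sas_token = extract_sas_token(conn)
print(sas_token)  # sv=2021-06-08&ss=b&srt=sco&sp=rwdl&sig=abc123

# In a Databricks notebook the token would then go into the mount call,
# roughly like this (names are placeholders):
# dbutils.fs.mount(
#     source="wasbs://<container>@<storage-account>.blob.core.windows.net",
#     mount_point="/mnt/<mount-name>",
#     extra_configs={
#         "fs.azure.sas.<container>.<storage-account>.blob.core.windows.net":
#             sas_token,
#     },
# )
```

The key point is that the Blob/Queue/File/Table endpoint parts of the connection string are not the token - passing the whole string (or the wrong segment) is what produces the "specified resource does not exist" error above.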