I need to scrape data from a webpage using Azure Functions and save the scraping output directly to Azure Blob Storage as a .csv or .parquet file.
I think you can use an append blob and its append_block method to achieve your requirement:
from azure.core.exceptions import ResourceNotFoundError
from azure.storage.blob import BlobServiceClient, BlobType

connect_str = "DefaultEndpointsProtocol=https;AccountName=1123bowman;AccountKey=xxxxxx;EndpointSuffix=core.windows.net"
container_name = "test"
blob_name = "test.txt"

blob_service_client = BlobServiceClient.from_connection_string(connect_str)
blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)

try:
    # Blob already exists: append one more CSV row to it.
    data = "\n1,2,3".encode()
    print("length of data is " + str(len(data)))
    blob_client.append_block(data, length=len(data))
except ResourceNotFoundError:
    # First run: the blob does not exist yet, so create it as an
    # append blob containing the CSV header row.
    data = "test1,test2,test3".encode()
    print("length of data is " + str(len(data)))
    blob_client.upload_blob(data, blob_type=BlobType.AppendBlob)
With this method you only upload the new rows on each run instead of re-uploading the entire blob. Note that this suits line-oriented formats like .csv; a .parquet file cannot be appended to row by row, so for parquet you would build the file in memory and upload it whole with upload_blob.
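One caveat with the snippet above: scraped values often contain commas or quotes, which would corrupt a hand-concatenated line like "\n1,2,3". A minimal sketch of a helper that produces a properly quoted CSV line to pass to append_block (the name to_csv_line is mine, not part of the Azure SDK; it uses only the standard csv module):

```python
import csv
import io

def to_csv_line(values, leading_newline=True):
    """Encode one scraped record as a single CSV-quoted line of bytes.

    csv.writer handles quoting of commas and quotes inside values;
    lineterminator="" suppresses the trailing newline so we control
    placement ourselves (a leading newline separates appended rows).
    """
    buf = io.StringIO()
    csv.writer(buf, lineterminator="").writerow(values)
    line = buf.getvalue()
    return ("\n" + line if leading_newline else line).encode("utf-8")
```

You would then call blob_client.append_block(to_csv_line(row), length=len(to_csv_line(row))) for each scraped row, and to_csv_line(header, leading_newline=False) when creating the blob.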