There is plenty of documentation on how to use Azure Data Factory to read data from blobs into SQL, and even documentation on how to dump the output of a query to a single blob. I'm trying to create one blob for each row of a table on Azure SQL Server, with each blob named after one field and containing the value of another.
My table has a GUID id field and an nvarchar(max) data field (which happens to contain JSON, though that's largely irrelevant here). Suppose I have the following rows:
id                                   | data
-------------------------------------+------
38b2f551-5f13-40ce-8512-c108a05ecd44 | foo
4db5b25b-1194-44e9-a7b2-bc8889c32979 | bar
2a3bd653-ce14-4bd2-9243-6923e97224c6 | baz
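For concreteness, here is a minimal sketch of the table; the name dbo.MyTable is just a placeholder, and the real table only needs the GUID id and nvarchar(max) data columns described above:

```sql
-- Placeholder schema matching the description above; the table name is illustrative.
CREATE TABLE dbo.MyTable (
    id   uniqueidentifier NOT NULL PRIMARY KEY,
    data nvarchar(max)    NOT NULL  -- holds JSON, but treated as opaque text here
);

INSERT INTO dbo.MyTable (id, data) VALUES
    ('38b2f551-5f13-40ce-8512-c108a05ecd44', N'foo'),
    ('4db5b25b-1194-44e9-a7b2-bc8889c32979', N'bar'),
    ('2a3bd653-ce14-4bd2-9243-6923e97224c6', N'baz');
```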
I want the following blobs to be created:
https://mycontainer.blob.core.windows.net/myblobs/38b2f551-5f13-40ce-8512-c108a05ecd44
https://mycontainer.blob.core.windows.net/myblobs/4db5b25b-1194-44e9-a7b2-bc8889c32979
https://mycontainer.blob.core.windows.net/myblobs/2a3bd653-ce14-4bd2-9243-6923e97224c6
Their contents should be the associated data field, i.e. foo, bar, and baz, respectively.
"Data Factory v2 - Generate a json file per row" has an answer that will work, but it involves querying the database once to get all the ids and then N more times to get each row's data. It seems like it should be possible to query just once for both fields and use one for the file name and the other for the contents, but I haven't been able to figure out how.
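To make the difference concrete, here is roughly what the two approaches would query. The table name and the ADF expression inside the string are placeholders to sketch the pattern, not a working pipeline:

```sql
-- Linked approach: one Lookup query for the ids...
SELECT id FROM dbo.MyTable;

-- ...then, inside a ForEach, one query per row to fetch that row's data:
SELECT data FROM dbo.MyTable WHERE id = '@{item().id}';

-- What I'm hoping for instead: a single query returning both columns,
-- with the pipeline using id as the blob name and data as the blob contents.
SELECT id, data FROM dbo.MyTable;
```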