2 votes

Can we create a file (preferably JSON) and store it in one of the supported storage sinks (like Blob Storage, Azure Data Lake Storage, etc.) using the parameters that are passed to an Azure Data Factory v2 pipeline at run-time? I suppose it could be done via Azure Batch, but that seems like overkill for such a trivial task. Is there a better way to do it?

1
Not sure what solution you are looking for. You mean you want to create a file in an ADF pipeline? What's the data in that file? What's the function of the parameter that you want to pass to ADF? Please give more details so that I can try to help you. – Jay Gong
We have been using a custom-built orchestration tool for running jobs. We used to pass JSON files as parameters; they contained metadata that our Spark applications could consume and execute against. Now there is an initiative to consider ADFv2, and I need to work out whether, if I pass the metadata that was previously provided as JSON files as ADFv2 run-time pipeline parameters, we can somehow retrieve those parameters, build JSON out of them, and pass it to the Spark jobs as before, to minimise code change. I hope the context is clearer now. – rh979
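For illustration, if the Spark jobs can accept the metadata as a JSON string argument rather than a file path, the change on the consuming side could be small. This is only a sketch; the argument position and the way the metadata is used are assumptions, not something from the thread:

```python
import json
import sys

# Hypothetical Spark driver entry point: instead of reading metadata from a
# JSON file, accept the same metadata serialised as a JSON string argument
# (e.g. supplied from an ADF pipeline parameter at submission time).
def main() -> None:
    metadata = json.loads(sys.argv[1])
    # ... existing job logic keeps working against the same dict shape ...
    print(f"Running job with metadata keys: {list(metadata)}")

if __name__ == "__main__":
    main()
```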

1 Answer

-1 votes

Here is the list of transformation activities ADFv2 currently supports; I'm afraid there isn't a direct way to create a file in ADFv2 itself. You could leverage the Custom activity to achieve this, by running your own code logic on an Azure Batch pool of virtual machines. Hope it helps a little.
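If the Custom activity route is acceptable, the code it runs can stay quite small. Below is a rough Python sketch, assuming the pipeline forwards its parameters through the activity's extendedProperties (which ADF deposits in activity.json in the Batch task's working directory) and that the azure-storage-blob SDK is available; the connection string variable, container and blob names are placeholders, and the exact activity.json layout is worth verifying against the Custom activity documentation:

```python
import json
import os

from azure.storage.blob import BlobServiceClient

# ADF's Custom activity drops activity.json (alongside linkedServices.json and
# datasets.json) into the Batch task's working directory; values passed from
# the pipeline should appear under typeProperties.extendedProperties.
with open(os.path.join(os.getcwd(), "activity.json")) as f:
    activity = json.load(f)
params = activity["typeProperties"]["extendedProperties"]

# Write the parameters back out as a JSON blob. The connection string,
# container and blob names below are placeholders for your environment.
service = BlobServiceClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="metadata", blob="run-parameters.json")
blob.upload_blob(json.dumps(params, indent=2), overwrite=True)
```

The Spark job can then read run-parameters.json from Blob Storage (or ADLS) exactly as it read the hand-crafted JSON files before, which keeps the code change on the Spark side minimal.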