I am using Azure Data Factory's Copy activity to copy data from a CSV file in Blob Storage to Cosmos DB (SQL API). If I do not import any schema on the sink side, the Copy activity reads the headers from the CSV at run time and saves the data as JSON documents in Cosmos DB. Up to this point everything works fine.
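To illustrate (the column names are made up for this example), a CSV row with headers accountId and amount currently ends up in Cosmos DB as a document like:

```json
{
  "accountId": "A-1001",
  "amount": "250.00"
}
```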
I need to add a batch ID column (a GUID, e.g. the pipeline run ID) to the data being written to Cosmos DB, so that I can track which documents were copied as part of a given batch.
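What I would like that same document to look like after the copy is roughly this, where batchId is the extra field I want to inject (the field name is just illustrative):

```json
{
  "accountId": "A-1001",
  "amount": "250.00",
  "batchId": "<GUID / pipeline run ID>"
}
```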
How can I keep all of my source columns, add the batch ID column to them, and save the result to Cosmos DB?
The schema is not fixed and can change on each ADF pipeline trigger, so I cannot import a schema and do one-to-one column mapping in the Copy activity.