I am creating Data Factory pipelines to do an initial load and incremental loads from an Azure MySQL database into Data Lake, and from there into an Azure SQL Server database.
The initial pipeline that loads data from MySQL into Data Lake is working fine; the data is being persisted as .parquet files.
Now I need to load these files into a SQL Server table with some basic type conversions. What is the best way to do this?
Databricks => mount these .parquet files, standardise them, and load them into the SQL Server tables? (A rough sketch of what I mean is below.)
Or can I create an external data source over these files in SQL Server on Azure and do the standardisation there? We are not on Synapse (DWH) yet.
Or is there a better way?
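
For the Databricks route, this is roughly what I have in mind — a minimal sketch only, where the storage path, column names, JDBC server/database, table name, and secret scope are all placeholders for illustration:

```python
# Sketch of the Databricks option (all names/paths below are placeholders).
# Reads the .parquet files from ADLS Gen2, applies basic type conversions,
# and writes the result into an Azure SQL table over JDBC.
from pyspark.sql import functions as F

# Assumes the cluster can already reach the storage account
# (via a mount point or storage credentials configured on the cluster).
source_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/mysql/customers/*.parquet"

df = spark.read.parquet(source_path)

# Basic standardisation / type conversions (illustrative columns only).
df_clean = (
    df.withColumn("customer_id", F.col("customer_id").cast("int"))
      .withColumn("created_at", F.to_timestamp("created_at"))
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Write to Azure SQL over JDBC; overwrite for the initial load,
# append (or a merge in a later step) for the incremental loads.
jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb"

(df_clean.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.Customers")
    .option("user", dbutils.secrets.get("kv-scope", "sql-user"))
    .option("password", dbutils.secrets.get("kv-scope", "sql-password"))
    .mode("overwrite")
    .save())
```

Is something along these lines the recommended approach, or is there a simpler way to do this directly from Data Factory?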