Is the "Derived Column" step in an Azure Data Factory Data Flow able to dynamically detect empty values and replace them with NULL while looping through several files with different numbers of columns?
What I am trying to do is copy 10 CSV files straight into 10 tables without any manipulation other than replacing empty values with NULL. Each of the 10 files contains 5-25 columns. I know I can build 10 data flows to achieve this, but I was wondering whether there is a smarter way: LOOKUP the file list and, FOREACH file, dynamically replace empty values?
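For reference, the kind of single reusable transformation I have in mind is a Derived Column with a column pattern that applies to every column regardless of name or count. A sketch of what I imagine it would look like (the exact matching condition and expression may need adjusting):

```
Matching condition: true()                                  // match every incoming column
Column name:        $$                                      // keep the original column name
Expression:         iif(trim($$) == '', toString(null()), $$)   // empty string -> NULL, else pass through
```

Since the pattern matches all columns, the hope is that one data flow, parameterized by file name, could serve all 10 files.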
Update:
Thanks to Joseph's answer, it seems the data flow automatically treats empty values as NULL. But I then ran into another parsing problem: date / time / datetime columns arrive in the Sink as NULL.
Source: The positions of the date / time / datetime columns are not fixed across files, and the column names differ as well. Since the source is CSV, all columns are read as string type, so I cannot use "type" for pattern matching: https://i.stack.imgur.com/X1C7z.png
Sink: In the example above, all the highlighted columns are transferred to the Sink as NULL: https://i.stack.imgur.com/PecRb.png
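To illustrate the difficulty: an explicit parse in a Derived Column pattern would presumably need something like the fragment below, but both the position matcher and the format string are placeholders that would differ per file, which is exactly what I cannot hard-code (a sketch only; 'MM/dd/yyyy' is an assumed format, and position == 2 is a hypothetical example):

```
Matching condition: position == 2          // hypothetical: which column holds the date varies per file
Column name:        $$                     // keep the original column name
Expression:         toDate($$, 'MM/dd/yyyy')   // explicit parse; returns NULL if the format doesn't match
```

Relying on the Sink's implicit string-to-date cast instead seems to be what produces the NULLs shown above.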