I created a test copy pipeline in Azure Data Factory that reads 2 sample rows from a text file in Azure Blob Storage and loads them into a table in an Azure SQL Database. The run completed successfully.
However, no records were inserted into the table. The source file is only 34 bytes, and I read that the minimum block size for Azure Blob Storage is 64 KB. Could it be that my test file is too small, so Azure failed to read it even though the pipeline reported success?
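For reference, here is a simplified sketch of what my copy activity looks like in the pipeline JSON. The dataset names and settings below are placeholders standing in for my actual resources, not the exact definition I deployed:

```json
{
    "name": "CopySampleRows",
    "type": "Copy",
    "inputs":  [ { "referenceName": "BlobTextFileDataset", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "SqlTableDataset",     "type": "DatasetReference" } ],
    "typeProperties": {
        "source": {
            "type": "DelimitedTextSource",
            "storeSettings":  { "type": "AzureBlobStorageReadSettings", "recursive": false },
            "formatSettings": { "type": "DelimitedTextReadSettings" }
        },
        "sink": { "type": "AzureSqlSink", "writeBehavior": "insert" }
    }
}
```

I left the source and sink on their defaults otherwise; nothing in the monitoring view flagged an error for the activity run.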