I have a Python script in Azure Databricks that performs ETL on raw ".txt" files (plain text with no schema) stored in Azure Data Lake Storage Gen2. I migrated these text files from an on-premises virtual machine using Azure Data Factory. My requirement is to run the Python script only on the new (delta) data that arrives in the Data Lake, not on files that have already been processed. How can I achieve this? A rough sketch of what the script currently does is included below.
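
For reference, the script currently reads and transforms every file in the landing folder on each run, roughly like this (the storage account, container, paths, and delimiter are placeholders, not my real values):

```python
from pyspark.sql import functions as F

# Placeholder path to the landing zone in ADLS Gen2
raw_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/landing/*.txt"

# Read every .txt file in the landing folder as plain lines of text
# (the `spark` session is provided by the Databricks notebook environment)
df = spark.read.text(raw_path)

# Parse each line into fields and apply the rest of the ETL
# (assuming a tab-delimited layout here for illustration)
parsed = df.select(F.split(F.col("value"), "\t").alias("fields"))

# Write the transformed result to a curated zone
parsed.write.mode("overwrite").parquet(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/output/"
)
```

Because the read is over the whole folder, every run reprocesses all files, including ones that Data Factory copied in earlier loads. I want the job to pick up only the files added since the last run.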