I am trying to set up Delta Lake on S3 using the open-source Delta Lake API. My tables are partitioned by date, and I have to perform merges (a merge may also update old partitions). I am generating manifest files so that I can query the Delta tables with AWS Athena, but when I run the manifest generation method, Delta Lake creates manifest files for all partitions. Is there a way to generate manifest files incrementally, i.e. create/update manifests only for the most recently updated partitions, or to specify the partitions for which manifest files should be produced?
from delta.tables import DeltaTable

# Read the new data for the day (the S3 path must be a string)
df = spark.read.csv("s3://temp/2020-01-01.csv")

delta_table = DeltaTable.forPath(spark, delta_table_path)

# `condition` is my merge predicate matching source rows to new rows
(delta_table.alias("source")
    .merge(df.alias("new_data"), condition)
    .whenNotMatchedInsertAll()
    .execute())

# This regenerates manifests for ALL partitions, not just the changed ones
delta_table.generate("symlink_format_manifest")
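For reference, I have seen the table property delta.compatibility.symlinkFormatManifest.enabled mentioned in the Delta Lake docs, which is supposed to keep manifests updated automatically on every write, but I am not sure whether it behaves incrementally (i.e. touches only the changed partitions). A sketch of how I would set it (the table path here is a placeholder):

```python
# Sketch (placeholder path): SQL to enable automatic manifest updates on write.
# Per the Delta Lake docs, this property exists; whether the update is
# per-partition-incremental is exactly what I am unsure about.
set_auto_manifest_sql = """
ALTER TABLE delta.`{path}`
SET TBLPROPERTIES (delta.compatibility.symlinkFormatManifest.enabled = true)
""".format(path="s3://temp/delta_table_path")

# On an active SparkSession with the Delta extensions configured, this would be:
# spark.sql(set_auto_manifest_sql)
```

If that property does update only the partitions touched by the last transaction, it would solve my problem; otherwise I would still need a way to restrict `generate` to specific partitions.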