Please pardon my ignorance if this question sounds silly to the expert audience here.
Currently, per my use case, I am performing analysis on data present in AWS Redshift tables and saving the results as CSV files in S3 buckets (the operation is somewhat similar to a pivot on the Redshift data), and after that I am loading the data back into Redshift using the COPY command.
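For reference, the per-table flow is roughly the sketch below (a minimal illustration assuming boto3 and psycopg2; the bucket, IAM role, and connection string are placeholders, not my real values):

```python
import boto3
import psycopg2

# Placeholders for illustration only -- not my real names/credentials.
S3_BUCKET = "my-analysis-bucket"
IAM_ROLE = "arn:aws:iam::123456789012:role/my-redshift-role"
REDSHIFT_DSN = "host=my-cluster.example.redshift.amazonaws.com port=5439 dbname=mydb user=myuser password=..."

def upload_and_copy(local_csv, table_name):
    """Upload one result CSV to S3, then COPY it into its Redshift table."""
    key = f"pivot-results/{table_name}.csv"
    boto3.client("s3").upload_file(local_csv, S3_BUCKET, key)

    # Connection context manager commits the transaction on successful exit.
    with psycopg2.connect(REDSHIFT_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            f"COPY {table_name} "
            f"FROM 's3://{S3_BUCKET}/{key}' "
            f"IAM_ROLE '{IAM_ROLE}' "
            "CSV IGNOREHEADER 1;"
        )
```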
After the analysis (done in Python 3), around 200 CSV files are generated, which are loaded into 200 different tables in Redshift.
The count of CSVs will keep increasing over time. Currently the whole process takes about 50-60 minutes to complete:

- ~25 minutes to generate the ~200 CSVs and upload them to S3 buckets
- ~25 minutes to load the ~200 CSVs into the ~200 Redshift tables
The CSV sizes vary from a few MB to 1 GB.
I am looking for tools or AWS technologies that can help me reduce this time.
*Additional info:*
The structure of the CSVs keeps changing, hence I have to drop and recreate the tables each time (sketched below). This is a repetitive task that is executed every 6 hours.
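The drop/recreate step looks roughly like this (again a simplified sketch: every column is typed as VARCHAR purely for illustration, while the real job handles types properly; `cur` is an open psycopg2 cursor and `iam_role` is a placeholder):

```python
import csv

def recreate_and_load(cur, table_name, local_csv, s3_path, iam_role):
    """Drop and recreate a table to match the CSV's current header,
    then COPY the data in."""
    # Read the current header row, since the columns change between runs.
    with open(local_csv, newline="") as f:
        header = next(csv.reader(f))

    # Illustration only: all columns as VARCHAR; real job infers types.
    columns = ", ".join(f'"{name}" VARCHAR(256)' for name in header)
    cur.execute(f"DROP TABLE IF EXISTS {table_name};")
    cur.execute(f"CREATE TABLE {table_name} ({columns});")
    cur.execute(
        f"COPY {table_name} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' CSV IGNOREHEADER 1;"
    )
```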