We have a parent bucket, say "Bucket-1". Under Bucket-1 we have multiple folders, one per customer (say cust1, cust2, cust3, and so on). We upload GBs of data into these folders for each customer, and this data is uploaded via multiple channels: JS, Java, Swift (iOS), etc.
Requirement: Now we've got a requirement from a specific customer to upload its data (going forward) into a separate dedicated bucket, say "Bucket-2", so that special read permissions can be granted to that customer via another AWS account the customer owns.
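For context, the cross-account read access mentioned above would typically be granted with a bucket policy on Bucket-2. A minimal sketch (the account ID is hypothetical, and "Bucket-2" is the placeholder name from the question; real S3 bucket names must be lowercase):

```python
import json

CUSTOMER_ACCOUNT_ID = "111122223333"  # hypothetical customer AWS account id
BUCKET = "Bucket-2"                   # placeholder bucket name from the question

# Read-only access for the customer's account: ListBucket applies to the
# bucket ARN, GetObject to the object ARNs, so they go in separate statements.
read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CustomerListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{CUSTOMER_ACCOUNT_ID}:root"},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "CustomerGetObjects",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{CUSTOMER_ACCOUNT_ID}:root"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

policy_json = json.dumps(read_policy)
# Apply with: boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_json)
```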
**Solutions that I came up with:**
1. Code modifications in all the channels (JS, Java, Swift) to accommodate this change. But this solution is (a) time-consuming, and (b) tightly couples the upload logic to a specific requirement.
2. Using an AWS Lambda to copy data between buckets. This Lambda would be triggered by any upload into that specific customer's folder in Bucket-1.
For now #2 seems the best fit to me. Any suggestions? Or any other solution that comes to mind?
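Option #2 could be sketched roughly like this: a Lambda subscribed to S3 object-created events on Bucket-1 that server-side copies matching objects into Bucket-2. The prefix `cust1/` and the destination bucket name are stand-ins from the question, not real values:

```python
import urllib.parse

SOURCE_PREFIX = "cust1/"   # folder of the special customer (placeholder)
DEST_BUCKET = "Bucket-2"   # dedicated destination bucket (placeholder)

def should_copy(key: str, prefix: str = SOURCE_PREFIX) -> bool:
    """Copy only objects under the special customer's folder."""
    return key.startswith(prefix)

def lambda_handler(event, context):
    # boto3 is provided by the Lambda runtime; imported here so the
    # helper above stays testable without AWS credentials.
    import boto3
    s3 = boto3.client("s3")
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 event payloads are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if not should_copy(key):
            continue
        # Server-side copy: the object bytes never pass through the Lambda.
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )
```

Note that `copy_object` works for objects up to 5 GB; larger objects would need a multipart copy.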
Comment: If the trigger cannot be made on the specific path in Bucket-1, then you can at least add that logic to the Lambda function so you only make the copy if the key is prefixed with a specific string. – Joshua Kemmerer
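In fact S3 event notifications do support a key prefix filter, so the trigger itself can be restricted to the customer's folder. A minimal sketch of the notification configuration (the Lambda ARN is hypothetical; `cust1/` is the placeholder prefix):

```python
# Hypothetical ARN of the copy Lambda
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:copy-to-bucket-2"

notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": LAMBDA_ARN,
            "Events": ["s3:ObjectCreated:*"],
            # Fire only for keys under the special customer's folder.
            "Filter": {
                "Key": {"FilterRules": [{"Name": "prefix", "Value": "cust1/"}]}
            },
        }
    ]
}
# Apply with: boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="Bucket-1", NotificationConfiguration=notification_config)
```

With the prefix filter in place, the Lambda is only invoked for the one customer's uploads, so the in-function prefix check becomes a cheap safety net rather than the primary filter.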