I'm a total noob to working with AWS. I am trying to get a pretty simple and basic operation to work: when a file is uploaded to one S3 bucket, that upload should trigger a Lambda function that copies the file to another bucket.
In the AWS Management Console, I created an S3 bucket in the us-west-2 region called "test-bucket-3x1" to use as my source bucket and another called "test-bucket-3x2" as my destination bucket. I did not change or modify any settings when creating these buckets.
In the Lambda console, I created an S3 trigger for 'test-bucket-3x1', changed the event type to "ObjectCreatedByPut", and didn't change any other settings.
This is my actual lambda_function code:
import boto3
import json

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    bucket = s3.Bucket('test-bucket-3x1')
    dest_bucket = s3.Bucket('test-bucket-3x2')
    print(bucket)
    print(dest_bucket)

    for obj in bucket.objects():
        dest_key = obj.key
        print(dest_key)
        s3.Object(dest_bucket.name, dest_key).copy_from(CopySource={'Bucket': obj.bucket_name, 'Key': obj.key})
When I test this function with the basic "HelloWorld" test available from the AWS Lambda console, I receive this:
{
  "errorMessage": "'s3.Bucket.objectsCollectionManager' object is not callable",
  "errorType": "TypeError",
  "stackTrace": [
    [
      "/var/task/lambda_function.py",
      12,
      "lambda_handler",
      "for obj in bucket.objects():"
    ]
  ]
}
What changes do I need to make to my code so that, when a file is uploaded to test-bucket-3x1, the Lambda function is triggered and that file is copied to test-bucket-3x2?
Thanks for your time.
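From reading about S3 event notifications, I think what I actually want is to pull the bucket and key out of the event record and copy just the newly uploaded object, rather than looping over the whole bucket each time. A rough sketch of what I'm imagining (the event record structure and the URL-decoding of the key are my assumptions from the docs, so this may not be exactly right):

import boto3
import urllib.parse

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    # Each S3 notification can carry one or more records
    for record in event['Records']:
        src_bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded (e.g. spaces become '+')
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        # Copy the newly uploaded object into the destination bucket
        s3.Object('test-bucket-3x2', key).copy_from(
            CopySource={'Bucket': src_bucket, 'Key': key})

Is that the right general shape, or am I off base?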
Use for obj in bucket.objects.all() instead of for obj in bucket.objects(). Refer to this link: boto3.readthedocs.io/en/latest/reference/services/… – krishna_mee2004
bucket.objects.all() which creates an iterable – Usernamenotfound
… event and context objects actually are. On a related note, is it possible to open/work with a file in an s3 bucket via lambda? For instance, could I load a csv into a pandas dataframe, manipulate the dataframe, return the manipulated dataframe, and then upload that to my destination bucket? Would it be something as simple as putting within the event handler something like df = pd.read_excel(['Records']['bucket']['object']['key'])? – Tkelly
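For the pandas idea, a rough sketch of what it might look like (assuming the uploaded file is a CSV and that pandas is bundled with the deployment package or a layer, since it isn't in the default Lambda runtime; the get_object/read_csv approach here is my guess, not something from the thread):

import io
import boto3
import pandas as pd

s3 = boto3.client('s3')

def lambda_handler(event, context):
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']
    # Read the uploaded object's body and load it into a DataFrame
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    df = pd.read_csv(io.BytesIO(body))
    # ... manipulate the DataFrame here ...
    # Write the result back out to the destination bucket
    out = io.StringIO()
    df.to_csv(out, index=False)
    s3.put_object(Bucket='test-bucket-3x2', Key=key, Body=out.getvalue())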