I am creating my first AWS Lambda image optimizer script, and it works fine on an image at the 'root' of the originating bucket.
I have three buckets. 'mybucket' holds the original images I want to resize. I copy them to 'mybucket-photos', which a Python Lambda function watches; the function optimizes each incoming photo and saves the result to 'mybucket-photosresized'.
However, when I try to copy the contents of mybucket into mybucket-photos, the Python script fails on file handling for any key that contains a 'subfolder' part.
example failure:
No such file or directory: '/tmp/91979758-51b3-44df-b2b1-d9eeddeb0802saddles/thumb/27dfahl/16-5-dk-dressage-saddle-for-sale/saddle_photo02_300.3a38de5F': IOError
My guess is that the folder names with slashes are causing the problem. I understand that the 'folder' is part of the key.
Incidentally, I do not fully understand what the handler method is doing with the records, bucket, and key, which makes this all the more confusing. My naive instinct is to replace the / somehow and add it back on save.
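The replace-the-slash instinct can be sketched as a small helper (hypothetical names, not from the script above): S3 keys may contain '/', which the local filesystem treats as directory separators, so '/tmp/<uuid><key>' points into folders that do not exist. Flattening the key sidesteps that.

```python
import os
import uuid

def key_to_tmp_path(key):
    # Replace '/' in the S3 key so the /tmp path is a single flat
    # file name instead of a path through nonexistent directories.
    flat_key = key.replace('/', '_')
    return os.path.join('/tmp', '{}-{}'.format(uuid.uuid4(), flat_key))

print(key_to_tmp_path('saddles/thumb/photo_300.jpg'))
# e.g. /tmp/<uuid>-saddles_thumb_photo_300.jpg
```

On upload, the original key (slashes and all) can still be passed to upload_file, so the 'folder' structure is preserved in the destination bucket.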
The python is:
from __future__ import print_function
import boto3
import os
import sys
import uuid
from PIL import Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.save(resized_path, optimize=True)

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        upload_path = '/tmp/resized-{}'.format(key)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}resized'.format(bucket), key)
What's the best way to handle the 'folders' in this script?
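An alternative to flattening the key is to create the missing local directories before writing, so the key can keep its slashes end to end. This is only a sketch (the example path is made up); both download_path and upload_path would need this treatment, since both embed the key.

```python
import os

def ensure_local_dirs(path):
    # Create any intermediate directories so a key such as
    # 'saddles/thumb/photo.jpg' can be written under /tmp as-is.
    directory = os.path.dirname(path)
    if directory and not os.path.isdir(directory):
        os.makedirs(directory)

download_path = '/tmp/original/saddles/thumb/photo.jpg'  # example path
ensure_local_dirs(download_path)
# s3_client.download_file(bucket, key, download_path) can now write the file
```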