
I have two buckets: Bucket A, which is the primary bucket, and Bucket B, which is the secondary bucket.

I have a CloudFront distribution with an origin group containing both buckets.

My Lambda function reads my API and extracts the file path to render.

Is there a way I can design the Lambda@Edge function so that it checks which bucket has the requested file and uses that origin dynamically?

Just to confirm, you have an origin for each S3 bucket? - Chris Williams
I have two buckets, and I created a distribution with two origin groups for them. Yes, an origin for each S3 bucket. - cloudbud

1 Answer


The solution you're looking for is as follows:

  • Create a Lambda@Edge function for the origin request behaviour.
  • When triggered, use the SDK to check whether the requested object exists in one of the buckets.
  • If it does, update the origin/path in the request to point at the correct destination.

By calling head_object (the SDK equivalent of the head-object CLI command) against the first bucket, you can verify that the object exists there.

It would look similar to this pseudocode (bucket and domain names are placeholders):

import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client('s3')

PRIMARY_BUCKET = 'bucket-a'                    # placeholder: your primary bucket's name
PRIMARY_DOMAIN = 'bucket-a.s3.amazonaws.com'   # placeholder: its origin domain name

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    key = request['uri'].lstrip('/')  # derive the object key from the request URI
    try:
        s3_client.head_object(Bucket=PRIMARY_BUCKET, Key=key)
    except ClientError:
        # Object not found in the primary bucket: leave the default origin in place
        return request

    # Object found: point the request at the primary bucket's origin.
    # The Host header must be updated to match the new origin domain.
    request['origin']['s3']['domainName'] = PRIMARY_DOMAIN
    request['headers']['host'] = [{'key': 'Host', 'value': PRIMARY_DOMAIN}]
    return request
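For reference, here is a minimal, hypothetical origin-request event showing the fields the handler reads and rewrites (field names follow the Lambda@Edge event structure; the domain names are made up):

```python
# A minimal CloudFront origin-request event, trimmed to the relevant fields.
event = {
    'Records': [{
        'cf': {
            'request': {
                'uri': '/images/logo.png',
                'method': 'GET',
                'headers': {
                    'host': [{'key': 'Host', 'value': 'bucket-a.s3.amazonaws.com'}],
                },
                'origin': {
                    's3': {
                        'domainName': 'bucket-a.s3.amazonaws.com',
                        'path': '',
                        'authMethod': 'none',
                        'customHeaders': {},
                    },
                },
            }
        }
    }]
}

request = event['Records'][0]['cf']['request']
key = request['uri'].lstrip('/')  # object key, e.g. 'images/logo.png'

# Rewriting the origin means updating BOTH the origin domain and the Host
# header; otherwise S3 rejects the request with a host mismatch.
fallback = 'bucket-b.s3.amazonaws.com'
request['origin']['s3']['domainName'] = fallback
request['headers']['host'] = [{'key': 'Host', 'value': fallback}]
```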

An in-depth tutorial can be found here.