16 votes

I have to use AWS Lambda in various stacks of my application, so I have created a generic CloudFormation template to create a Lambda function. This template can be included in another CloudFormation template as a nested stack for further use.

# Basics
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation Template to create a lambda function for java 8 or nodejs

# Parameters
Parameters:
  FunctionName:
    Type: String
    Description: Function Name
  HandlerName:
    Type: String
    Description: Handler Name
  FunctionCodeS3Bucket:
    Type: String
    Description: Name of s3 bucket where the function code is present
    Default: my-deployment-bucket
  FunctionCodeS3Key:
    Type: String
    Description: Function code present in s3 bucket
  MemorySize:
    Type: Number
    Description: Memory size between 128 MB and 1536 MB, in multiples of 64 MB
    MinValue: '128'
    MaxValue: '1536'
    Default: '128'
  RoleARN:
    Type: String
    Description: Role ARN for this function
  Runtime:
    Type: String
    Description: Runtime environment name, e.g. nodejs6.10, java8
    AllowedPattern: ^(nodejs6.10|nodejs4.3|java8)$
    ConstraintDescription: must be a valid environment (nodejs6.10|nodejs4.3|java8) name.
  Timeout:
    Type: Number
    Description: Timeout in seconds
    Default: '3'
  Env1:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env2:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env3:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env4:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''

# Conditions
Conditions:
  Env1Exist: !Not [ !Equals [!Ref Env1, '']]
  Env2Exist: !Not [ !Equals [!Ref Env2, '']]
  Env3Exist: !Not [ !Equals [!Ref Env3, '']]
  Env4Exist: !Not [ !Equals [!Ref Env4, '']]

# Resources
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: !Ref 'FunctionCodeS3Bucket'
        S3Key: !Ref 'FunctionCodeS3Key'
      Description: !Sub 'Lambda function for: ${FunctionName}'
      Environment:
        Variables:
          'Fn::If':
            - Env1Exist
            -
              - !Select [0, !Split ["|", !Ref Env1]]: !Select [1, !Split ["|", !Ref Env1]]
              - 'Fn::If':
                - Env2Exist
                - !Select [0, !Split ["|", !Ref Env2]]: !Select [1, !Split ["|", !Ref Env2]]
                - !Ref "AWS::NoValue"
              - 'Fn::If':
                - Env3Exist
                - !Select [0, !Split ["|", !Ref Env3]]: !Select [1, !Split ["|", !Ref Env3]]
                - !Ref "AWS::NoValue"
              - 'Fn::If':
                - Env4Exist
                - !Select [0, !Split ["|", !Ref Env4]]: !Select [1, !Split ["|", !Ref Env4]]
                - !Ref "AWS::NoValue"
            - !Ref "AWS::NoValue"
      FunctionName: !Ref 'FunctionName'
      Handler: !Ref 'HandlerName'
      MemorySize: !Ref 'MemorySize'
      Role: !Ref 'RoleARN'
      Runtime: !Ref 'Runtime'
      Timeout: !Ref 'Timeout'
Outputs:
  LambdaFunctionARN:
    Value: !GetAtt 'LambdaFunction.Arn'

I want to inject the environment variables into the function; they will be passed from the parent stack as below:

# Resources
Resources:
  # Lambda for search Function
  ChildStackLambdaFunction:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: <<REF_TO_ABOVE_LAMBDA_STACK.yml>>
      Parameters:
        FunctionName: test
        HandlerName: 'index.handler'
        FunctionCodeS3Bucket: <<BUCKET_NAME>>
        FunctionCodeS3Key: <<FUNCTION_DEPLOYMENT_NAME>>
        MemorySize: '256'
        RoleARN: <<ROLE_ARN>>
        Runtime: nodejs6.10
        Timeout: '60'
        Env1: !Sub 'AWS_REGION|${AWS::Region}'

When I deploy this stack, I get the error below. Can anybody help me resolve it?

Template format error: [/Resources/LambdaFunction/Type/Environment/Variables/Fn::If/1/0] map keys must be strings; received a map instead

The approach of passing a key-value parameter this way is taken from here.


2 Answers

9 votes

So, I tried many ways to achieve this, but we cannot pass a dynamic key-value pair from the parent stack to the nested Lambda stack. I got confirmation from AWS Support that this is not possible at the moment.

They suggested another way, which I liked and implemented; it is described below:

Pass the key-value pairs as a JSON string and parse it appropriately in the Lambda function.

Environment:
  Variables:
    Env1: '{"REGION": "REGION_VALUE", "ENDPOINT": "http://SOME_ENDPOINT"}'  

This suggestion adds a little programming overhead to parse the JSON string, but at the moment I recommend it as the solution to the above problem.
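
For illustration, a minimal sketch of how this can be wired up, with placeholder names I chose (the fixed variable name CONFIG is not part of the original templates): the parent stack builds the JSON string with !Sub, and the nested template maps it onto a single, statically named environment variable.

# Parent stack: pass the whole configuration as one JSON string
Env1: !Sub '{"REGION": "${AWS::Region}", "ENDPOINT": "http://SOME_ENDPOINT"}'

# Nested lambda template: expose it under a fixed key
Environment:
  Variables:
    CONFIG: !Ref Env1

Only the variable's value is dynamic here, so the "map keys must be strings" error never comes up; the handler then parses the value once at startup (e.g. JSON.parse(process.env.CONFIG) in Node.js).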

0 votes

I achieved this with the PyPlate macro. Take the environment variables as a comma-delimited list parameter:

Parameters:
  EnvVars:
    Type: CommaDelimitedList
    Description: Comma separated list of Env vars key=value pairs (key1=value1,key2=value2)

and use it in the Lambda Resource:

  Environment:
    Variables: |
      #!PyPlate
      # Build the Variables map from the key=value pairs passed in EnvVars
      output = dict()
      for envVar in params['EnvVars']:
        key, value = envVar.split('=')
        output.update({key: value})
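
For completeness, a sketch of how this fits together, assuming the PyPlate macro is already deployed in the account under the name PyPlate (the parameter values below are placeholders): the template that contains the inline Python must declare the transform, and the parent stack passes the pairs as an ordinary comma-separated string.

# In the template that contains the #!PyPlate snippet
Transform: [PyPlate]

# In the parent stack, as the nested stack's parameter
EnvVars: !Sub 'REGION=${AWS::Region},LOG_LEVEL=debug'

The macro runs when the template is processed (during change set creation), so by the time CloudFormation validates the template the Variables block has already been expanded into plain string keys and values.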