0 votes

We ran a quick experiment on AWS Lambda to confirm the concurrent execution limit implied by our VPC IP limit. Our account's VPC has around 500 IP addresses available. In general, if a Lambda function runs inside a VPC, the available IPs can be exhausted when more Lambdas run concurrently than there are IPs. Below are the experiment details and the Lambda functions.

We wrote a Lambda caller function (see #1 below) that invokes a Lambda called function (see #2) configured inside the VPC. We invoked the called function around 999 times and made sure all of these invocations ran concurrently. But surprisingly, all of the Lambdas finished without any complaint.

The big question is: if we have a 500-IP limit in our VPC and we ran the Lambda 999 times inside the VPC, why didn't we hit an IP availability issue? Any ideas?

1. Lambda Caller Function (Node.js 10.x)

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// How long the called function should sleep (3 minutes), passed in the payload.
const duration = 3 * 60 * 1000;

exports.handler = async (event) => {

    const lambdaPosterParams = {
        FunctionName: 'testCalledFunction',
        InvocationType: 'Event', // asynchronous invoke: returns as soon as the event is queued
        LogType: 'None'
    };

    // Declared outside the try block so the catch handler can log it.
    let invokationNumber = 0;

    try {
        for (invokationNumber = 0; invokationNumber < 999; invokationNumber++) {
            console.log("Invoking lambda #" + invokationNumber);
            lambdaPosterParams.Payload = JSON.stringify({
                'invokationNumber': invokationNumber,
                'duration': duration,
                'tableName': 'testload2'
            });
            // 'Event' invokes return quickly, so awaiting each one still lets
            // the called functions pile up concurrently.
            const posterResponse = await lambda.invoke(lambdaPosterParams).promise();
            console.log("Poster Lambda invoked", JSON.stringify(posterResponse));
        }
    } catch (error) {
        console.error('Error invoking lambda #' + invokationNumber, error);
        throw error;
    }

    console.log("All lambdas invoked");
    return {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
};
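As an aside, because the caller's loop awaits each invoke() before starting the next, the invocations are only near-simultaneous. A fully parallel variant could fire everything with Promise.all. This is a minimal sketch; buildPayloads and invokeAll are illustrative names, not part of the question's code, and the invoker is injected so it can be any lambda.invoke-style call:

```javascript
// Illustrative helper (hypothetical name): mirrors the payload shape
// used by the caller function above.
function buildPayloads(count, duration, tableName) {
    const payloads = [];
    for (let invokationNumber = 0; invokationNumber < count; invokationNumber++) {
        payloads.push(JSON.stringify({ invokationNumber, duration, tableName }));
    }
    return payloads;
}

// invoker is any (params) => Promise function, e.g.
//   (params) => lambda.invoke(params).promise()
// Promise.all starts every invoke before waiting on any of them.
function invokeAll(invoker, payloads, functionName) {
    return Promise.all(payloads.map((Payload) => invoker({
        FunctionName: functionName,
        InvocationType: 'Event',
        LogType: 'None',
        Payload
    })));
}
```

In the handler this would be called as, for example, `await invokeAll((p) => lambda.invoke(p).promise(), buildPayloads(999, duration, 'testload2'), 'testCalledFunction');`. With 'Event' invocation type the practical difference is small, since each invoke returns as soon as the event is queued.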

2. Lambda Called Function (Node.js 10.x)

const AWS = require('aws-sdk');

const dbConnection = new AWS.DynamoDB({
    region: process.env.AWS_REGION,
    apiVersion: process.env.AWS_DYNAMODB_API_VERSION
});

exports.handler = async (event) => {

    const startTime = new Date();
    const insertData = {
        'TableName': event.tableName, // table name supplied in the caller's payload
        'Item': {
            'invokationNumber': {'N': event.invokationNumber.toString()},
            'startTime': {'S': startTime.toUTCString()},
        }
    };

    // Record the start of this invocation.
    await dbConnection.putItem(insertData).promise();

    console.log(`Event #${event.invokationNumber}. Sleeping...`);

    // Sleep for the caller-supplied duration (3 minutes) so invocations overlap.
    await timeout(event.duration);

    console.log('Waking up...');

    const endTime = new Date();
    insertData.Item.endTime = {'S': endTime.toUTCString()};
    insertData.Item.duration = {'N': (endTime.getTime() - startTime.getTime()).toString()};

    // Overwrite the same item, now including the end time and elapsed duration.
    await dbConnection.putItem(insertData).promise();

    return {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };

    function timeout(ms) {
        return new Promise(resolve => setTimeout(resolve, ms));
    }
};
Interesting! You could try adding the function's IP address to the Lambda debug output to see what ENI it thinks it is using. (Not that I'm sure how to retrieve that information!) - John Rotenstein
I would also suggest adding a time-delay within the Lambda function to make sure it runs for several seconds, which will increase the probability of the functions running concurrently. Otherwise, some will complete before new ones start. - John Rotenstein
I have kept a 3-minute delay. When the Lambda is invoked, it inserts a row (count, start time, end time, elapsed time) into a DynamoDB table. After being invoked, the Lambda sleeps for 3 minutes, and when it wakes up it updates the same row with the end time and elapsed time. Within a minute I could see 999 rows inserted with a count and start time. After 4 minutes, all rows were updated with the end time and elapsed time. - Narendra Verma

1 Answer

2 votes

ENIs are not needed at a rate of one per concurrent invocation, unless your function is configured to be allocated the maximum allowable 3 GB of memory per invocation.

At 1.5 GB you can have two concurrent invocations, at 1 GB it's three, at 512 MB it's six and at 128 MB it's approximately 24 concurrent invocations per Elastic Network Interface (ENI).

Approximately.

This is because your containers are allocated on m-class EC2 instances (or something extremely comparable), with one ENI per instance, and each instance has about 3 GB of usable memory available for containers. The smaller the memory assigned to the function, the more containers fit per (hidden, managed) EC2 instance, so the fewer ENIs are required for a given level of concurrency.
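The packing described above boils down to simple arithmetic. This is a rough sketch only; the 3 GB-per-instance figure is the assumption stated above, and the real allocation is approximate:

```javascript
// Approximate concurrent invocations sharing one ENI, assuming each hidden
// instance exposes one ENI and ~3 GB (3072 MB) of container memory.
function invocationsPerEni(functionMemoryMB) {
    const instanceMemoryMB = 3 * 1024; // assumption from the answer above
    return Math.max(1, Math.floor(instanceMemoryMB / functionMemoryMB));
}
```

This reproduces the figures above: invocationsPerEni(128) gives 24, invocationsPerEni(512) gives 6, invocationsPerEni(1024) gives 3, and invocationsPerEni(1536) gives 2.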

If your Lambda function accesses a VPC, you must make sure that your VPC has sufficient ENI capacity to support the scale requirements of your Lambda function. You can use the following formula to approximately determine the ENI requirements.

Projected peak concurrent executions * (Memory in GB / 3GB)

https://docs.aws.amazon.com/lambda/latest/dg/vpc.html
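Applied to the question's numbers, the documented formula shows why 999 concurrent invocations never exhausted roughly 500 IPs unless the function was near the 3 GB ceiling. A small sketch (`projectedEnis` is an illustrative name, not an AWS API):

```javascript
// ENI estimate from the AWS docs formula:
//   projected peak concurrent executions * (memory in GB / 3 GB), rounded up.
function projectedEnis(peakConcurrency, functionMemoryMB) {
    return Math.ceil(peakConcurrency * (functionMemoryMB / 1024) / 3);
}

// For the experiment's 999 concurrent invocations:
//   at 128 MB  -> 42 ENIs  (well under ~500 available IPs)
//   at 1024 MB -> 333 ENIs (still under)
//   at 3072 MB -> 999 ENIs (exceeds the ~500 IPs)
```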

Configure your function for 3 GB and your original expectation should be confirmed: you'll run out of ENIs, either because of a lack of IP addresses or because of your account's limit on the maximum number of ENIs in the region.