4 votes

Problem

I'm using mssql v6.2.0 in a Lambda that is invoked frequently (consistently ~25 concurrent invocations under standard load).

I seem to be having trouble with connection pooling: I keep accumulating open DB connections, which overwhelm my database (SQL Server on RDS) and cause the Lambdas to time out waiting for query results.

I have read the docs, various similar questions, GitHub issues, etc., but nothing has worked for this particular issue.

Things I've Learned Already

  • I did learn that pooling across invocations is possible because variables declared outside the handler function are shared by invocations that land in the same container. This makes me think I should see just a few connections per container running my Lambda, but I don't know how many containers there are, so it's hard to verify. Bottom line: pooling should keep me from having tons and tons of open connections, so something isn't working right.
  • There are several different ways to use mssql and I have tried several of them. Notably, I've tried specifying max pool size with both large and small values but got the same results.
  • AWS recommends checking whether there's already a pool before trying to create a new one. I tried that to no avail. It was something like pool = pool || await createPool() (see the sketch just after this list).
  • I know that RDS Proxy exists to help with situations like this, but it appears it isn't offered (at this time) for SQL Server instances.
  • I do have the ability to slow down my data a bit, but this has a slight impact on the performance of the product as a whole, so I don't want to do that just to avoid solving a DB connections issue.
  • Left unchecked, I saw as many as 700 connections to the DB at once, leading me to think there's a leak of some kind and it's maybe not just a reasonable result of high usage.
  • I didn't find a way to shorten the TTL for connections on the SQL Server side, as recommended by this re:Invent slide. Perhaps that is part of the answer?
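
For reference, the reuse pattern I tried looked roughly like this. This is a sketch rather than my exact code: the env var names are made up, and the pool settings are mssql's documented tarn.js options with illustrative values.

    'use strict';

    const sql = require('mssql');

    // Lives outside the handler, so it's shared by every invocation
    // that lands on the same warm container
    let pool = null;

    async function getPool() {
        // Reuse the container's existing pool if one was already created
        if (pool) {
            return pool;
        }
        pool = await new sql.ConnectionPool({
            server: process.env['DbHost'],        // illustrative env names
            database: process.env['DbName'],
            user: process.env['DbUser'],
            password: process.env['DbPassword'],
            pool: {
                max: 5,                   // cap connections per container
                min: 0,
                idleTimeoutMillis: 5000   // release idle connections quickly
            }
        }).connect();
        return pool;
    }

    exports.handler = async function (event) {
        const db = await getPool();
        // ...queries as in the code below...
    };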

Code

'use strict';

/* Dependencies */
const sql = require('mssql');
const fs = require('fs').promises;
const path = require('path');
const AWS = require('aws-sdk');
const GeoJSON = require('geojson');

AWS.config.update({ region: 'us-east-1' });
const iotdata = new AWS.IotData({ endpoint: process.env['IotEndpoint'] });

/* Export */

exports.handler = async function (event) {

    let myVal = event.Records[0].Sns.Message;

    // Gather prerequisites in parallel
    let [
        query1,
        query2,
        pool
    ] = await Promise.all([
        fs.readFile(path.join(__dirname, 'query1.sql'), 'utf8'),
        fs.readFile(path.join(__dirname, 'query2.sql'), 'utf8'),
        sql.connect(process.env['connectionString'])
    ]);

    // Query DB for updated data
    let results = await pool.request()
        .input('MyCol', sql.TYPES.VarChar, myVal)
        .query(query1);

    // Prepare IoT Core message
    let params = {
        topic: `${process.env['MyTopic']}/${results.recordset[0].TopicName}`,
        payload: convertToGeoJsonString(results.recordset),
        qos: 0
    };

    // Publish results to MQTT topic
    try {
        await iotdata.publish(params).promise();
        console.log(`Successfully published update for ${myVal}`);

        //Query 2
        await pool.request()
            .input('MyCol1', sql.TYPES.Float, results.recordset[0]['Foo'])
            .input('MyCol2', sql.TYPES.Float, results.recordset[0]['Bar'])
            .input('MyCol3', sql.TYPES.VarChar, results.recordset[0]['Baz'])
            .query(query2);
        
    } catch (err) {
        console.log(err);
    }
};

/**
 * Convert query results to GeoJSON for API response
 * @param {Array|Object} data - The query results
 */
function convertToGeoJsonString(data) {
    let result = GeoJSON.parse(data, { Point: ['Latitude', 'Longitude']});
    return JSON.stringify(result);
}

Question

Please help me understand why I'm getting runaway connections and how to fix it. For bonus points: what's the ideal strategy for handling high DB concurrency on Lambda?

Ultimately this service needs to handle several times the current load -- I realize this becomes quite an intense load. I'm open to options like read replicas or other read-performance-boosting measures as long as they're compatible with SQL Server, and they're not just a cop-out for writing proper DB access code.

Please let me know if I can improve the question. I know there are similar ones out there but I have read/tried a lot of them and didn't find them to help. Thanks in advance!


2 Answers

3 votes

Answer

I finally found the answer after 4 days of effort. All I needed to do was scale up the DB. The code is actually fine as-is.

I went from db.t2.micro to db.t3.small (from 1 vCPU / 1 GB RAM to 2 vCPU / 2 GB RAM) at a net cost of roughly $15/mo.

Theory

In my case, the DB probably couldn't handle the processing (which involves several geographic calculations) for all my invocations at once. I did see CPU go up, but I assumed that was a result of the high open connections. When the queries slowed down, concurrent invocations piled up as Lambdas waited for results, eventually causing them to time out without closing their connections properly.

Comparisons:

db.t2.micro:

  • 200+ DB connections (goes up continuously if you leave it running)
  • 50+ concurrent invocations
  • 5000+ ms Lambda duration when things slow down, ~300 ms under no load

db.t3.small:

  • 25-35 DB connections (constantly)
  • ~5 concurrent invocations
  • ~33 ms Lambda duration <-- ten times faster!

CloudWatch Dashboard (screenshot)

Summary

I think this issue was confusing to me because it didn't smell like a capacity issue. Almost every time I've dealt with high DB connections in the past, it has been a code error. Having exhausted the options there, I thought it was "some magical gotcha of serverless" that I needed to understand. In the end it was as simple as changing DB tiers. My takeaway is that DB capacity issues can manifest in ways other than high CPU and memory usage, and that high connection counts may be the result of something besides a code bug.

Update (4 months in)

This continues to work very well. I'm impressed that doubling the DB resources seems to have given more than 2x the performance. Now, when the DB connections get really high due to load (or a temporary bug during development), even over 1k, the DB handles it. I'm not seeing any issues at all with DB connections timing out or with the database getting bogged down under load. Since the original time of writing I've added several CPU-intensive queries to support reporting workloads, and the DB continues to handle all of these loads simultaneously.

We've also deployed this setup to production for one customer since the time of writing and it handles that workload without issue.

0 votes

A connection pool is no good on Lambda at all; what you can do instead is reuse connections.

The trouble is that every Lambda execution opens its own pool, which will just flood the DB like you're seeing. What you want is one connection per Lambda container. You can use a DB class like the one below (this is rough, but let me know if you've got questions):

    // Assumes a promise-based mysql client, e.g. mysql2/promise
    // (with mysql2/promise, query() resolves to [rows, fields])
    import mysql from 'mysql2/promise'

    export default class MySQL {

        constructor() {
            this.connection = null
        }

        async getConnection() {
            // Reuse the live connection if this container already has one
            // (.state is a classic-mysql-driver property; undefined with mysql2)
            if (this.connection === null || this.connection.state === 'disconnected') {
                return this.createConnection()
            }
            return this.connection
        }

        async createConnection() {
            this.connection = await mysql.createConnection({
                host: process.env.dbHost,
                user: process.env.dbUser,
                password: process.env.dbPassword,
                database: process.env.database,
            })
            return this.connection
        }

        async query(sql, params) {
            await this.getConnection()

            const [err, rows] = await to(this.connection.query(sql, params))

            if (err) {
                console.log(err)
                return false
            }
            return rows
        }
    }

    // Go-style wrapper: resolves to [null, data] on success, [err] on failure
    function to(promise) {
        return promise.then((data) => {
            return [null, data]
        }).catch(err => [err])
    }

What you need to understand is that a Lambda execution is a little virtual machine that does a task and then stops. It does sit there for a while, though, and if anyone else needs it, it gets reused, along with the container and its connection. Since a container only handles a single task at a time, there's never more than one connection per Lambda container.
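
To tie it together, here's a rough usage sketch. It assumes the class above lives in db.js and a promise-based client like mysql2/promise; the table name and event field are just examples.

    import MySQL from './db'

    // Created once per container, outside the handler, so every warm
    // invocation reuses the same connection instead of opening a new one
    const db = new MySQL()

    export const handler = async (event) => {
        // query() lazily opens the connection on first use and reuses
        // it on every warm invocation after that
        const rows = await db.query('SELECT * FROM users WHERE id = ?', [event.userId])
        return rows
    }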

Hope this helps; let me know if you need any more detail! Oh, and welcome to Stack Overflow. That's a well-constructed question.