I have a Lambda function that is trying to connect to an RDS PostgreSQL DB. Since I use https://serverless.com/ to deploy the function (it sets up a CloudFormation stack), the Lambda function ends up in a separate VPC from the RDS DB.
Not a big issue. If you read https://docs.aws.amazon.com/lambda/latest/dg/services-rds-tutorial.html, you see you can set up the serverless.yml file (as below) with the subnet and security group IDs, and then give the Lambda function a role that has AWSLambdaVPCAccessExecutionRole (I gave it full access for the VPC and for Lambda). If you don't do this, you get ECONNREFUSED.
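For reference, this is roughly what the VPC-access permissions look like if granted inline via `iamRoleStatements` instead of attaching the managed AWSLambdaVPCAccessExecutionRole policy — a sketch only; the `ec2:*NetworkInterface*` actions below are the ENI permissions that managed policy grants:

```yaml
# Sketch: inline equivalent of the ENI permissions from
# AWSLambdaVPCAccessExecutionRole (the managed policy also grants
# CloudWatch Logs access, which Serverless adds by default).
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ec2:CreateNetworkInterface
        - ec2:DescribeNetworkInterfaces
        - ec2:DeleteNetworkInterface
      Resource: "*"
```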
But even after doing this I get a 3D000 error, which says that the database named "ysgdb" is not found. But in RDS I can see it is there and it is public.
The code works fine when the DB is set to a local PostgreSQL.
Any ideas where to go next?
# serverless.yml
service: lambdadb

provider:
  name: aws
  stage: dev
  region: us-east-2
  runtime: nodejs10.x

functions:
  dbConn:
    # this is formatted as <FILENAME>.<HANDLER>
    handler: main.handler
    vpc:
      securityGroupIds:
        - sg-a1e6f4c3
      subnetIds:
        - subnet-53b45038
        - subnet-4a2a7830
        - subnet-1469d358
    events:
      - http:
          path: lambdadb
          method: post
          cors: true
      - http:
          path: lambdadb
          method: get
          cors: true
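As an aside (not part of the fix), the DB settings could also be injected via `environment:` in serverless.yml instead of being hard-coded in server.js. A sketch only — the `DB_*` variable names are hypothetical, not something Serverless sets for you:

```yaml
# Hypothetical environment variables for the DB connection;
# read them in server.js via process.env.
functions:
  dbConn:
    handler: main.handler
    environment:
      DB_HOST: ysgdb.cxeokcheapqj.us-east-2.rds.amazonaws.com
      DB_NAME: ysgdb
      DB_USER: postgres
      DB_PORT: "5432"
```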
REPLY
{
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json"
  },
  "body": {
    "name": "error",
    "length": 90,
    "severity": "FATAL",
    "code": "3D000",
    "file": "postinit.c",
    "line": "880",
    "routine": "InitPostgres"
  },
  "isBase64Encoded": false
}
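For context, the `code` field in that reply is a standard PostgreSQL SQLSTATE; 3D000 is `invalid_catalog_name`, i.e. the server accepted the connection but has no database with the configured name. A small sketch (the helper name and message wording are my own; the code meanings come from PostgreSQL's SQLSTATE table):

```javascript
// Sketch: map a PostgreSQL SQLSTATE (the "code" field in the error above)
// to a human-readable hint. Helper name and wording are hypothetical;
// the meanings of the codes are from the PostgreSQL error-code appendix.
function explainPgError(code) {
  const hints = {
    '3D000': 'invalid_catalog_name: the database named in the connection config does not exist on that server',
    '28000': 'invalid_authorization_specification: the user is not allowed to connect',
    '28P01': 'invalid_password: password authentication failed for the given user'
  };
  return hints[code] || `unrecognized SQLSTATE ${code}`;
}

console.log(explainPgError('3D000'));
```

Note the distinction this surfaces: a 3D000 means networking and authentication already worked, so the remaining question is whether a database literally named "ysgdb" exists on that instance (the RDS instance identifier is not automatically a database name).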
server.js
console.log(`PostgreSQL GET Function`)

const { Client } = require('pg')

const local = {
  user: "postgres",
  host: "localhost",
  database: "m3_db",
  password: "xxxx",
  port: 5432
}

const aws = {
  user: "postgres",
  host: "ysgdb.cxeokcheapqj.us-east-2.rds.amazonaws.com",
  database: "ysgdb",
  password: "xxxx",
  port: 5432
}
let response = {
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json"
  },
  "body": 'none',
  "isBase64Encoded": false
}

exports.handler = async (event, context, callback) => {
  const c = aws // switch to `local` for the local DB
  console.log(`aws credentials: ${JSON.stringify(c)}`)
  const client = new Client({
    user: c.user,
    host: c.host,
    database: c.database,
    password: c.password,
    port: c.port
  })
  try {
    await client.connect();
    console.log(`DB connected`)
  } catch (err) {
    console.error(`DB Connect Failed: ${JSON.stringify(err)}`)
    response.body = err
    callback(null, response)
    return // without this, the query below still runs after a failed connect
  }
  client.query('SELECT NOW()', (err, res) => {
    if (err) {
      console.log('Database ' + err)
      response.body = err
      callback(null, response)
    } else {
      response.body = res
      callback(null, response)
    }
    client.end()
  })
}
// quick local smoke test, skipped when deployed
if (process.env.USERNAME == 'ysg4206') {
  exports.handler(null, null, (_, txt) => { console.log(`callback: ${JSON.stringify(txt)}`) })
}
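While debugging, switching between `aws` and `local` by editing the file is error-prone; the config could instead be built from environment variables. A minimal sketch — the `DB_*` names are hypothetical and would have to be set in serverless.yml or the shell:

```javascript
// Sketch: build the pg Client config from environment variables,
// falling back to the local defaults used above. The DB_* variable
// names are hypothetical, not provided by Serverless or Lambda.
function buildDbConfig(env) {
  return {
    user: env.DB_USER || 'postgres',
    host: env.DB_HOST || 'localhost',
    database: env.DB_NAME || 'm3_db',
    password: env.DB_PASSWORD || 'xxxx',
    port: Number(env.DB_PORT || 5432)
  };
}

// e.g. const client = new Client(buildDbConfig(process.env))
```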
Does the user postgres have access to that db? Because if the connection isn't timing out, it looks like a db permission issue. – pepo