41
votes

I have a simple Lambda function that asynchronously makes an API call and then returns data. 99% of the time this works great. Whenever the API takes longer than the Lambda's configured timeout, it gives an error, as expected. Now the issue is that any subsequent call to the Lambda function permanently gives me the timeout error.

 "errorMessage": "2016-05-14T22:52:07.247Z {session} Task timed out after 3.00 seconds"

In order to test that this was the case, I set the Lambda timeout to 3 seconds and added a way to trigger these two functions within the Lambda.

JavaScript

function now() { 
    return response.tell('success'); 
}

function wait() {
    setTimeout(function() { return response.tell('success'); }, 4000);
}

When I call the now function there are no problems. When I call the wait function I get the timeout error and then any subsequent calls to now give me the same error.

Is this expected behavior? I would think that subsequent calls to the Lambda function should work. I understand I can always increase the configured timeout, but I would rather not.

6
do you capture logs? anything in CloudWatch? – Frederic Henri
It's the errorMessage I posted above, over and over again. – jjbskir
I've been playing around with the same problem. I put a console.log as the FIRST LINE of my index handler file, BEFORE I do any library imports. That console.log hits on the subsequent timeouts but no logs after it! It's halting at the imports. I don't know what AWS is doing, but it can't load the external libs (or it takes a LONG time) for some reason. – duhseekoh
similar question I've mostly seen people asking about Node.js, but it's a problem in Python (3.6, at least) as well. Same thing @duhseekoh experienced: printing from the first line returns nothing. – Brett Beatty

6 Answers

20
votes

You should look at how your function handler works with a specific context property: callbackWaitsForEmptyEventLoop.

If that boolean is false, the setTimeout callback will never fire, because you might have answered/handled the Lambda invocation earlier and the container is frozen as soon as the response is sent. But if the value of callbackWaitsForEmptyEventLoop is true, Lambda waits for the event loop to empty before freezing, and your code will do what you are looking for.

Also, it's probably easier to handle everything via callbacks directly, without the need for "hand-written" timeouts, changing configuration timeouts, and so on...

E.g.

function doneFactory(cb) { // closure factory returning a callback function which knows about res (response)
  return function(err, res) {
    if (err) {
      return cb(JSON.stringify(err));
    }
    return cb(null, res);
  };
}

// you're going to call this Lambda function from your code
exports.handle = function(event, context, handleCallback) {

  // allows for using callbacks as finish/error-handlers
  context.callbackWaitsForEmptyEventLoop = false;

  doSomeAsyncWork(event, context, doneFactory(handleCallback));
};
10
votes

Well, if you defined 3 seconds in your function configuration, that timeout overrides whatever timers run inside your code. Make sure to increase the timeout in your Lambda function's configuration, then try wait() again and it should work.

6
votes

I've run into the same issue, in fact there are many cases when Lambda becomes unresponsive, e.g.:

  1. Parsing invalid JSON:

    exports.handler = function(event, context, callback)
    {
        var nonValidJson = "Not even Json";
        var jsonParse = JSON.parse(nonValidJson); // throws SyntaxError
    };

  2. Accessing a property of an undefined variable:

    exports.handler = function(event, context, callback)
    {
        var emptyObject = {};
        var value = emptyObject.Item.Key; // throws TypeError: Item is undefined
    };

  3. Not closing a MySQL connection after accessing RDS keeps the event loop busy, leads to a Lambda timeout, and then the function becomes non-responsive.

When I say unresponsive, it's literally not even loading, i.e. the first print inside the handler isn't printed, and Lambda just exits every run with a timeout:

exports.handler = function(event, context, callback)
{
    console.log("Hello there"); // never printed once the container is stuck
};

It's a bug, known by AWS team for almost a year:
https://forums.aws.amazon.com/thread.jspa?threadID=238434&tstart=0

Unfortunately it's still not fixed. After some tests it turned out that Lambda does in fact try to restart (reload the container?); there is just not enough time. If you set the timeout to 10s, Lambda starts working after ~4s of execution time, and then subsequent runs behave normally. I've also tried playing with the setting:

context.callbackWaitsForEmptyEventLoop = false;

and putting all 'require' statements inside the handler; nothing really worked. The only way to prevent Lambda from becoming dead is to set a bigger timeout; 10s should be more than enough as a workaround protection against this bug.

4
votes

In the AWS console, in the Lambda function's configuration, you have to change the default timeout from 3 seconds to something larger (5 min max).

1
votes

I think the problem is the IP addresses allowed in the AWS RDS inbound/outbound security group rules.

If you are testing for now, and your Node.js code is running in a local IDE rather than on AWS, do the following:

  1. Go to AWS RDS.
  2. Click on DB instances.
  3. Click the name of that DB instance.
  4. Go to the "Connect" section below, where you can find the security group rules.
  5. The security groups will be of type inbound and outbound.
  6. Click on each one; it will open a new window.
  7. There will again be two tabs, for inbound and outbound.
  8. Click on both, one after the other.
  9. Click "Edit".
  10. Select "Anywhere" instead of "Custom". P.S. Repeat for both inbound/outbound. (Note that "Anywhere" opens the database to all IPs, which is fine for testing but not for production.)

All Set.

0
votes

I just had to increase the timeout and the error subsided. I increased it to 5 seconds. This was okay for me because I wasn't going to use this Lambda in production.