I have a Laravel queued job that extracts links from a webpage. The timeout for the queue listener, configured through Laravel Forge, is 240 seconds (4 minutes). However, jobs are taking up to 45 minutes to run.
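As far as I know, the only timeout in play is the one set at the listener level in Forge, though I understand a timeout can also be set per job via a $timeout property on the job class. A minimal sketch of what I mean (ExtractLinks is just a placeholder name, not my actual class):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ExtractLinks implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // Per-job timeout in seconds (illustrative; my actual 240s timeout is set in Forge)
    public $timeout = 240;

    public function handle()
    {
        // link extraction code (shown further down) lives here
    }
}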
My queue settings are:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 350,
],
And there are multiple job processes running - up to 35 of them. As you can imagine, this is eating up a lot of server memory. The processes just seem to be hanging around. The command for these processes, as shown in top, is:
php7.1 artisan queue:work redis --once --queue=linkqueue --delay=0 --memory=128 --sleep=10 --tries=1 --env=local
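For completeness, the job is pushed onto the linkqueue queue when it is dispatched, roughly like this (ExtractLinks is just the placeholder name from above, and the constructor argument is simplified):

// Push the job onto the "linkqueue" queue that the worker above is processing
dispatch((new ExtractLinks($html))->onQueue('linkqueue'));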
How can a job run for 45 minutes if the timeout is 240 seconds? Why are there so many processes - shouldn't there just be one?
Also, any ideas why a script for extracting links should take 45 minutes to run?!
The script does work; in most cases it runs as expected, it just takes ages. There are no errors reported or logged as far as I can see.
Code in the job is:
$dom = new DOMDocument;
$dom->loadHTML($html);

// Grab every anchor tag and store its href as a new URL record
$links = $dom->getElementsByTagName('a');

foreach ($links as $a) {
    $link = $a->getAttribute('href');

    $newurl = new URL;
    $newurl->url = $link;
    $newurl->save();
}
Update: Another, simpler job runs just fine, in under a second. It is specifically the link job above that takes tens of minutes. Could it be a RAM issue or something? Is there anything else I can do to diagnose the problem? When run as part of a console command, the extract-links function itself runs in 1 or 2 seconds. It is only on the queue that it freaks out.
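One thing I plan to try is logging how long each step of handle() takes, to see whether the time goes into parsing the HTML or into the database saves. A rough sketch of what I have in mind, assuming the HTML is passed into the job and stored as $this->html (placeholder names again):

use Illuminate\Support\Facades\Log;

// handle() on the job class (placeholder), with timing logged around each step
public function handle()
{
    $start = microtime(true);

    $dom = new DOMDocument;
    $dom->loadHTML($this->html);
    Log::info('loadHTML took ' . round(microtime(true) - $start, 2) . 's');

    $links = $dom->getElementsByTagName('a');
    Log::info('Found ' . $links->length . ' links');

    foreach ($links as $a) {
        $newurl = new URL;
        $newurl->url = $a->getAttribute('href');
        $newurl->save();
    }

    Log::info('Job finished in ' . round(microtime(true) - $start, 2) . 's');
}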