I use shared hosting and had a similar issue. If your hosting service allows the PHP function shell_exec(), you could do this:
protected function schedule(Schedule $schedule)
{
    if (! strstr(shell_exec('ps xf'), 'php artisan queue:work')) {
        $schedule->command('queue:work --timeout=60 --tries=1')->everyMinute();
    }
}
Your cron job seems OK. By the way, if your hosting server is down for 24 hours, you may want to consider another host, my friend.
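For reference, the usual scheduler cron entry looks like this; the project path and PHP binary below are placeholders, so adjust them to your host:

```shell
# Run the Laravel scheduler every minute (path and PHP binary are placeholders).
* * * * * cd /home/user/your-project && php artisan schedule:run >> /dev/null 2>&1
```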
queue:work is a long-running process. The check above ensures it is running on your server; it listens to your queue and processes the jobs. This also means that if you make changes to your production files, the worker will not pick them up. Have a look at my top -ac output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2398733 user 20 0 466m 33m 12m S 0.0 0.1 0:03.15 /opt/alt/php72/usr/bin/php artisan queue:work --timeout=60 --tries=1
2397359 user 20 0 464m 33m 12m S 0.0 0.1 0:03.04 /usr/local/bin/php /home/user/booklet/artisan schedule:run
2398732 user 20 0 105m 1308 1136 S 0.0 0.0 0:00.00 sh -c '/opt/alt/php72/usr/bin/php' 'artisan' queue:work --timeout=60 --tries=1 >> '/home/user/booklet/storage/queue.log' 2>&1
As you can see, the worker is at the top; another process simply writes everything it does to a log file. After uploading new changes to your production server, you have to kill 2398733. The process will restart by itself in less than 5 minutes, thanks to the schedule:run cron job.
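You can find and kill the worker in one step after a deploy; this is a minimal sketch assuming pkill is available on your host, matching the worker by its command line:

```shell
# Kill any running queue worker so the scheduler restarts a fresh one.
# '-f' matches against the full command line; '|| true' keeps the exit
# status clean when no worker is currently running.
pkill -f 'artisan queue:work' || true
```

Alternatively, Laravel ships a `php artisan queue:restart` command that signals the worker to exit gracefully after finishing its current job.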
Update October 2019
protected function schedule(Schedule $schedule)
{
    if (! strstr(shell_exec('ps xf'), 'php artisan queue:work')) {
        $schedule->command('queue:work --timeout=60 --tries=1')->withoutOverlapping();
    }
}
The ->withoutOverlapping() method pushes the command into the background and ensures that the artisan schedule:run process exits properly.