2
votes

I am running a Gearman worker setup to distribute background jobs across multiple workers. To monitor those background jobs and restart them if they crash, we are using supervisord as the process manager.

The gearman worker code is pretty much the same as the official example:

$worker = new GearmanWorker();
$worker->addServer($config["gearman.host"],$config["gearman.port"]);
$worker->addFunction("config_job", "run_config_job");

while ($worker->work());
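For reference, a minimal supervisord program entry for a worker like this might look as follows (the program name and script path are placeholders, not taken from the question):

```ini
[program:gearman-worker]
command=php /path/to/worker.php
numprocs=1
autostart=true
autorestart=true          ; restart the worker whenever it exits or crashes
stopwaitsecs=300          ; give a running job time to finish before SIGKILL
```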

For the workers, I was expecting CPU usage to be high during job execution and to drop back down while the worker waits for the next job. Interestingly, though, for long-running worker processes the CPU usage keeps increasing over time.

Does anyone have any idea what the main reason behind this incremental CPU usage over time is?

Also, since the tasks run on AWS EC2 small instances, how many workers can effectively run in parallel, on average, on a single workers-only instance?

1

1 Answer

2
votes

PHP is not particularly well designed for this use case, so it may be prudent to restart the workers at a regular interval. If you are not completely cleaning up after each job, you may be running into what is essentially a memory leak.
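One common way to do this (a sketch, not the poster's code; the job limit and memory threshold are arbitrary numbers) is to let the worker exit after a bounded number of jobs, or once its memory footprint grows past a threshold, and rely on supervisord's `autorestart` to bring up a fresh process:

```php
<?php
$worker = new GearmanWorker();
$worker->addServer($config["gearman.host"], $config["gearman.port"]);
$worker->addFunction("config_job", "run_config_job");

$maxJobs   = 100;               // arbitrary: recycle after this many jobs
$maxMemory = 64 * 1024 * 1024;  // arbitrary: or once we exceed 64 MB

$jobs = 0;
while ($worker->work()) {
    if (++$jobs >= $maxJobs || memory_get_usage(true) >= $maxMemory) {
        // Exit cleanly; supervisord restarts the worker as a fresh process,
        // releasing any memory the jobs failed to free.
        exit(0);
    }
}
```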

Small instances have a single core, which means they will most effectively run one job at a time. However, if your jobs spend time waiting, for example on an API response, you may be able to run several jobs at the same time.