I'm currently designing a system for processing uploaded files.

The files are uploaded through a LAMP web frontend and must be processed through several stages, some of which are sequential and others of which may run in parallel.

A few key points:

  • The clients uploading the files only care about safely delivering the files, not the results of the processing, so it can be completely asynchronous.
  • The files are at most 50 kB in size
  • The system must scale up to processing over a million files a day
  • It is critical that no files are lost or go unprocessed
  • My assumption is MySQL, but I have no issue with NoSQL if it would offer an advantage.

My initial idea was to have the front end put the files straight into a MySQL DB and then have a number of worker processes poll the database, setting flags as they complete each step. After some rough calculations I realised that this wouldn't scale, as the polling workers would start to cause locking contention on the upload table.

After some research it looks like Gearman might be the solution to the problem. The workers can register with the Gearman server and can poll for jobs without crippling the DB.

What I am currently puzzling over is how to dispatch jobs in the most efficient manner. There are three ways I can see to do this:

  • Write a single dispatcher to poll the database and then send jobs to Gearman
  • Have the upload process fire off an asynchronous Gearman job when it receives a file
  • Use the Gearman MySQL UDF extension to make the DB fire off jobs when files are inserted

The first approach will still hammer the DB somewhat, but it could trivially recover from a failure. The latter two approaches would seem to require enabling Gearman queue persistence to recover from faults, but I am concerned that if I enable this I will lose the raw speed that attracts me to Gearman and simply shift the DB bottleneck downstream.
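For context, the first approach amounts to a single dispatcher scanning for rows that have not yet been handed to Gearman, submitting a background job for each, and marking the row so a restarted dispatcher can resume from the unmarked rows. A minimal sketch of that loop (the `rows` list stands in for the upload table, `submit_job` stands in for an asynchronous Gearman client call; both names and the `queued_at` column are assumptions, not from the question):

```python
import time


def dispatch_pending(rows, submit_job, now=time.time):
    """Single-dispatcher sketch: scan for uploads not yet queued,
    hand each to the job server, then record a queued timestamp
    so recovery after a crash is just 'resume from unmarked rows'.

    rows       -- stand-in for the upload table (list of dicts here)
    submit_job -- stand-in for an async Gearman submit (hypothetical)
    """
    queued = 0
    for row in rows:
        if row.get("queued_at") is None:
            submit_job("process_file", row["id"])  # fire-and-forget job
            row["queued_at"] = now()               # recovery marker in the DB
            queued += 1
    return queued


# Usage: only the row without a queued_at marker is dispatched.
uploads = [{"id": 1, "queued_at": None}, {"id": 2, "queued_at": 1.0}]
sent = []
dispatch_pending(uploads, lambda fn, arg: sent.append((fn, arg)))
```

The trade-off the question describes is visible here: the scan itself is the DB load, but the `queued_at` marker is also the entire recovery story.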

Any advice on which of these approaches would be the most efficient (or even better real world examples) would be much appreciated.

Also feel free to pitch in if you think I'm going about the whole thing the wrong way.


2 Answers


This has been open for a little while now so I thought I would provide some information on the approach that I took.

I create a Gearman job every time a file is uploaded for a "dispatch" worker, which understands the sequence of processing steps required for each file. The dispatcher queues Gearman jobs for each of the processing steps.

Any job that completes writes a completion timestamp back to the DB and calls the dispatcher, which can then queue any follow-on tasks.

Writing a timestamp for each job completion means the system can recover its queues if processing is missed or fails, without the burden of persistent queues.
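The dispatcher's core is just a step graph plus two side effects. A hedged sketch of that logic, with the step names, the `PIPELINE` structure, and both callbacks being illustrative assumptions (the real versions would be a DB write and an asynchronous Gearman submit):

```python
# Hypothetical pipeline: None means "nothing done yet", and a step
# mapping to two successors means those two may run in parallel.
PIPELINE = {
    None:         ["virus_scan"],
    "virus_scan": ["extract", "thumbnail"],  # parallel branches
    "extract":    ["index"],
    "thumbnail":  [],
    "index":      [],
}


def dispatch(completed_step, record_timestamp, queue_job):
    """Called once by the upload handler (completed_step=None) and
    again by each worker as it finishes its step.

    record_timestamp -- stand-in for the DB completion-timestamp write
    queue_job        -- stand-in for an async Gearman submit
    """
    if completed_step is not None:
        record_timestamp(completed_step)  # recovery marker, per the answer
    for nxt in PIPELINE[completed_step]:
        queue_job(nxt)                    # queue every follow-on task


# Usage: the upload handler kicks off the first step, then each
# completion fans out to its successors.
timestamps, jobs = [], []
dispatch(None, timestamps.append, jobs.append)
dispatch("virus_scan", timestamps.append, jobs.append)
```

A recovery sweep then only has to compare the timestamps table against `PIPELINE` to re-queue anything missed, which is why persistent queues aren't needed.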


I would save the files to disk, then send the filename to Gearman. As each part of the process completes, it generates another message for the next part of the process; you could move the file into a new work-in-progress directory for the next stage to work on.