2 votes

A quick question about work balancing.

I have a program that processes files in parallel. Let's say the size of a file is an approximate measure of how long it will take to process. All files are known in advance.

We have N nodes that can work on the files. How do I distribute the files so that each node ends up with an amount of work as close to the average as possible?

The idea is pretty simple and I have a couple of approaches in mind, but this really seems like a classical problem with a well-known best solution.
I just don't know what it's called.

Does someone know this?

Thanks!

EDIT: OK, sorry, I omitted a lot of information. I am working on an MPI implementation, a standard master-slave system. One master node examines the target directory, picks up the files that need to be processed, and then assigns files to slave MPI tasks so they can do their part in parallel.

The number of slave nodes is less than 32.
The number of target files is less than 10000.
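
For illustration, here is a rough sketch of that setup in mpi4py (not my actual code; the directory name and the round-robin split are just placeholders for whatever balancing scheme I end up with):

    import os
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()  # assumes at least one slave (size > 1)

    if rank == 0:
        # Master: collect (filename, size) pairs from the target directory.
        target_dir = "./target"  # placeholder path
        files = [(name, os.path.getsize(os.path.join(target_dir, name)))
                 for name in os.listdir(target_dir)]
        # Placeholder split: plain round-robin over the size - 1 slaves.
        chunks = [files[i::size - 1] for i in range(size - 1)]
        for slave, chunk in enumerate(chunks, start=1):
            comm.send(chunk, dest=slave, tag=0)
    else:
        # Slave: receive its list of files and process them.
        my_files = comm.recv(source=0, tag=0)
        for name, file_size in my_files:
            pass  # process the file here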

4 Comments
What are "nodes"? How are nodes connected? How do files come to nodes for processing? What is the cost of distribution, compared to cost of processing?Dialecticus
I think the question is poorly defined. How many nodes? Unboundedly many? Up to M? If so, is it better to balance each workload as much as possible, or to use all M nodes for the best total time?Patrick87
Sounds like the knapsack problem. – user unknown
Surely the limiting factor for running time is the mean file size, not the number of files? – Chris Mowforth

4 Answers

2 votes

You are asking about the classic multiprocessor scheduling problem. The Wikipedia article is a good starting point for a basic overview of the problem and its algorithms (http://en.wikipedia.org/wiki/Multiprocessor_scheduling).

1 vote

Here's a thought. Sort the (filename, size) pairs in descending order of size. Then, starting with the biggest file, assign each file to the node that currently has the lowest cumulative size of assigned files (break ties however you like). The "one for me, one for you" approach.

It takes O(M log M) to sort M file records and O(M*N) to distribute them if you scan all N nodes for each file (somebody double-check this; with a min-heap keyed on each node's current load it drops to O(M log N)), but I think the algorithm gives good (optimal?) results.

Edit: after checking out the link in the other answer, it turns out this is the LPT (Longest Processing Time) approach. It runs in polynomial time, but it is not guaranteed to be optimal at getting every node's load as close to the average as possible.
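
A minimal sketch of this greedy assignment in Python, assuming file size is a good enough proxy for processing time (function and variable names are just illustrative):

    import heapq

    def lpt_assign(files, num_nodes):
        """Greedy LPT: biggest files first, each to the least-loaded node."""
        # Min-heap of (current_load, node_index): least-loaded node is on top.
        heap = [(0, node) for node in range(num_nodes)]
        heapq.heapify(heap)
        assignments = [[] for _ in range(num_nodes)]

        # Sort by size, largest first -- the "LPT" part.
        for name, size in sorted(files, key=lambda f: f[1], reverse=True):
            load, node = heapq.heappop(heap)
            assignments[node].append(name)
            heapq.heappush(heap, (load + size, node))
        return assignments

    # Example: seven files distributed over three nodes.
    files = [("a", 9), ("b", 7), ("c", 6), ("d", 5), ("e", 4), ("f", 3), ("g", 2)]
    print(lpt_assign(files, 3))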

0 votes

I've been experimenting with parallelising reduction functions using recursive divide & conquer algorithms and have settled on having the number of jobs submitted to nodes satisfy the inequality

l >= n / P^2

where l is the number of jobs, n is the size of the original workload, and P is the number of 'nodes', workers, processors, whatever you want to call them.

On most computers and mobile devices, n will be many orders of magnitude greater than P. You also want to make sure that the number of individual jobs isn't so large that you spend all your time dividing them up and sending them to workers.
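
As a rough numeric illustration of that rule of thumb (interpreting n as the number of files here; the names are my own):

    def min_jobs(n, p):
        """Smallest job count l satisfying l >= n / p**2 (ceiling division)."""
        return -(-n // (p * p))

    # e.g. 10000 files on 8 workers -> at least 157 jobs,
    # so each job would bundle roughly 64 files.
    print(min_jobs(10000, 8))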

0 votes

If all work-unit lengths are known (or can be estimated) in advance, this basically becomes the bin packing problem (https://en.wikipedia.org/wiki/Bin_packing_problem). It can be solved heuristically with the "first fit" and "best fit" algorithms (see the link).
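
For example, a rough first-fit decreasing sketch, assuming each node is treated as a bin whose capacity is the average load plus a little slack (the names and the slack factor are my own choices):

    def first_fit_decreasing(files, num_nodes, slack=1.1):
        """Pack (name, size) pairs into num_nodes bins using first-fit decreasing.

        Each bin's capacity is the average load times a slack factor; anything
        that doesn't fit anywhere goes to the currently lightest bin.
        """
        total = sum(size for _, size in files)
        capacity = slack * total / num_nodes
        bins = [[] for _ in range(num_nodes)]
        loads = [0.0] * num_nodes

        for name, size in sorted(files, key=lambda f: f[1], reverse=True):
            # First fit: the first bin with enough remaining capacity.
            for i in range(num_nodes):
                if loads[i] + size <= capacity:
                    break
            else:
                # Fallback: no bin has room, so use the least-loaded one.
                i = loads.index(min(loads))
            bins[i].append(name)
            loads[i] += size
        return bins

    files = [("a", 9), ("b", 7), ("c", 6), ("d", 5), ("e", 4), ("f", 3), ("g", 2)]
    print(first_fit_decreasing(files, 3))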