6 votes

Just looking for a bit of direction. I realise that the example given is possible to solve using brute-force iteration, but I am looking for a more elegant (i.e. mathematical?) solution which could potentially solve significantly larger examples (say 20x20 or 30x30). It is entirely possible that this cannot be done, and I have had very little success in coming up with an approach which does not rely on brute force...

I have a matrix (call it A) which is nxn. I wish to select a subset (call it B) of points from matrix A. The subset will consist of n elements, where one and only one element is taken from each row and from each column of A. The output should provide a solution (B) such that the sum of the elements that make up B is the maximum possible value, given these constraints (e.g. 25 in the first example below). If multiple instances of B are found (i.e. different solutions which give the same maximum sum), the solution for B which has the largest minimum element should be selected.

B could also be a selection matrix which is nxn, but where only the n desired elements are non-zero.

For example: if A =

|5 4 3 2 1|
|4 3 2 1 5|
|3 2 1 5 4|
|2 1 5 4 3|
|1 5 4 3 2|

=> B would be

|5 5 5 5 5|

However, if A =

|5 4 3|
|4 3 2|
|3 2 1|

B =

|3 3 3|

As the minimum element of B is 3, which is larger than for

|5 3 1|

or

|4 4 1|

which also both sum to 9.
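
For clarity, this is the kind of brute-force iteration I mean (a sketch only, with illustrative names): it enumerates every valid selection, maximises the sum, and breaks ties by the largest minimum element. I am hoping for something cleverer than this.

from itertools import permutations

def best_selection(A):
    n = len(A)
    best = None
    for cols in permutations(range(n)):           # one column per row
        picked = [A[r][c] for r, c in enumerate(cols)]
        key = (sum(picked), min(picked))          # maximise sum, then the minimum element
        if best is None or key > best[0]:
            best = (key, picked)
    return best

A = [[5, 4, 3],
     [4, 3, 2],
     [3, 2, 1]]
print(best_selection(A))    # ((9, 3), [3, 3, 3])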

Sounds like it would be amenable to a dynamic programming solution in O(n^3) time, or so. – Li-aung Yip
I have a dynamic programming algorithm for this that should run in O(n^3) time. I have to write it up and formally prove it's correct. It should have significantly better performance than a naive recursive solution... provided I can prove it works. ;) – Li-aung Yip

4 Answers

6 votes

Your problem is almost identical to the Assignment problem, which can be solved in polynomial time, for example by the Hungarian algorithm.

Note that the assignment problem is usually stated as a minimization problem, but multiplying your matrix by -1 (and, if the method requires non-negative entries, adding a suitable constant) makes it applicable here. Further, there is no formal tie-breaking condition for the case of multiple optimal solutions; the method simply yields some solution with the optimal sum. You can handle the tie-break iteratively: let m be the minimum summand of the current solution, modify your matrix by setting all entries less than or equal to m to zero, and solve again. Either you get a solution with the same sum whose minimum element is larger than before, or the previous solution was already the best one.
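
A minimal sketch of this idea, assuming SciPy's Hungarian-algorithm implementation (scipy.optimize.linear_sum_assignment) and non-negative entries; the function name and the tie-breaking loop are illustrative, not part of any library:

import numpy as np
from scipy.optimize import linear_sum_assignment

def best_assignment(A):
    A = np.asarray(A, dtype=float)
    rows, cols = linear_sum_assignment(-A)       # negate to turn minimisation into maximisation
    best_sum = A[rows, cols].sum()
    while True:
        m = A[rows, cols].min()                  # smallest selected summand
        A2 = np.where(A <= m, 0.0, A)            # discard everything that small and re-solve
        r2, c2 = linear_sum_assignment(-A2)
        if A[r2, c2].sum() == best_sum and A[r2, c2].min() > m:
            rows, cols = r2, c2                  # same optimal sum, larger minimum: keep it
        else:
            return best_sum, list(zip(rows, cols))

A = [[5, 4, 3],
     [4, 3, 2],
     [3, 2, 1]]
s, picks = best_assignment(A)
print(s, [A[r][c] for r, c in picks])            # 9.0 [3, 3, 3]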

0 votes

As Matthias indicated, you should use backtracking.

  1. Find a reasonable initial solution. Select the max value from each row and see if the choices are non-overlapping; if not, perturb part of the solution so that it becomes non-overlapping.
  2. Define the fitness of a partial solution. Say you are picking a value for each row in turn and you have already picked values from the first k rows. The fitness of this partial solution is the sum of the already-picked values plus, for each remaining row, the max value over the still-unselected columns.
  3. Now recursively search for a solution. Generate the candidate values from the first row, calculate their fitness, and insert them into a priority queue. Discard any partial solution whose fitness is lower than the current optimal solution (initialised in step 1). Pick the solution at the head of the queue, expand it to the next level of partial solutions, and insert those back into the priority queue. Once you have selected values from all rows and columns, calculate the sum, and if it is higher than the current optimum, replace it. A sketch of this search is given below.
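
A small sketch of this best-first search, with illustrative names; for step 1 it simply uses the diagonal as the initial incumbent rather than the perturbation scheme described above:

import heapq
from itertools import count

def branch_and_bound(A):
    n = len(A)
    tie = count()                                  # tie-breaker so heap entries never compare sets

    def fitness(row, used_cols, picked_sum):
        # Picked values so far plus, for each remaining row, the largest value
        # in a still-unselected column (an optimistic bound).
        return picked_sum + sum(
            max(A[r][c] for c in range(n) if c not in used_cols)
            for r in range(row, n))

    best_sum = sum(A[i][i] for i in range(n))      # step 1: some complete solution
    heap = [(-fitness(0, frozenset(), 0), next(tie), 0, frozenset(), 0)]
    while heap:
        neg_fit, _, row, used, picked = heapq.heappop(heap)
        if -neg_fit <= best_sum:
            continue                               # cannot beat the incumbent: prune
        if row == n:
            best_sum = picked                      # complete selection, new incumbent
            continue
        for c in range(n):
            if c not in used:
                new_used, new_picked = used | {c}, picked + A[row][c]
                fit = fitness(row + 1, new_used, new_picked)
                if fit > best_sum:
                    heapq.heappush(heap, (-fit, next(tie), row + 1, new_used, new_picked))
    return best_sum

This only returns the best sum; tracking the chosen cells and the tie-break on the minimum element would need to be added on top.
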
0 votes

Ouch. This algorithm is wrong; there is no proof because it's wrong and therefore it's impossible to prove that it's correct. ;) I'm leaving it here because I'm too attached to delete it entirely, and it's a good demonstration of why you should formally prove algorithms instead of saying "this looks right! There's no possible way this could fail to work!"


I'm giving this solution without proof, for the time being. I have a proof sketch but I'm having trouble proving optimal substructure for this problem. Anyway...

Problem

Given a square array of numbers, select as many "non-overlapping" numbers as possible so that the sum of the selected numbers is maximised. "Non-overlapping" means that no two numbers can be from the same row or the same column.

Algorithm

  • Let A be a square array of n by n numbers.
  • Let Aij denote the element of A in the ith row and jth column.
  • Let S( i1:i2, j1:j2 ) denote the optimal sum of non-overlapping numbers for a square subarray of A containing the intersection of rows i1 to i2 and columns j1 to j2.

Then the optimal sum of non-overlapping numbers is denoted S( 1:n , 1:n ) and is given as follows:

S( 1:n , 1:n ) = max {  [ S(   2:n , 2:n   ) + A11 ]
                        [ S(   2:n , 1:n-1 ) + A1n ]
                        [ S( 1:n-1 , 2:n   ) + An1 ]
                        [ S( 1:n-1 , 1:n-1 ) + Ann ] }
(recursively)

Note that S( i:i, j:j ) is simply Aij.

That is, the optimal sum for a square array of size n can be determined by separately computing the optimal sum for each of the four sub-arrays of size n-1, and then maximising the sum of the sub-array and the element that was "left out".

S for |# # # #|
      |# # # #|
      |# # # #|
      |# # # #|

Is the best of the sums S for:

|#      |      |      #|      |# # #  |       |  # # #|
|  # # #|      |# # #  |      |# # #  |       |  # # #|
|  # # #|      |# # #  |      |# # #  |       |  # # #|
|  # # #|      |# # #  |      |      #|       |#      |

Implementation

The recursive algorithm above suggests a recursive solution:

def S(A, i1, i2, j1, j2):
    # Optimal sum (under this recurrence) for the square sub-array of A
    # spanning rows i1..i2 and columns j1..j2.
    if (i1 == i2) and (j1 == j2):
        return A[i1][j1]
    else:
        return max(S(A, i1 + 1, i2, j1 + 1, j2) + A[i1][j1],
                   S(A, i1 + 1, i2, j1, j2 - 1) + A[i1][j2],
                   S(A, i1, i2 - 1, j1 + 1, j2) + A[i2][j1],
                   S(A, i1, i2 - 1, j1, j2 - 1) + A[i2][j2])

Note that this will make O(4^n) calls to S()!! This is much better than the factorial O(n!) time complexity of the "brute force" solution, but still awful performance.

The important thing to note here is that many of the calls are repeated with the same parameters. For example, in solving a 4*4 array, the central 2*2 sub-array is solved once for each of the four 3*3 sub-arrays that contain it.

This suggests two possible solutions for a speedup:

  • Make the recursive function S() cache its results, so that it only needs to compute S(A,i1,i2,j1,j2) once for each i1,i2,j1,j2. This means that S() only needs to calculate O(n^3) results - all other requests will be fulfilled from cache. (This is called memoising; a memoised sketch follows this list.)
  • Instead of starting at the top, with the large n*n array, and working down through successively smaller subproblems, start at the bottom with the smallest possible subproblems and build up to the n*n case. This is called dynamic programming. This is also O(n^3), but it's a much faster O(n^3) because you don't have to hit a cache all the time.
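
Here's a minimal memoised version of S() (sketch only; remember the recurrence itself is the flawed one disclaimed at the top of this answer, so this just illustrates the caching idea):

from functools import lru_cache

def S_memo(A):
    n = len(A)

    @lru_cache(maxsize=None)                     # cache results by (i1, i2, j1, j2)
    def S(i1, i2, j1, j2):
        if i1 == i2 and j1 == j2:
            return A[i1][j1]
        return max(S(i1 + 1, i2, j1 + 1, j2) + A[i1][j1],
                   S(i1 + 1, i2, j1, j2 - 1) + A[i1][j2],
                   S(i1, i2 - 1, j1 + 1, j2) + A[i2][j1],
                   S(i1, i2 - 1, j1, j2 - 1) + A[i2][j2])

    return S(0, n - 1, 0, n - 1)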

The dynamic programming solution proceeds somewhat like:

  • Find optimal solutions to all 1x1 sub-arrays. (Trivial.)
  • Find optimal solutions for all 2x2 sub-arrays.
  • Find optimal solutions for all 3x3 sub-arrays.
  • ...
  • Find optimal solutions for all (n-1)*(n-1) sub-arrays.
  • Find optimal solutions for the complete n*n array. (A sketch of this bottom-up build follows.)
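
A sketch of that bottom-up build (same caveat: it tabulates the disclaimed recurrence, purely to show the dynamic-programming structure):

def S_bottom_up(A):
    n = len(A)
    # best[(i1, j1, size)] = value of the recurrence for the size-by-size
    # sub-array whose top-left corner is (i1, j1).
    best = {}
    for size in range(1, n + 1):
        for i1 in range(n - size + 1):
            for j1 in range(n - size + 1):
                i2, j2 = i1 + size - 1, j1 + size - 1
                if size == 1:
                    best[(i1, j1, 1)] = A[i1][j1]
                else:
                    s = size - 1
                    best[(i1, j1, size)] = max(
                        best[(i1 + 1, j1 + 1, s)] + A[i1][j1],
                        best[(i1 + 1, j1, s)] + A[i1][j2],
                        best[(i1, j1 + 1, s)] + A[i2][j1],
                        best[(i1, j1, s)] + A[i2][j2])
    return best[(0, 0, n)]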

Notes on this solution:

  • No proof yet. I'm working on it.
  • You'll note the algorithm above only gives you S(), the optimal sum. It doesn't tell you which numbers actually make up that sum. You get to add in your own method of backtracing your path to the solution.
  • The algorithm above doesn't guarantee the property that ties like 2,2 vs. 1,3 will be broken in favour of having all the individual numbers be as large as possible (so that 2,2 wins.) I believe you can define max() to break ties in favour of the largest numbers possible, and that will do what you want, but I can't prove it.

General notes:

Dynamic programming is a powerful technique for devising fast algorithms for any problem which exhibits two properties:

  1. Optimal substructure: the problem can be broken down into slightly smaller subproblems, and an optimal solution to the original problem can be built out of optimal solutions to those subproblems.
  2. Overlapping subproblems: there are relatively few distinct subproblems to solve, and the solutions to those subproblems are re-used many times.

If the problem has optimal substructure, and the problem breaks down into slightly smaller problems - say a problem of size n breaks down into subproblems of size n-1 - then the problem can be solved by dynamic programming.

If you can split the problem into much smaller chunks - say chopping a problem of size n into halves, each of size n/2 - that's divide and conquer, not dynamic programming. Divide and conquer solutions are generally very fast - for example binary search will find an element in a sorted array in O(log n) time.

-1 votes

This is related to the n-Queens problem, except that you do not care about the diagonals and you have weighted solutions. As with the Queens problem, you can solve it by (multiple) backtracking.

I.e., once you find a solution you remember its weight, mark the solution as invalid, and start over. The solution (or one of the solutions) with the highest weight wins.
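
A tiny sketch of that multiple-backtracking idea (illustrative names; like a Queens solver it places one value per row in an unused column, and simply remembers the heaviest complete placement):

def heaviest_placement(A):
    n = len(A)
    best = {"weight": float("-inf"), "cols": None}

    def place(row, used, weight, cols):
        if row == n:                              # a complete "solution"
            if weight > best["weight"]:
                best["weight"], best["cols"] = weight, cols[:]
            return
        for c in range(n):
            if c not in used:
                cols.append(c)
                place(row + 1, used | {c}, weight + A[row][c], cols)
                cols.pop()                        # backtrack and try the next column

    place(0, set(), 0, [])
    return best["weight"], best["cols"]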