3 votes

I'm studying for an algorithm exam and I can't find a way to solve the following problem:

INPUT: Positive integers r1, r2, ..., rn and c1, c2, ..., cn.

OUTPUT: An n-by-n matrix A with 0/1 entries such that for every i the sum of the i-th row of A is ri and the sum of the i-th column of A is ci, if such a matrix exists.

Consider the following greedy algorithm that constructs A row by row. Assume that the first i-1 rows have been constructed, and let aj be the number of 1's in the j-th column among the first i-1 rows. The ri columns with the largest values of cj - aj are assigned 1's in row i, and the remaining columns are assigned 0's. That is, the columns that still need the most 1's are given 1's. Formally prove that this algorithm is correct using an exchange argument.
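For concreteness, here is a short sketch of the greedy algorithm in Python (the function name `greedy_fill` is my own, and ties among equal deficits are broken arbitrarily):

```python
def greedy_fill(r, c):
    """Sketch of the greedy from the problem statement.

    Returns the 0/1 matrix A as a list of rows, or None if the
    resulting column sums don't match c (i.e., no matrix exists,
    assuming the algorithm's correctness)."""
    n = len(r)
    a = [0] * n                      # a[j] = number of 1's placed in column j so far
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        # the r[i] columns with the largest remaining deficit c[j] - a[j]
        cols = sorted(range(n), key=lambda j: c[j] - a[j], reverse=True)[:r[i]]
        for j in cols:
            A[i][j] = 1
            a[j] += 1
    if a != list(c):
        return None
    return A
```

On the instance r = (2, 1), c = (1, 2) this produces the matrix with rows (1, 1) and (0, 1), and on an unsatisfiable instance the final column sums fail the check.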

So, as with most greedy problems I've encountered, I tried to wrap it in an induction: assume that the rows up to the i-th row are the same in the greedy solution and in the optimal solution, then change the (i+1)-th row of the optimal solution to match the greedy one and prove that this doesn't produce a worse solution; repeating this for the remaining rows would make the entire optimal solution identical to the greedy one.

After trying that unsuccessfully, I tried the same idea coordinate by coordinate: assume the (i, j) coordinate is the first one that does not match, and again I failed.

Then I tried a different approach: viewing the greedy as a sequence of steps and swapping a whole step of the algorithm at a time (which is essentially the same idea as the first one), and still I could not find a way to guarantee that the swap doesn't produce a worse solution than the optimal one.

Thanks in advance for any assistance.


1 Answer

2 votes

Consider some inputs r and c and a matrix S that satisfies them.

Throw away the contents of the last row of S to get a new matrix S(n-1). If we fed S(n-1) to the greedy algorithm and asked it to fill in this last row, it would recover a satisfying solution: since S completes the matrix using only one more 0/1 row, every remaining deficit c(j) - a(j) is 0 or 1, exactly r(n) columns have deficit 1, and the greedy gives precisely those columns their 1's.

Well, now throw away the contents of the last two rows of S to get S(n-2). Since we know a satisfying solution exists, there is no j such that c(j) - a(j) > 2, and the number of j such that c(j) - a(j) == 2 is at most r(n-1). It follows that the greedy algorithm will set A[n-1, j] = 1 for every j in that set, along with some other j for which c(j) - a(j) = 1. After the (n-1)-th row is filled, every deficit is at most 1 and the deficits sum to r(n), so exactly r(n) columns have c(j) - a(j) == 1, and the last row is fillable as before.

Anyway, we can extend this downwards: in S(n-k-1) it must be the case that:

  • there aren't any j such that c(j) - a(j) > k+1
  • there can be at most r(n-k) many j such that c(j) - a(j) = k+1,
  • and Sum(c(j) - a(j)) == Sum(r(i), i >= n-k)

So after the greedy algorithm has processed the (n-k)-th row, it must be the case that:

  • there aren't any j such that c(j) - a(j) > k
  • there can be at most r(n-k+1) many j such that c(j) - a(j) = k,
  • and Sum(c(j) - a(j)) == Sum(r(i), i >= n-k+1)

Hence when k = 0, i.e. after the last row has been processed, there is no j such that c(j) - a(j) > 0 and the deficits sum to 0, so every column sum is met exactly and the greedy output satisfies the constraints.
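The peel-back invariants above are easy to check mechanically. A minimal sketch (my own, with 0-based indexing, so k = n-1-i rows remain after processing row i; it assumes the instance (r, c) fed to it is satisfiable):

```python
def check_invariants(r, c):
    """Run the greedy and assert the invariants from the argument above
    after each row. Assumes (r, c) admits a satisfying matrix."""
    n = len(r)
    a = [0] * n                      # a[j] = 1's placed in column j so far
    for i in range(n):
        cols = sorted(range(n), key=lambda j: c[j] - a[j], reverse=True)[:r[i]]
        for j in cols:
            a[j] += 1
        k = n - 1 - i                # rows still to be filled
        deficits = [c[j] - a[j] for j in range(n)]
        # no column needs more 1's than there are rows left
        assert max(deficits) <= k
        # at most r[i+1] columns are at the maximum deficit k
        if i + 1 < n:
            assert sum(d == k for d in deficits) <= r[i + 1]
        # total deficit equals the total of the remaining row sums
        assert sum(deficits) == sum(r[i + 1:])
    assert a == list(c)              # k = 0: all column sums are met
    return a
```

Running it on a satisfiable instance such as r = (2, 2, 1), c = (2, 2, 1) passes every assertion, ending with all deficits at 0.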