
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems [1]. For this question, we're going to focus only on the latter property.

There are various definitions for overlapping subproblems, two of which are:

  • A problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times OR a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems [2].
  • A second ingredient that an optimization problem must have for dynamic programming to apply is that the space of subproblems must be "small" in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems (Introduction to Algorithms by CLRS).

Both definitions (and many others on the internet) seem to boil down to the same thing: a problem has overlapping subproblems if finding its solution involves solving the same subproblems multiple times. In other words, many small subproblems are computed repeatedly while finding the solution to the original problem. The classic example is Fibonacci, which most tutorials use to illustrate this property.
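
To make this concrete, here is a minimal Python sketch (my own illustration, not from any of the cited sources): the naive recursion solves the same Fibonacci subproblems over and over, while a memoized version solves each one exactly once.

    from functools import lru_cache

    def fib_naive(n):
        # Naive recursion: fib_naive(k) is recomputed many times for small k.
        # That repeated work is exactly the overlapping-subproblems property.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        # Same recurrence, but each subproblem is solved once and then reused,
        # so the running time drops from exponential to linear.
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)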

Life was great until, a couple of days ago, I came across Kadane's algorithm, which made me question the overlapping-subproblems definition. This was mostly because people have different views on whether or not it is a DP algorithm:

The most compelling reason why someone wouldn't consider Kadane's algorithm a DP algorithm is that, in a recursive implementation, each subproblem would appear and be computed only once [3], so it doesn't exhibit the overlapping-subproblems property. However, many articles on the internet treat Kadane's algorithm as a DP algorithm, which made me question my understanding of what overlapping subproblems means in the first place.
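
For reference, here is the kind of recursive implementation that argument seems to have in mind (a sketch of my own, assuming a non-empty input array): the recursion walks the array once, each index is handled exactly once, and nothing is ever recomputed.

    def solve(arr, i):
        # Returns (best sum of a subarray ending exactly at index i,
        #          best sum of any subarray within arr[0..i]).
        if i == 0:
            return arr[0], arr[0]
        ending_prev, best_prev = solve(arr, i - 1)
        ending_here = max(ending_prev + arr[i], arr[i])
        return ending_here, max(best_prev, ending_here)

    def max_subarray(arr):
        # Each solve(i) is reached exactly once, so no subproblem repeats.
        return solve(arr, len(arr) - 1)[1]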

People seem to interpret the overlapping-subproblems property differently. It's easy to see with simple problems such as Fibonacci, but things become much less clear once you bring in an algorithm like Kadane's. I would really appreciate it if someone could offer some further explanation.

I don't think you should concern yourself too much with whether a given algorithm qualifies as "dynamic programming" or not. "Dynamic programming" is helpful as a paradigm to design algorithms. It is not useful as a label to put on existing algorithms. You cite the Fibonacci sequence as an example. Many people would disagree on whether an implementation of Fibonacci which only keeps the previous two values in memory qualifies as "dynamic programming" or not. – Stef

1 Answer


You've read so much about this already. The only thing I have to add is this:

The overlapping subproblems in Kadane's algorithm are here:

    max_subarray = max(max_subarray_to(i) for i = 1 to n)

    max_subarray_to(i) = max(max_subarray_to(i-1) + array[i], array[i])

As you can see, max_subarray_to(i) appears twice for each i: once in the outer max and once in the definition of max_subarray_to(i+1). A naive recursive implementation would recompute those values, but Kadane's algorithm memoizes them, turning it from O(n²) to O(n).
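
To spell that out in code (my own sketch, assuming a non-empty input array): a literal translation of the recurrence recomputes max_subarray_to from scratch for every i, while Kadane's algorithm keeps only the previous value, so each subproblem is computed once.

    def max_subarray_to(array, i):
        # Best sum of a subarray that ends exactly at index i
        # (a direct translation of the recurrence above).
        if i == 0:
            return array[0]
        return max(max_subarray_to(array, i - 1) + array[i], array[i])

    def max_subarray_naive(array):
        # Every call to max_subarray_to(i) recurses all the way back to 0,
        # so the same subproblems are solved over and over: O(n^2) total work.
        return max(max_subarray_to(array, i) for i in range(len(array)))

    def max_subarray_kadane(array):
        # Keeps only the previous "best ending here" value, so each
        # subproblem is computed exactly once: O(n).
        best_ending_here = best = array[0]
        for x in array[1:]:
            best_ending_here = max(best_ending_here + x, x)
            best = max(best, best_ending_here)
        return best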

... But as @Stef says, it doesn't matter what you call it, as long as you understand it.