I'm having trouble understanding the following property of divide-and-conquer algorithms.
A recursive method that divides a problem of size N into two independent (nonempty) parts that it solves recursively calls itself less than N times.
The proof is
A recursive function that divides a problem of size N into two independent (nonempty) parts that it solves recursively calls itself less than N times. If the parts are one of size k and one of size N-k, then the total number of recursive calls that we use is T(N) = T(k) + T(N-k) + 1 for N >= 1, with T(1) = 0. The solution T(N) = N-1 is immediate by induction. If the sizes sum to a value less than N, the proof that the number of calls is less than N-1 follows from the same inductive argument.
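As a sanity check on the closed form, the recurrence itself can be evaluated directly. Here is a small sketch (the split point k = n/2 is an arbitrary choice here; the proof says the total is the same for any 1 <= k < n):

```java
// Evaluates the recurrence T(n) = T(k) + T(n-k) + 1 with T(1) = 0
// from the quoted proof; the printed values come out to n - 1 each time.
class Recurrence {
    static int T(int n) {
        if (n == 1) return 0;
        int k = n / 2;   // arbitrary split; any 1 <= k < n gives the same total
        return T(k) + T(n - k) + 1;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 8; n++)
            System.out.println("T(" + n + ") = " + T(n));
    }
}
```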
I understand the formal proof above perfectly. What I don't understand is how this property connects to the examples that are usually used to demonstrate the divide-and-conquer idea, in particular to the problem of finding the maximum:
static double max(double a[], int l, int r)
{
    if (l == r) return a[l];          // base case: a single element
    int m = (l+r)/2;                  // split [l, r] into two halves
    double u = max(a, l, m);          // maximum of the left half
    double v = max(a, m+1, r);        // maximum of the right half
    if (u > v) return u; else return v;
}
In this case, when a consists of N = 2 elements, max(0, 1) calls itself 2 more times, namely max(0, 0) and max(1, 1), which equals N. If N = 4, max(0, 3) calls itself 2 times, and each of those calls in turn calls max 2 more times, so the total number of calls is 6 > N. What am I missing?
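To make sure I'm counting correctly, here is an instrumented copy of the function that counts recursive invocations (the CallCount wrapper class and the calls field are my additions for measurement, not part of the original code):

```java
// Counts how many times max calls itself for an array of N = 4 elements.
class CallCount {
    static int calls = 0;

    static double max(double a[], int l, int r) {
        if (l == r) return a[l];
        int m = (l + r) / 2;
        calls++;                      // about to recurse on the left half
        double u = max(a, l, m);
        calls++;                      // about to recurse on the right half
        double v = max(a, m + 1, r);
        return u > v ? u : v;
    }

    public static void main(String[] args) {
        double[] a = {3.0, 1.0, 4.0, 1.0};   // N = 4
        max(a, 0, a.length - 1);
        System.out.println(calls);           // prints 6, matching my count above
    }
}
```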