You have an array A[] of size N (1 ≤ N ≤ 10^5). For each i = 0, 1, 2, ..., N - 1, we want to determine the number of inversions in the array if all entries greater than i are decreased to i.
An inversion is a pair of entries A[j] and A[k] where j < k and A[j] > A[k].
Example:
A[] = {3, 2, 1, 5, 2, 0, 5}
i = 0: {0, 0, 0, 0, 0, 0, 0} Inversions: 0
i = 1: {1, 1, 1, 1, 1, 0, 1} Inversions: 5
i = 2: {2, 2, 1, 2, 2, 0, 2} Inversions: 7
i = 3: {3, 2, 1, 3, 2, 0, 3} Inversions: 10
i = 4: {3, 2, 1, 4, 2, 0, 4} Inversions: 10
i = 5: {3, 2, 1, 5, 2, 0, 5} Inversions: 10
i = 6: {3, 2, 1, 5, 2, 0, 5} Inversions: 10
So your output would be:
0
5
7
10
10
10
10
I know how to count the number of inversions in an array with merge sort in O(N log N). However, if I were to explicitly generate the array for each value of i and count its inversions from scratch, I would get an O(N^2 log N) algorithm, which wouldn't pass in time.
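For concreteness, here is a minimal Python sketch of that brute force (merge-sort inversion counting applied to each clamped array; the helper name is just a placeholder). It reproduces the output above:

```python
def count_inversions(a):
    """Merge sort that also returns the inversion count, in O(n log n)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged, inv = [], inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # left[i] > right[j]: every element still in left forms
            # an inversion with right[j]
            inv += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv

A = [3, 2, 1, 5, 2, 0, 5]
for cap in range(len(A)):               # cap plays the role of i
    clamped = [min(x, cap) for x in A]  # entries > cap decreased to cap
    print(count_inversions(clamped)[1])
```

This is exactly the O(N^2 log N) approach: N merge sorts, each over an array of length N.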
One observation I made is that the inversion count is non-decreasing in i. This makes sense: when all entries are 0 there are no inversions (the array is sorted), and as you keep increasing the cap, an entry can only become larger than entries it was previously tied with.
So you could start with an A[] containing only 0s, keep increasing i, and use the answer for the previous value of i to derive the answer for the next one. Still, if you scan through the whole array for each i, you get an O(N^2) algorithm.
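To make that incremental idea concrete: going from cap i - 1 to cap i, the only entries that change are those with original value ≥ i, which rise from i - 1 to i, so the only new inversions are pairs (p, q) with p < q, A[p] ≥ i and A[q] == i - 1. A sketch of the resulting O(N^2) version (function name is mine):

```python
def inversions_per_cap(A):
    """Incremental O(N^2): ans[i] = ans[i-1] + the inversions newly
    created when entries with original value >= i are bumped from
    i-1 up to i."""
    n = len(A)
    ans = [0] * n  # cap 0 gives an all-zero (sorted) array
    for i in range(1, n):
        delta = 0
        bumped_so_far = 0  # entries seen so far with A[p] >= i
        for x in A:
            if x == i - 1:
                # each earlier bumped entry now strictly exceeds this one
                delta += bumped_so_far
            if x >= i:
                bumped_so_far += 1
        ans[i] = ans[i - 1] + delta
    return ans

print(inversions_per_cap([3, 2, 1, 5, 2, 0, 5]))  # [0, 5, 7, 10, 10, 10, 10]
```

This matches the example output, but each cap still costs a full O(N) scan, so it doesn't break the O(N^2) barrier.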
How can I solve this problem?