3
votes

I was just reading the following question: Radix sort most significant first or least significant, which is faster?

And the author of the accepted answer was suggesting that the MSD radix sort is indeed faster. I do not see why, however.

I have implemented both LSD and MSD (binary-based, using shift operations). LSD is iterative and requires just one bucket array, while MSD is recursive and requires an individual bucket array for every recursive call.

If you create a random array of 10 million integers, I cannot see how MSD would be faster than LSD, since you allocate extra bucket arrays every time you re-enter the function, and you also face the overhead of the recursive calls themselves.

I can see how the combination of MSD and LSD can give an overall boost (run MSD for the first few bits and LSD for the rest to reduce cache misses), but how is MSD alone expected to be more efficient than LSD, given its recursive nature and the fact that it needs a new bucket array for every recursive call, unlike LSD, which is iterative and requires just one bucket array for the entire sorting procedure?

2
You can refactor your code to avoid these needless allocations and deallocations. Note that which one is faster is highly implementation-dependent; I can implement both of them so that the MSD sort is faster, or so that the LSD sort is faster. — tmyklebu
And how can you avoid reallocations, since you need the bucket array in every recursive call to find the groups that you will recurse into later on? Using a global bucket array, or passing it by reference to the next recursive calls, will in both cases alter your bucket array and thus make it impossible to find the next groups in the earlier recursive calls. — jsguy
You only need one bucket array per level. That's logarithmically many; there's no reason to allocate and free one per call. — tmyklebu
That's true. Right now I am using one bucket array for every recursive call, which requires n × bucketSize space; that is very bad compared to bucketSize × log n space. — jsguy
Actually, I thought about this and I am still confused. How would you use one bucket array per level? Is there any reference for this? When you go from one level to the next, you have a bunch of groups that you will recurse into. In each group you have to examine the next j bits, and each such sequence of j bits could point anywhere in the bucket array. So how can you make sure that different groups do not collide with each other? I am using the implementation provided by @lcgldr at stackoverflow.com/questions/28726933/… — jsguy
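One way to make the "no per-call allocation" point from the comments concrete (a sketch of my own, not tmyklebu's code) is to drive MSD radix sort with an explicit work stack instead of recursion. A single count array and a single scratch buffer, allocated once up front, then serve every segment, so groups at different levels never collide — each segment is counted and partitioned completely before its children are pushed:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Segment { std::size_t off, len; int shift; };

// MSD radix sort (radix 256) driven by an explicit work stack.
// One count array and one scratch buffer are reused for every segment,
// so there is no per-call allocation at all.
void msd_sort_iterative(std::vector<uint64_t>& a) {
  std::vector<uint64_t> buf(a.size());   // single scratch buffer
  std::vector<Segment> work;
  if (a.size() > 1) work.push_back({0, a.size(), 56});
  std::size_t count[257];                // single count array, reused
  std::size_t pos[256];
  while (!work.empty()) {
    Segment s = work.back(); work.pop_back();
    for (int i = 0; i <= 256; i++) count[i] = 0;
    for (std::size_t i = 0; i < s.len; i++)
      count[((a[s.off + i] >> s.shift) & 0xFF) + 1]++;
    for (int i = 0; i < 256; i++) count[i + 1] += count[i];
    // Stable partition into the scratch buffer, then copy back.
    for (int i = 0; i < 256; i++) pos[i] = count[i];
    for (std::size_t i = 0; i < s.len; i++) {
      uint64_t v = a[s.off + i];
      buf[pos[(v >> s.shift) & 0xFF]++] = v;
    }
    for (std::size_t i = 0; i < s.len; i++) a[s.off + i] = buf[i];
    // Push child segments that still need another digit.
    if (s.shift > 0)
      for (int i = 0; i < 256; i++)
        if (count[i + 1] - count[i] > 1)
          work.push_back({s.off + count[i],
                          count[i + 1] - count[i], s.shift - 8});
  }
}
```

The count values for a segment are only needed between its counting pass and the moment its children are pushed, which is why one array suffices.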

2 Answers

5
votes

Answer

The number of iterations in MSD radix sort depends on the input size, whereas the number of iterations in LSD radix sort depends on the key length. This often means MSD radix sort requires significantly fewer iterations than LSD radix sort, which makes it faster.

Memory allocations are not an issue, as MSD radix sort can easily be implemented in-place.

Rationale

I've implemented both LSD and MSD radix sort, so I could see which properties make MSD radix sort faster than LSD radix sort.

I've compared their speeds with std::sort on an array of 100,000,000 random positive 63-bit integers (std::sort's result was also used to verify the sorted arrays) and got the following results:

  • Pure LSD sort : 10.5s
  • std::sort : 9.5s
  • Pure MSD sort : 9.3s
  • MSD sort + insertion_sort : 7.6s

So MSD radix sort is slightly faster than std::sort, and if the leaves are sorted with insertion_sort, it is quite a bit faster.

Why might MSD radix sort be faster than LSD radix sort?

  • There is cache locality, though I doubt whether this is really important, since LSD radix sort also scans through the array rather than performing random access.
  • MSD radix sort can be implemented such that its space complexity is O(d k), and thus only depends on the radix d and the length of the items k. This can be allocated on the stack, which is almost free. Hence it is basically an in-place sorting algorithm.
  • The bottom layers can be pruned: when a bucket contains only one element, it is already sorted, so there is no need to recurse on it. Hence MSD radix sort only needs to perform approximately log(n)/log(d) iterations, while LSD radix sort must always perform k iterations.

I believe that this last point is the reason why MSD radix sort is often faster than LSD radix sort. If the input data is uniformly distributed, the expected running time is O(n log(n)/log(d)), whereas LSD radix sort's running time is O(n k). Usually n is a lot smaller than d^k, so log(n)/log(d) < k; LSD radix sort would only be faster if n grew beyond d^k. However, in that case counting sort (radix sort with k = 1) could be used as well.

The implementations

#include <cstdint>
#include <vector>

inline void insertion_sort(int64_t * array, int n) {
  for (int i=1; i<n; i++) {
    int64_t val = array[i];
    int j = i;
    while (j>0 && array[j-1] > val) {
      array[j] = array[j-1];
      j--;
    }
    array[j] = val;
  }
}

void msd_sort(int64_t * array, int n, int64_t bit=60) {
  const int64_t mask = INT64_C(7);
  // Count bucket sizes
  int count[9]={};
  for (int i=0; i<n; i++) {
    count[((array[i]>>bit) & mask)+1]++;
  }
  // Create holes.
  int loc[8];
  int64_t unsorted[8];
  int live = 0;
  for (int i=0; i<8; i++) {
    loc[i] = count[i];
    count[i+1]+=count[i];
    if (loc[i] < count[i+1]) { // bucket i is non-empty
      unsorted[live] = array[loc[i]];
      live++;
    }
  }
  live--;
  // Perform sort
  for (int i=0; i<n; i++) {
    int64_t val = unsorted[live];
    int64_t d = (val>>bit) & mask;
    array[loc[d]] = val;
    loc[d]++;
    if (loc[d] == count[d+1]) {
      live--; // bucket d is now full
    } else {
      unsorted[live] = array[loc[d]]; // pick up the displaced element
    }
  }
  if (bit>0) {
    for (int i=0; i<8; i++) {
      n = count[i+1] - count[i];
      if (n > 20) { // If replaced by n > 1, insertion_sort is not needed.
        msd_sort(array + count[i], n, bit-3);
      } else {
        insertion_sort(array + count[i], n);
      }
    }
  }
}

void lsd_sort(int64_t * array, int n) {
  const int64_t mask = INT64_C(7);
  std::vector<int64_t> buffer(n);
  for (int64_t bit=0; bit<63; bit+=3) {
    // Copy and count
    int count[9]={};
    for (int i=0; i<n; i++) {
      buffer[i] = array[i];
      count[((array[i]>>bit) & mask) + 1]++;
    }
    // Init writer positions
    for (int i=0; i<8; i++) {
      count[i+1]+=count[i];
    }
    // Perform sort
    for (int i=0; i<n; i++) {
      int64_t val = buffer[i];
      int64_t d = (val>>bit) & mask;
      array[count[d]] = val;
      count[d]++;
    }
  }
}
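For completeness, here is a small verification harness in the spirit of the answer's methodology (my own sketch; the function and parameter names are made up). It checks any `void(int64_t*, int)` sort routine against std::sort on random positive 63-bit integers:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <random>
#include <vector>

// Generate n random positive 63-bit integers with a fixed seed.
std::vector<int64_t> random_data(int n, uint64_t seed = 42) {
  std::mt19937_64 gen(seed);
  std::uniform_int_distribution<int64_t> dist(
      0, std::numeric_limits<int64_t>::max());
  std::vector<int64_t> v(n);
  for (auto& x : v) x = dist(gen);
  return v;
}

// Sort a copy with the routine under test and compare with std::sort.
bool matches_std_sort(void (*sortfn)(int64_t*, int), int n) {
  std::vector<int64_t> a = random_data(n);
  std::vector<int64_t> ref = a;
  std::sort(ref.begin(), ref.end());
  sortfn(a.data(), n);
  return a == ref;
}
```

lsd_sort above could be passed to it directly; msd_sort would need a two-argument wrapper such as `+[](int64_t* p, int n){ msd_sort(p, n); }` because of its defaulted bit parameter.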
1
votes

The question you are referring to performs the sort on a single bit at a time. For that reason, each recursion step splits the array into only two groups, so there is not much you actually have to store when recursing.

You descend into the smaller group and push the bigger group onto a stack for later processing. In the worst case, you will have at most w elements on the stack, where w is the width (in bits) of your data type.

Now, having shown that the recursion is not that bad in this case, the argument of Niklas B. for why MSD has a chance to run faster (i.e., cache locality) stands.
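A sketch of what that looks like in code (my own illustration, not code from either question): partition on the current bit with std::partition, keep iterating into the smaller half, and push the bigger half, so the explicit stack never holds more than w = 64 pending ranges:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Range { uint64_t* lo; uint64_t* hi; int bit; };

// Single-bit MSD radix sort with an explicit stack. We always descend
// into the smaller partition and stack the bigger one, so at any moment
// the stack holds at most one range per bit level (<= 64 entries).
void binary_msd_sort(uint64_t* a, std::size_t n) {
  std::vector<Range> stack;
  Range cur{a, a + n, 63};
  for (;;) {
    while (cur.hi - cur.lo > 1 && cur.bit >= 0) {
      // Move elements with the current bit clear to the front.
      uint64_t* mid = std::partition(cur.lo, cur.hi,
          [&](uint64_t v) { return ((v >> cur.bit) & 1) == 0; });
      Range left{cur.lo, mid, cur.bit - 1};
      Range right{mid, cur.hi, cur.bit - 1};
      // Continue with the smaller group; stack the bigger one.
      if (left.hi - left.lo <= right.hi - right.lo) {
        stack.push_back(right);
        cur = left;
      } else {
        stack.push_back(left);
        cur = right;
      }
    }
    if (stack.empty()) break;
    cur = stack.back();
    stack.pop_back();
  }
}
```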