August 17, 2017 The Fun and Interesting Radix Sort
Sorting algorithms are the backbone of any Data Structures & Algorithms course. Most of us are pretty familiar with QuickSort, MergeSort, and so on. Yet there is one algorithm that is both simple and powerful, and that I find many engineers don’t know very well – Radix Sort.
Imagine the following situation – you have a million users and they have been ‘ranked’ by some measure of importance (interest in your product/amount of interaction/connectedness in network/…). Now you’d like to sort them by zip code. How can you best do this? Well, we basically need to re-sort the entire list and we can do it in two ways:
- Use a comparison function that compares first on zip and then on importance
OR
- Sort by zip and use a stable sorting algorithm that maintains the order.
Now bottom-up MergeSort is stable, so we can use that. But it seems like we aren’t really using the fact that the list is already sorted by importance to gain any benefit. Can we do better? It turns out we can!
Enter Radix Sort
MergeSort is a comparison sort algorithm. What this means is that it needs to compare the elements against each other in order to sort them. It can be shown that no comparison sort can do better than O(n log n), which is also the running time of a good MergeSort. However, what if we didn’t need to compare elements at all? Could we do better? Is it even possible? It is – and this is where Radix Sort comes in. At its core the idea is simple – all we need to do is treat each element’s value as a position in an array. Then, if we walk the array, the elements will be sorted! For example, if the zip codes were:
7, 6, 9, 1
We could put them in an array that looked like this:
1 _ _ _ _ 6 7 _ 9
And, if we simply walked the array – it’s sorted!
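Here is a minimal sketch of that idea in C (my own illustration, not from the original post, assuming for the moment that the keys are unique and fall in the range 0..9):

#include <stdio.h>

int main(void)
{
    int zipcodes[] = { 7, 6, 9, 1 };
    int present[10] = { 0 };

    /* treat each value as an index into the array */
    for(int i = 0;i < 4;i++) {
        present[zipcodes[i]] = 1;
    }

    /* walking the array in order yields the values sorted */
    for(int v = 0;v < 10;v++) {
        if (present[v]) printf("%d ", v);
    }
    printf("\n");   /* prints: 1 6 7 9 */
    return 0;
}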
Problem 1: Zip Codes aren’t unique
The first problem we encounter is that zip codes aren’t unique! What we are going to get as the user’s zip codes would look more like this:
7, 7, 6, 9, 7, 6, 6, 1, 9...
So what we really need to do is keep track of the number of times each zip is found. This leads us to the following simple idea:
- Create an array of “positions” for each zip code.
- Walk the zip codes and, for each one, increment the count at the index corresponding to that zip code.
- Walk the position array itself and calculate the ending position of each zip code’s ‘run’ by simply adding up the counts that came before it (a running sum).
- Walk the zip codes in reverse order and place each one at the ending position calculated for it, decrementing that ending position to mark the slot as used.
(Notice, in step 4, we need to walk in reverse order otherwise the sort will not be stable.)
This algorithm is actually known as Counting Sort and can be implemented as follows:
void counting_sort(int *sorted, int* zipcodes, int len)
{
    int position[10] = { 0 };

    /* count zip codes */
    for(int i = 0;i < len;i++) {
        position[zipcodes[i]]++;
    }

    /* calculate ending positions */
    for(int i = 1;i < 10;i++) {
        position[i] += position[i-1];
    }

    /* put zips in sorted positions
     * by using the ending positions
     * calculated and decrementing
     * them as used */
    for(int i = len-1;i >= 0;i--) {
        sorted[position[zipcodes[i]]-1] = zipcodes[i];
        position[zipcodes[i]]--;
    }
}
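As a quick sanity check, here is one way it could be called (this little driver is my own, not from the original post; it assumes the single-digit zip codes from the example above):

#include <stdio.h>

int main(void)
{
    int zipcodes[] = { 7, 7, 6, 9, 7, 6, 6, 1, 9 };
    int len = sizeof(zipcodes) / sizeof(zipcodes[0]);
    int sorted[9];

    counting_sort(sorted, zipcodes, len);

    for(int i = 0;i < len;i++) printf("%d ", sorted[i]);
    printf("\n");   /* prints: 1 6 6 6 7 7 7 9 9 */
    return 0;
}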
We will use Counting Sort as a stepping stone to build the full Radix Sort in the next step.
Problem 2: Enormous position arrays
Another problem we encounter is that the range of zip codes can be pretty big. In the UK, postcodes are a mix of letters and numbers, six or more characters long. This leads to a positions array with millions of entries! Whoops!
Radix sort solves this problem by saying – what if we sorted just one digit at a time? Then the array size would never need to exceed 10 elements – one for each digit!
If we sort one digit at a time, we can start with either the least significant digit (LSD) or the most significant digit (MSD). Starting with the LSD and using a stable sort (like our counting sort) for each pass means that whenever a later pass sees a tie, the order established by the earlier passes is preserved – so the whole thing comes out correctly sorted. For example, sorting 45, 17, 42, 91 by the ones digit gives 91, 42, 45, 17; a second stable pass on the tens digit gives 17, 42, 45, 91, with 42 staying ahead of 45 because their tens digits tie. So let’s use LSD.
The final step in radix sort is trivially easy. Simply allow counting sort to focus on only one digit and iterate over all digits! For example:
/* We add 'exp' to tell us the digit to consider */
void counting_sort(int *sorted, int* zipcodes, int len, int exp)
{
    ...
    for(int i = 0;i < len;i++) {
        int digit = (zipcodes[i]/exp) % 10;
        position[digit]++;
    }
    ...
    for(int i = len-1;i >= 0;i--) {
        int digit = (zipcodes[i]/exp) % 10;
        sorted[position[digit]-1] = zipcodes[i];
        position[digit]--;
    }
}

void radix_sort(int* sorted, int* zipcodes, int len)
{
    for(int exp = 1;exp < VALUE_RANGE;exp *= 10) {
        counting_sort(sorted, zipcodes, len, exp);
        /* copy partly sorted zipcodes for next round */
        copy(zipcodes, sorted);
    }
}
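Since the snippet above elides the counting and placement details with '...', here is a fuller, self-contained sketch that can be compiled and run. The names (counting_sort_by_digit, max_key) and the copy-back via memcpy are my own choices rather than the original post's, and it assumes non-negative integer keys:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void counting_sort_by_digit(int *sorted, const int *keys, int len, int exp)
{
    int position[10] = { 0 };

    /* count how often each digit occurs */
    for(int i = 0;i < len;i++) {
        position[(keys[i]/exp) % 10]++;
    }

    /* turn the counts into ending positions */
    for(int d = 1;d < 10;d++) {
        position[d] += position[d-1];
    }

    /* place elements in reverse order to keep the sort stable */
    for(int i = len-1;i >= 0;i--) {
        int digit = (keys[i]/exp) % 10;
        sorted[position[digit]-1] = keys[i];
        position[digit]--;
    }
}

void radix_sort(int *keys, int len, int max_key)
{
    int *sorted = malloc(len * sizeof(int));
    if (!sorted) return;

    /* one counting-sort pass per digit, least significant first */
    for(int exp = 1;max_key/exp > 0;exp *= 10) {
        counting_sort_by_digit(sorted, keys, len, exp);
        memcpy(keys, sorted, len * sizeof(int));
    }
    free(sorted);
}

int main(void)
{
    int zips[] = { 94107, 10001, 60601, 2139, 94105, 10001 };
    int len = sizeof(zips) / sizeof(zips[0]);

    radix_sort(zips, len, 99999);

    for(int i = 0;i < len;i++) printf("%d ", zips[i]);
    printf("\n");   /* prints: 2139 10001 10001 60601 94105 94107 */
    return 0;
}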
Parallel Computing
Radix Sort also has nice parallel computing properties. Each element’s digit can be examined independently of every other element, so we can split the input into chunks and have multiple workers count (and later place) their own chunks in parallel, combining the per-chunk counts before the placement step.
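To make that concrete, here is a rough sketch (my own illustration, not from the post) of how the counting pass for a single digit could be split across chunks: each chunk’s histogram can be built independently – those are the loops you would hand to separate threads – and the per-chunk counts are then combined before placement. The actual threading machinery is left out.

#include <stdio.h>

#define NCHUNKS 4   /* imagine one worker per chunk */

/* Build a per-chunk digit histogram for one radix-sort pass.
 * Each chunk's loop is independent of the others, so each could
 * run on its own thread or worker. */
void chunked_counts(const int *keys, int len, int exp, int counts[NCHUNKS][10])
{
    int chunk_len = (len + NCHUNKS - 1) / NCHUNKS;

    for(int c = 0;c < NCHUNKS;c++) {
        int lo = c * chunk_len;
        int hi = (lo + chunk_len < len) ? lo + chunk_len : len;
        for(int d = 0;d < 10;d++) counts[c][d] = 0;
        for(int i = lo;i < hi;i++) {
            counts[c][(keys[i]/exp) % 10]++;
        }
    }
}

/* Combine the per-chunk histograms into one global histogram; the
 * usual prefix sums over this give each chunk its write offsets. */
void merge_counts(int counts[NCHUNKS][10], int total[10])
{
    for(int d = 0;d < 10;d++) {
        total[d] = 0;
        for(int c = 0;c < NCHUNKS;c++) total[d] += counts[c][d];
    }
}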
Drawbacks
Radix Sort is therefore O(d·n), where d is the number of digits in the keys – roughly the log of the range of inputs. This means it is useful only when we know the keys are short, i.e. the range of the data is not too large. Sorting a million five-digit zip codes, for example, needs only five passes over the data. But if the number of digits grows to the order of log(n) we are back to the performance of MergeSort, and if it grows beyond that we do worse!
The other disadvantage of Radix Sort is that it needs extra memory – an auxiliary array as large as the input to hold the output of each pass, plus the small array of digit counts.
Further Study
Radix Sort is an old algorithm, and you can also look at variants that use buckets, sort in place, start from the MSD, or sort with tries – they are also interesting, though less often useful.
I hope you enjoyed this post. Let me know in the forums and see you next time!
charles.lobo
Guest Blogger