Your first choice should depend on anticipated access patterns, and on how much data you're likely to be storing:
- if there's never much data (n less than 30, say), an unsorted array will be fine;
- if you almost never add, delete, or update, a sorted array will be fine;
- if n is less than, say, 1 million, and you're only ever searching for the top element (the one ranked first, or last), a heap will do well, particularly if you are frequently updating elements chosen at random (as you do in an LRU (least-recently-used) queue for a cache, say), because on average such an update is O(1) rather than O(log n);
- if n is less than, say, 1 million, and you're not sure what you'll be searching
for, a balanced tree (say, red-black or AVL) will be fine;
- if n is large (1 million and up, say), you're probably better off with a B-tree or a trie: the performance of balanced binary trees tends to "fall off a cliff" once n is big enough, because memory accesses become too scattered and cache misses really start to hurt.
...but I recommend leaving the choice as open as you can, so that you can benchmark at least one of the alternatives and switch to it if it performs better.
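To make the heap bullet above concrete, here is a minimal sketch (in Python, with hypothetical names, not code from the applications discussed here) of a binary min-heap that keeps a position map, so that an element chosen at random can have its key changed in place and re-sifted, with no search needed to find it first:

```python
class IndexedMinHeap:
    """Binary min-heap that remembers where each item sits,
    so update(item, new_key) needs no search: just a re-sift."""

    def __init__(self):
        self.keys = []    # heap-ordered keys
        self.items = []   # item stored at each heap slot
        self.pos = {}     # item -> index into self.keys

    def push(self, item, key):
        self.keys.append(key)
        self.items.append(item)
        self.pos[item] = len(self.keys) - 1
        self._sift_up(len(self.keys) - 1)

    def peek_min(self):
        return self.items[0], self.keys[0]

    def update(self, item, new_key):
        """Change item's key and restore heap order; on average this
        touches O(1) levels when items/keys are chosen at random."""
        i = self.pos[item]
        old_key = self.keys[i]
        self.keys[i] = new_key
        if new_key < old_key:
            self._sift_up(i)
        else:
            self._sift_down(i)

    def _swap(self, i, j):
        self.keys[i], self.keys[j] = self.keys[j], self.keys[i]
        self.items[i], self.items[j] = self.items[j], self.items[i]
        self.pos[self.items[i]] = i
        self.pos[self.items[j]] = j

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self.keys[i] >= self.keys[parent]:
                break
            self._swap(i, parent)
            i = parent

    def _sift_down(self, i):
        n = len(self.keys)
        while True:
            child = 2 * i + 1
            if child >= n:
                break
            if child + 1 < n and self.keys[child + 1] < self.keys[child]:
                child += 1
            if self.keys[i] <= self.keys[child]:
                break
            self._swap(i, child)
            i = child

# Usage: for an LRU-style workload, the key would be a last-access time,
# and touching an entry is just an update() with the new timestamp.
h = IndexedMinHeap()
for item, key in [("a", 3), ("b", 1), ("c", 2)]:
    h.push(item, key)
h.update("b", 5)   # demote the current minimum without popping it
```

The position map is the design point: without it, finding the element to update would cost O(n), which is exactly what the balanced-tree alternatives avoid by paying O(log n) everywhere.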
Over the last twenty years, I've only worked on two applications where heaps were the best choice for anything (once for an LRU, and once in a nasty operations-research application, restoring additivity to randomly perturbed k-dimensional hypercubes, where most cells in the hypercube appeared in k different heaps and memory was at a premium). However, on those two occasions, they performed vastly better than the alternatives: literally dozens of times faster than balanced trees or B-trees.
For the hypercube problem that I mentioned in the last paragraph, my team lead thought red-black trees would perform better than heaps, but benchmarking showed that red-black trees were far slower (as I recall, about twenty times slower), and although B-trees were significantly faster than red-black trees, heaps beat them comfortably too.
The important feature of the heap, in both the cases I mentioned above, was not the O(1) look-up of the minimum value, but rather the O(1) average update time for an element chosen at random.
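That O(1)-average behaviour is easy to check empirically. The sketch below (a self-contained illustration with assumed names, not the original benchmark) repeatedly overwrites a random slot of an array-based min-heap with a random key and counts how many swaps the re-sift takes; the count stays a small constant rather than growing like log n, because most elements live near the leaves and a random new key rarely travels far:

```python
import heapq
import random

def sift(keys, i):
    """Restore min-heap order after keys[i] changed; return swap count."""
    swaps = 0
    while i > 0 and keys[i] < keys[(i - 1) // 2]:   # sift up
        p = (i - 1) // 2
        keys[i], keys[p] = keys[p], keys[i]
        i, swaps = p, swaps + 1
    n = len(keys)
    while True:                                      # sift down
        c = 2 * i + 1
        if c >= n:
            break
        if c + 1 < n and keys[c + 1] < keys[c]:
            c += 1
        if keys[i] <= keys[c]:
            break
        keys[i], keys[c] = keys[c], keys[i]
        i, swaps = c, swaps + 1
    return swaps

random.seed(1)
n = 1 << 14
keys = [random.random() for _ in range(n)]
heapq.heapify(keys)   # heapq uses the same children-at-2i+1 layout

trials = 20_000
total = 0
for _ in range(trials):
    i = random.randrange(n)
    keys[i] = random.random()   # update an element chosen at random
    total += sift(keys, i)
avg = total / trials
print(f"average swaps per random update: {avg:.2f}")
```

Contrast that with a balanced tree, where every such update costs a full O(log n) descent (plus rebalancing) no matter where the element sits.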
-James Barbetti (Well, I thought I was. But captcha keeps telling me I'm not human)