6
votes

I learned the complexity of deque::insert() from the 2003 C++ standard (section 23.2.1.3), which says:

In the worst case, inserting a single element into a deque takes time linear in the minimum of the distance from the insertion point to the beginning of the deque and the distance from the insertion point to the end of the deque.

I have always understood the STL deque to be implemented as a collection of memory chunks, so I expected an insertion to affect only the elements in the same memory chunk as the insertion position. My question is: what does the standard mean by "linear in the minimum of the distance from the insertion point to the beginning of the deque and the distance from the insertion point to the end of the deque"?

My current understanding is that, because the C++ standard does not mandate a particular implementation of deque, this complexity is just a general worst-case bound. In an actual compiler's implementation, it might be linear in the number of elements in a memory chunk, which may vary with the element size.

Another guess: since insert() invalidates all iterators, perhaps deque needs to update all of its iterators, and that is why it's linear.


4 Answers

8
votes

std::deque is normally (always?) implemented as a collection of chunks of memory, but it won't normally insert a whole new chunk just for you to insert one new element in the middle of the collection. As such, it'll find whether the insertion point is closer to the beginning or end, and shuffle the existing elements to make room for the new element in an existing chunk. It'll only add a new chunk of memory at the beginning or end of the collection.

3
votes

I think you'd be better served by a diagram... let's play with ASCII art!

A deque is usually an array of memory chunks, where all chunks except the front and back ones are full. This is necessary because a deque is a RandomAccessContainer: to get O(1) access to any element, you cannot have an unbounded number of chunks whose sizes you would need to read:

size_t size() const { return first.size + (buckets.size() - 2) * SIZE + last.size; }

T& operator[](size_t i) {
  // The front bucket fills from its back, so its first.size elements
  // occupy slots [SIZE - first.size, SIZE).
  if (i < first.size) { return first[SIZE - first.size + i]; }

  // Skip the front bucket (buckets[0]); all inner buckets are full.
  size_t const correctedIndex = i - first.size;
  return buckets[1 + correctedIndex / SIZE][correctedIndex % SIZE];
}

Those operations are O(1) because of the multiplication/division!

In my example, I'll suppose that a memory chunk is full when it contains 8 elements. In practice, nobody said the size should be fixed, just that all inner buckets shall have the same size.

 // Deque
 0:       ++
 1: ++++++++
 2: ++++++++
 3: ++++++++
 4: +++++

Now say that we want to insert at index 13. It falls somewhere in the bucket labelled 2. There are several strategies we can think about:

  • extend bucket 2 (only)
  • introduce a new bucket either before or after 2 and shuffle only a few elements

But those two strategies would violate the invariant that all "inner" buckets have the same number of elements.

Therefore we are left with shuffling the elements around, either toward the beginning or the end (whichever is cheaper), in our case:

 // Deque
 0:      +++
 1: ++++++++
 2: ++O+++++
 3: ++++++++
 4: +++++

Note how bucket 0 grew.

This shuffle implies that, in the worst case, you'll move half the elements: O(N/2).

deque has O(1) insertion at the beginning and at the end though, because there it's just a matter of constructing the element in the right place, or (if the bucket is full) adding a new bucket.

There are other containers with better insert/erase behavior at arbitrary indices, based on B+ trees. In an indexed B+ tree, instead of a "key" (or alongside it), each node internally maintains a count of the elements that precede a given position. There are various techniques to do this efficiently. In the end you get a container with:

  • O(1): empty, size
  • O(log N): at, insert, erase

You can check the blist module in Python, which implements a Python list-like type backed by such a structure.

2
votes

Your conjectures are essentially correct. Everything depends on the actual implementation. The standard specifies only minimum requirements, both for implementers (who cannot claim conformance if they don't meet the spec) and for users (who must not expect better performance when writing implementation-independent code).

The idea behind the specification is a single chunk of uninitialized memory, with elements allocated around its center until space runs out. Inserting in the middle means shifting elements; inserting at the front or back means just constructing in place (when no space is left, a reallocation is done). Indices and iterators cannot be trusted after a modification, since we cannot assume what was shifted, or in which direction.

More efficient implementations don't use a single chunk but multiple chunks, to spread out the shifting problem and to allocate memory in constant-size blocks from the underlying system (thus limiting reallocation and fragmentation). If you're targeting one of these you can expect better performance; otherwise you had better not assume any particular structural optimization.

1
votes

Linear in the number of elements inserted (copy construction), plus, depending on the particular library implementation, additional time linear in up to the number of elements between the insertion position and one of the ends of the deque. Reference...