I have a collection of elements in a std::vector, sorted in descending order starting from the first element. I have to use a vector because the elements need to live in a contiguous chunk of memory. And I have a collection holding many instances of such vectors (each always sorted in descending order).
Now, sometimes, when I find that the greater collection (the one holding these vectors) has too many elements, I discard the smallest elements from the inner vectors in a way similar to this pseudo-code:
grand_collection: the collection that holds these vectors
T: element type of my vectors
C: the member type of T that participates in the < comparison (this is what sorts the data before it hits any of the vectors)

std::map<C, std::pair<std::vector<T>::const_reverse_iterator, std::vector<T>&>> what_to_delete;

iterate (it = grand_collection.begin() -> grand_collection.end())
{
    iterate (vect_rit = it->rbegin() -> it->rend())
    {
        // ...
        what_to_delete <- (vect_rit->C, pair(vect_rit, *it))
        if (what_to_delete.size() > threshold)
            what_to_delete.erase(what_to_delete.begin());
        // ...
    }
}
Now, after running this code, what_to_delete holds iterators into the original vectors, pointing at the elements I want to remove from those vectors (the overall smallest values). Remember, the original vectors are sorted before they hit this code, which means that for any two entries of what_to_delete at positions n and n - m (with m > 0) that refer to the same vector, there is no way the iterator at position n - m points to an element further from that vector's beginning than the one at position n.
When erasing elements from the original vectors, I have to convert a reverse_iterator to iterator. To do this, I rely on C++11's §24.4.1/1:
The relationship between reverse_iterator and iterator is &*(reverse_iterator(i)) == &*(i - 1)

Which means that to delete the element a vect_rit points to, I use:

vector.erase(--vect_rit.base());
Now, according to the C++11 standard, §23.3.6.5/3:

iterator erase(const_iterator position);
Effects: Invalidates iterators and references at or after the point of the erase.
How does this work with reverse_iterators? Is a reverse_iterator internally implemented with a reference to the vector's real beginning (vector[0]), transforming vect_rit into a classic iterator so that erasing would be safe? Or does reverse_iterator use rbegin() (whose base is vector[vector.size()], i.e. end()) as its reference point, so that deleting anything further from the vector's 0-index would still invalidate my reverse iterator?
Edit:
It looks like reverse_iterator uses rbegin() as its reference point. Erasing elements the way I described gave me errors about a non-dereferenceable iterator after the first element was deleted, whereas storing classic iterators (converting to const_iterator) while inserting into what_to_delete worked correctly.
Now, for future reference, does the Standard specify what should be treated as the reference point of a random-access reverse_iterator? Or is this an implementation detail?
Thanks!
Comments:

You could avoid the reverse_iterator here: std::vector has random-access iterators, which means you can just use a regular iterator starting from .end() and move it backwards. This way, you don't have to do much magic to use .erase(). – Michał Górny

I'm not sure a reverse_iterator even has a "reference point" in the sense you're talking about. It's just a convenience wrapper around a normal bidirectional/random-access iterator; in particular, the invalidation semantics for a reverse_iterator are "the reverse_iterator is valid if and only if the wrapped Iterator is valid." It seems to me that the main point of confusion is that the actual Iterator wrapped by a reverse_iterator points to a position one later than the reverse_iterator appears to point. – Managu

I'm not sure why you need the std::map<C, std::pair<T::const_reverse_iterator, std::vector<T>&>> when it seems that all the question is really about is whether or not a reverse_iterator into a vector gets invalidated when it's used to erase an element. Or am I simply not understanding the real question? – Michael Burr