short answer:
almost never
long answer:
Whenever you need a vector of char bigger than 2 GB on a 32-bit system. In every other use case, using a signed type is much safer than using an unsigned type.
example:
std::vector<A> data;
[...]
size_t i = calc_index(param1, param2);
if( i - 1 < 0 ) {                    // bug: i is unsigned, so i - 1 wraps around
    return LEFT_BORDER;              // instead of going negative; this branch never runs
} else if( i >= data.size() - 1 ) {  // bug: for empty data, size() - 1 wraps to SIZE_MAX,
    return RIGHT_BORDER;             // so i = 0 slips through to the access below
}
return calc_something(data[i-1], data[i], data[i+1]);
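Both bugs come from unsigned wraparound. A tiny self-contained program (made up just to demonstrate, not part of the example above) makes them visible:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t i = 0;
    std::cout << i - 1 << '\n';           // wraps to SIZE_MAX, e.g. 18446744073709551615
    std::cout << (i - 1 < 0) << '\n';     // prints 0: a size_t is never negative

    std::vector<int> data;                // empty vector
    std::cout << data.size() - 1 << '\n'; // also wraps to SIZE_MAX, so the border
                                          // check above lets i = 0 fall through
}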
The signed equivalent of size_t is ptrdiff_t, not int. But using int is still much better in most cases than size_t. ptrdiff_t is a 32-bit type on 32-bit systems and a 64-bit type on 64-bit systems (usually long, though on 64-bit Windows it is long long).
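With ptrdiff_t, the example from above can be written so that both border checks actually work. This is only a sketch, reusing the same hypothetical calc_index / calc_something helpers:

std::vector<A> data;
[...]
ptrdiff_t i = calc_index(param1, param2);
if( i - 1 < 0 ) {                              // i is signed, so this can be true now
    return LEFT_BORDER;
} else if( i >= ptrdiff_t(data.size()) - 1 ) { // convert before subtracting, so an
    return RIGHT_BORDER;                       // empty vector yields -1, not SIZE_MAX
}
return calc_something(data[i-1], data[i], data[i+1]);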
This means that you always have to convert to and from size_t whenever you interact with the std:: containers, which is not very pretty. But at the GoingNative conference, the authors of C++ mentioned that designing std::vector with an unsigned size_t was a mistake.
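One way to keep those conversions in a single place is a small helper that returns a container's size as a signed value. The name signed_size below is made up for this sketch (C++20 later standardized the same idea as std::ssize):

#include <cstddef>
#include <vector>

// sketch: return a container's size as a signed value
template <typename Container>
std::ptrdiff_t signed_size(const Container& c) {
    return static_cast<std::ptrdiff_t>(c.size());
}

int main() {
    std::vector<int> v(10, 0);
    for (std::ptrdiff_t i = 0; i < signed_size(v); ++i) {
        v[std::size_t(i)] = static_cast<int>(i); // one explicit cast at the use site
    }
}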
If your compiler gives you warnings on implicit conversions from ptrdiff_t to size_t, you can make it explicit with constructor syntax:
calc_something(data[size_t(i-1)], data[size_t(i)], data[size_t(i+1)]);
If you just want to iterate over a collection, without bounds checking, use a range-based for loop:
for(const auto& d : data) {
    [...]
}
Here are some words from Bjarne Stroustrup (the creator of C++) at GoingNative.
For some people, this signed/unsigned design error in the STL is reason enough not to use std::vector, but to roll their own implementation instead.