The standard appears to provide two rules for distinguishing between implicit conversion sequences that involve user-defined conversion operators:
13.3.3 Best viable function [over.match.best]
[...] a viable function F1 is defined to be a better function than another viable function F2 if [...]
- the context is an initialization by user-defined conversion (see 8.5, 13.3.1.5, and 13.3.1.6) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type.
13.3.3.2 Ranking implicit conversion sequences [over.ics.rank]
3 - Two implicit conversion sequences of the same form are indistinguishable conversion sequences unless one of the following rules applies: [...]
- User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or aggregate initialization and the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2.
As I understand it, 13.3.3 allows the compiler to distinguish between different user-defined conversion operators, while 13.3.3.2 allows the compiler to distinguish between different functions (overloads of some function f) that each require a user-defined conversion in their arguments (see my sidebar to "Given the following code (in GCC 4.3), why is the conversion to reference called twice?").
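To make the two rules concrete, here is a minimal sketch of how I read each one (B, C and f are my own illustrative names, not from the standard):

struct B {
    operator int()   { return 1; }     // 13.3.3: the standard conversion sequence
    operator float() { return 2.0f; }  // from each return type to the destination
};                                     // type is compared

struct C {
    operator short() { return 3; }     // 13.3.3.2:3: overloads using the *same*
};                                     // conversion function are compared via
                                       // their second standard conversion sequences
void f(int)  {}  // short -> int is an integral promotion
void f(long) {}  // short -> long is an integral conversion

int main() {
    int i = B();  // operator int() wins: int -> int (exact match) beats
                  // float -> int (floating-integral conversion)
    f(C());       // calls f(int): both sequences contain C::operator short(),
                  // and the promotion outranks the conversion
    (void)i;
}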
Are there any other rules that can distinguish between user-defined conversion sequences? The answer at https://stackoverflow.com/a/1384044/567292 indicates that 13.3.3.2:3 can distinguish between user-defined conversion sequences based on the cv-qualification of the implicit object parameter (for a conversion operator) or of the single non-default parameter (for a constructor or aggregate initialisation). But I don't see how that can be relevant: it would require comparing the first standard conversion sequences of the respective user-defined conversion sequences, and the standard doesn't appear to mention any such comparison.
Supposing that S1 is better than S2, where S1 is the first standard conversion sequence of U1 and S2 is the first standard conversion sequence of U2, does it follow that U1 is better than U2? In other words, is this code well-formed?
struct A {
    operator int();
    operator char() const;
} a;

void foo(double);

int main() {
    foo(a);
}
g++ (4.5.1), Clang (3.0) and Comeau (4.3.10.1) all accept it, preferring the non-const-qualified A::operator int(), but I'd expect it to be rejected as ambiguous and thus ill-formed. Is this a deficiency in the standard or in my understanding of it?
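For comparison, overload resolution between ordinary member functions does use the binding of the implied object argument as a tiebreak, which is presumably the behaviour the compilers are extending to conversion operators here (X and g are my own illustrative names):

struct X {
    void g() {}        // implicit object parameter has type X&
    void g() const {}  // implicit object parameter has type const X&
};

int main() {
    X x;
    x.g();  // calls the non-const g(): binding the lvalue x to X& is a
            // better reference binding than binding it to const X&
}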
char -> double and int -> double are both viable, equally ranked standard conversions. But A -> int is preferred over const A -> char, because it doesn't require the additional conversion A -> const A. – Kerrek SB