N4659, 16.3.3.1 "Implicit conversion sequences" [over.best.ics], says:
10 If several different sequences of conversions exist that each convert the argument to the parameter type, the implicit conversion sequence associated with the parameter is defined to be the unique conversion sequence designated the ambiguous conversion sequence. For the purpose of ranking implicit conversion sequences as described in 16.3.3.2, the ambiguous conversion sequence is treated as a user-defined conversion sequence that is indistinguishable from any other user-defined conversion sequence. [Note: This rule prevents a function from becoming non-viable because of an ambiguous conversion sequence for one of its parameters.] If a function that uses the ambiguous conversion sequence is selected as the best viable function, the call will be ill-formed because the conversion of one of the arguments in the call is ambiguous.
(The corresponding section of the current draft is 12.3.3.1)
What is the intended purpose of this rule and the concept of ambiguous conversion sequence it introduces?
The note supplied in the text states that the purpose of this rule is "to prevent a function from becoming non-viable because of an ambiguous conversion sequence for one of its parameters". Um... What does this actually refer to? The concept of a viable function is defined in the preceding sections of the document, and it does not depend on the ambiguity of conversions at all (a conversion for each argument must exist, but it does not have to be unambiguous). And there seems to be no provision for a viable function to somehow "become non-viable" later, whether because of some ambiguity or anything else. Viable functions are enumerated, they compete against each other for being "the best" in accordance with certain rules, and if there is a single "winner", the resolution is successful. At no point in this process may (or need) a viable function turn into a non-viable one.
The example provided within the aforementioned paragraph is not very enlightening (i.e. it is not clear what role the above rule plays in that example).
The question originally popped up in connection with this simple example:

```cpp
struct S
{
    operator int() const { return 0; }
    operator long() const { return 0; }
};

void foo(int) {}

int main()
{
    S s;
    foo(s);
}
```
Let's just mechanically apply the above rule here. `foo` is a viable function. There are two implicit conversion sequences from argument type `S` to parameter type `int`: `S -> int` and `S -> long -> int`. This means that, per the above rule, we have to "pack" them into a single ambiguous conversion sequence. Then we conclude that `foo` is the best viable function. Then we discover that it uses our ambiguous conversion sequence. Consequently, per the above rule, the code is ill-formed.

This seems to make no sense. The natural expectation here is that the `S -> int` conversion should be chosen, since it is ranked higher than the `S -> long -> int` conversion. All compilers I know follow that "natural" overload resolution.
So, what am I misunderstanding?
Comments:

"`S -> int` and `S -> long -> int`. How is there only one? Note that this "packing" of multiple conversion sequences into one ambiguous conversion sequence is done quite early: before we begin ranking implicit conversion sequences and choosing the best viable function." – AnT

"… the `-> int` and `-> long` parts says "Overload resolution is used to select the conversion function to be invoked." (Note that this is a nested overload resolution with its own implicit conversion sequence, which is not to be confused with the one for `foo`.) In this case, overload resolution selected `operator int` using a tie breaker ( eel.is/c++draft/over.match#best-2.2 ), because `int -> int` is better than `long -> int`. So there is no user-defined conversion sequence here that uses `operator long`." – Johannes Schaub - litb