352
votes

The C# compiler requires that whenever a custom type defines operator ==, it must also define != (see here).

Why?

I'm curious to know why the designers thought it necessary and why can't the compiler default to a reasonable implementation for either of the operators when only the other is present. For example, Lua lets you define only the equality operator and you get the other for free. C# could do the same by asking you to define either == or both == and != and then automatically compile the missing != operator as !(left == right).

I understand that there are weird corner cases where some entities may be neither equal nor unequal (like IEEE-754 NaNs), but those seem like the exception, not the rule. So this doesn't explain why the C# compiler designers made the exception the rule.

I've seen cases of poor workmanship where the equality operator is defined, then the inequality operator is a copy-paste with each and every comparison reversed and every && switched to a || (you get the point... basically !(a==b) expanded through De Morgan's rules). That's poor practice that the compiler could eliminate by design, as is the case with Lua.
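To make that concrete, here is a minimal sketch (with a hypothetical Point type) of the two styles: the commented-out != is the copy-paste De Morgan expansion, and the active one is the trivial negation the compiler could in principle generate for you.

public struct Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }

    public static bool operator ==(Point a, Point b)
        => a.X == b.X && a.Y == b.Y;

    // Copy-paste anti-pattern: every comparison reversed, every && flipped to ||.
    // public static bool operator !=(Point a, Point b)
    //     => a.X != b.X || a.Y != b.Y;

    // The trivial implementation the compiler could have generated automatically.
    public static bool operator !=(Point a, Point b)
        => !(a == b);

    public override bool Equals(object obj) => obj is Point p && this == p;
    public override int GetHashCode() => unchecked((X * 397) ^ Y);
}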

Note: The same holds for the operators < > <= >=. I can't imagine cases where you'd need to define these in unnatural ways. Lua lets you define only < and <= and derives > and >= naturally as their negations. Why doesn't C# do the same (at least by default)?

EDIT

Apparently there are valid reasons to allow the programmer to implement checks for equality and inequality however they like. Some of the answers point to cases where that may be nice.

The kernel of my question, however, is: why is this forcibly required in C# when usually it's not logically necessary?

It is also in striking contrast to design choices for .NET interfaces like Object.Equals, IEquatable.Equals, and IEqualityComparer.Equals, where the lack of a NotEquals counterpart shows that the framework considers objects for which Equals() returns false to be unequal, and that's that. Furthermore, classes like Dictionary and methods like .Contains() depend exclusively on the aforementioned interfaces and do not use the operators directly even if they are defined. In fact, when ReSharper generates equality members, it defines both == and != in terms of Equals(), and even then only if the user chooses to generate operators at all. The equality operators aren't needed by the framework to understand object equality.
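Equality members generated in terms of Equals() typically look something like this rough sketch (hypothetical Money type, not actual ReSharper output); note that Dictionary and .Contains() only ever call Equals()/GetHashCode(), never the operators:

public sealed class Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public bool Equals(Money other)
    {
        if (ReferenceEquals(other, null)) return false;
        if (ReferenceEquals(this, other)) return true;
        return Amount == other.Amount && Currency == other.Currency;
    }

    public override bool Equals(object obj) => Equals(obj as Money);

    public override int GetHashCode()
        => Amount.GetHashCode() ^ (Currency != null ? Currency.GetHashCode() : 0);

    // The operators are thin wrappers over Equals; the framework never needs them.
    public static bool operator ==(Money left, Money right) => Equals(left, right);
    public static bool operator !=(Money left, Money right) => !Equals(left, right);
}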

Basically, the .NET framework doesn't care about these operators, it only cares about a few Equals methods. The decision to require both == and != operators to be defined in tandem by the user is related purely to the language design and not object semantics as far as .NET is concerned.

+1 for an excellent question. There doesn't seem to be any reason at all... of course the compiler could assume that a != b can be translated into !(a == b) unless it's told otherwise. I'm curious to know why they decided to do this, too. – Patrick87
This is the fastest I've seen a question voted up and favorited. – Christopher Currens
I'm sure @Eric Lippert can provide a sound answer as well. – Yuck
"The scientist is not a person who gives the right answers, he's one who asks the right questions". This is why in SE, questions are encouraged. And it works! – Adriano Carneiro
The same question for Python: stackoverflow.com/questions/4969629/… – Josh Lee

13 Answers

165
votes

I can't speak for the language designers, but from what I can reason, it seems like it was an intentional, proper design decision.

Looking at this basic F# code, you can compile it into a working library. This is legal code for F#, and it overloads only the equality operator, not the inequality operator:

module Module1

type Foo() =
    let mutable myInternalValue = 0
    member this.Prop
        with get () = myInternalValue
        and set (value) = myInternalValue <- value

    static member op_Equality (left : Foo, right : Foo) = left.Prop = right.Prop
    //static member op_Inequality (left : Foo, right : Foo) = left.Prop <> right.Prop

This does exactly what it looks like. It creates an equality comparer on == only, and checks to see if the internal values of the class are equal.

While you can't create a class like this in C#, you can use one that was compiled for .NET. It's obvious it will use our overloaded operator for ==. So, what does the runtime use for !=?

The C# ECMA standard has a whole bunch of rules (section 14.9) explaining how to determine which operator to use when evaluating equality. To put it in overly simplified and thus not perfectly accurate terms: if the types being compared are the same type and there is an overloaded equality operator present, it will use that overload rather than the standard reference-equality operator inherited from Object. It is no surprise, then, that if only one of the operators is present, the compiler uses the default reference-equality operator, which all objects have, for the operator that has no overload.¹
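Assuming the F# type above is compiled into a referenced assembly, a C# consumer would, per that rule, see something like this (a sketch, not verified output):

var a = new Module1.Foo { Prop = 42 };
var b = new Module1.Foo { Prop = 42 };

System.Console.WriteLine(a == b);   // True: binds to the overloaded op_Equality (value comparison)
System.Console.WriteLine(a != b);   // True: no op_Inequality exists, so this falls back to
                                    //       the default reference comparison described above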

Knowing that this is the case, the real question is: why was it designed this way, and why doesn't the compiler figure it out on its own? A lot of people are saying this wasn't a design decision, but I like to think it was thought out this way, especially given the fact that all objects have a default equality operator.

So, why doesn't the compiler automagically create the != operator? I can't know for sure unless someone from Microsoft confirms it, but this is what I can determine from reasoning about the facts.


To prevent unexpected behavior

Perhaps I want to do a value comparison on == to test equality. However, when it comes to !=, perhaps I don't care whether the values are equal at all; I might only want != to report inequality when the references differ, because for my program's purposes that is all "unequal" means. After all, this is actually the default behavior of C# (if neither operator were overloaded, as would be the case with some .NET libraries written in another language). If the compiler were adding in code automatically, I could no longer rely on it to output code that is compliant with what I wrote. The compiler should not write hidden code that changes the behavior of your own, especially when the code you've written is within the standards of both C# and the CLI.
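A contrived sketch of that kind of asymmetry (hypothetical Handle type), which is only expressible because you write both operators yourself; auto-generating != as !(a == b) would silently change it:

public sealed class Handle
{
    public int Id { get; set; }

    // == does a value comparison (null-vs-null glossed over in this sketch)...
    public static bool operator ==(Handle a, Handle b)
        => !ReferenceEquals(a, null) && !ReferenceEquals(b, null) && a.Id == b.Id;

    // ...while != is deliberately kept as a pure reference test.
    public static bool operator !=(Handle a, Handle b)
        => !ReferenceEquals(a, b);

    public override bool Equals(object obj) => obj is Handle h && this == h;
    public override int GetHashCode() => Id;
}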

In terms of it forcing you to overload both, instead of falling back to the default behavior, I can only say firmly that it is in the standard (ECMA-334 17.9.2)². The standard does not specify why. I believe this is due to the fact that C# borrows much behavior from C++. See below for more on this.


When you overload != and ==, you do not have to return bool.

This is another likely reason. In C#, this function:

public static int operator ==(MyClass a, MyClass b) { return 0; }

is as valid as this one:

public static bool operator ==(MyClass a, MyClass b) { return true; }

If you're returning something other than bool, the compiler cannot automatically infer an opposite operator. Furthermore, in the case where your operator does return bool, it just doesn't make sense for them to generate code that would only exist in that one specific case, or, as I said above, code that hides the default behavior of the CLR.


C# borrows much from C++³

When C# was introduced, there was an article in MSDN Magazine that said this about C#:

Many developers wish there was a language that was easy to write, read, and maintain like Visual Basic, but that still provided the power and flexibility of C++.

Yes, the design goal for C# was to give nearly the same amount of power as C++, sacrificing only a little for conveniences like rigid type-safety and garbage collection. C# was strongly modeled after C++.

You may not be surprised to learn that in C++, the equality operators do not have to return bool, as shown in this example program.

Now, C++ does not directly require you to overload the complementary operator. If you compiled the code in the example program, you would see that it runs with no errors. However, if you tried adding the line:

cout << (a != b);

you will get

compiler error C2678 (MSVC): binary '!=' : no operator found which takes a left-hand operand of type 'Test' (or there is no acceptable conversion).

So, while C++ itself doesn't require you to overload in pairs, it will not let you use an equality operator that you haven't overloaded on a custom class. It's valid in .NET, because all objects have a default one; C++ does not.


1. As a side note, the C# standard still requires you to overload the pair of operators if you want to overload either one. This is part of the standard and not simply the compiler. However, the same rules regarding which operator to call apply when you're accessing a .NET library written in another language that doesn't have the same requirements.

2. ECMA-334 (pdf) (http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf)

3. And Java, but that's really not the point here

54
votes

Probably in case someone needs to implement three-valued logic (i.e. null). In cases like that - ANSI standard SQL, for instance - the operators can't simply be negated depending on the input.

You could have a case where:

var a = new SomeObject();

And a == true returns false and a == false also returns false.
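A rough sketch of that idea, in the spirit of SQL's NULL semantics (hypothetical Tri type): comparing against an unknown value reports neither equality nor inequality, so != cannot simply be generated as !(==):

public struct Tri
{
    private readonly bool? value;              // null plays the role of "unknown"
    public Tri(bool? value) { this.value = value; }

    public static bool operator ==(Tri a, bool b)
        => a.value.HasValue && a.value.Value == b;

    public static bool operator !=(Tri a, bool b)
        => a.value.HasValue && a.value.Value != b;

    public override bool Equals(object obj) => value.Equals(obj);
    public override int GetHashCode() => value.GetHashCode();
}

// var a = new Tri(null);
// (a == true)  -> false
// (a == false) -> false
// (a != true)  -> false      // !(a == true) would wrongly report true here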

25
votes

Other than the fact that C# defers to C++ in many areas, the best explanation I can think of is that in some cases you might want to take a slightly different approach to proving "not equality" than to proving "equality".

Obviously with string comparison, for example, you can just test for equality and return out of the loop when you see non-matching characters. However, it might not be so clean with more complicated problems. The Bloom filter comes to mind; it's very easy to quickly tell if an element is not in the set, but difficult to tell if it is in the set. While the same return technique could apply, the code might not be as pretty.
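As a rough illustration (hypothetical Blob type, null handling omitted): a cached fingerprint can prove "not equal" cheaply but can never prove "equal", so the two operators naturally take different routes:

public sealed class Blob
{
    private readonly byte[] data;
    private readonly int fingerprint;   // cheap summary computed once

    public Blob(byte[] data)
    {
        this.data = data;
        foreach (var b in data) fingerprint = unchecked(fingerprint * 31 + b);
    }

    private static bool SameContent(Blob a, Blob b)
    {
        if (a.data.Length != b.data.Length) return false;
        for (int i = 0; i < a.data.Length; i++)
            if (a.data[i] != b.data[i]) return false;
        return true;
    }

    public static bool operator ==(Blob a, Blob b) => SameContent(a, b);

    public static bool operator !=(Blob a, Blob b)
        => a.fingerprint != b.fingerprint   // different fingerprints: definitely unequal
        || !SameContent(a, b);              // same fingerprint: must compare for real

    public override bool Equals(object obj) => obj is Blob other && SameContent(this, other);
    public override int GetHashCode() => fingerprint;
}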

23
votes

If you look at implementations of the == and != overloads in the .NET source, they often don't implement != as !(left == right). They implement it fully (like ==) with negated logic. For example, DateTime implements == as

return d1.InternalTicks == d2.InternalTicks;

and != as

return d1.InternalTicks != d2.InternalTicks;

If you (or the compiler if it did it implicitly) were to implement != as

return !(d1==d2);

then you are making an assumption about the internal implementation of == and != in the things your class is referencing. Avoiding that assumption may be the philosophy behind their decision.
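Written out as full declarations, the two approaches look roughly like this (hypothetical Timestamp type, paraphrasing the pattern rather than the actual .NET source):

public struct Timestamp
{
    public long Ticks;

    // != mirrors == with negated logic instead of wrapping it.
    public static bool operator ==(Timestamp d1, Timestamp d2) => d1.Ticks == d2.Ticks;
    public static bool operator !=(Timestamp d1, Timestamp d2) => d1.Ticks != d2.Ticks;

    // The alternative the question proposes, which bakes in an assumption about ==:
    // public static bool operator !=(Timestamp d1, Timestamp d2) => !(d1 == d2);

    public override bool Equals(object obj) => obj is Timestamp t && Ticks == t.Ticks;
    public override int GetHashCode() => Ticks.GetHashCode();
}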

18
votes

To answer your edit, regarding why you are forced to override both if you override one, it's all in the inheritance.

If you override ==, most likely to provide some sort of semantic or structural equality (for instance, DateTimes are equal if their InternalTicks properties are equal even though they may be different instances), then you are changing the default behavior of the operator from Object, which is the parent of all .NET objects. The == operator is, in C#, a method whose base implementation, Object.operator(==), performs a referential comparison. Object.operator(!=) is another, different method, which also performs a referential comparison.

In almost any other case of method overriding, it would be illogical to presume that overriding one method would also result in a behavioral change to an antonymic method. If you created a class with Increment() and Decrement() methods, and overrode Increment() in a child class, would you expect Decrement() to also be overridden with the opposite of your overridden behavior? The compiler can't be made smart enough to generate an inverse function for any implementation of an operator in all possible cases.

However, operators, though implemented very similarly to methods, conceptually work in pairs; == and !=, < and >, and <= and >=. It would be illogical in this case from the standpoint of a consumer to think that != worked any differently than ==. So, the compiler can't be made to assume that a!=b == !(a==b) in all cases, but it's generally expected that == and != should operate in a similar fashion, so the compiler forces you to implement in pairs, however you actually end up doing that. If, for your class, a!=b == !(a==b), then simply implement the != operator using !(==), but if that rule does not hold in all cases for your object (for instance, if comparison with a particular value, equal or unequal, is not valid), then you have to be smarter than the IDE.

The REAL question that should be asked is why < and > and <= and >= are pairs for comparative operators that must be implemented concurrently, when in numeric terms !(a < b) == a >= b and !(a > b) == a <= b. You should be required to implement all four if you override one, and you should probably be required to override == (and !=) as well, because (a <= b) == (a == b) if a is semantically equal to b.
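If you do end up implementing all of them, one consistent way to satisfy the pairing rules is to route every operator through a single CompareTo, as in this sketch (hypothetical Version2 type, null handling omitted):

public sealed class Version2 : IComparable<Version2>
{
    public int Major, Minor;

    public int CompareTo(Version2 other)
        => Major != other.Major ? Major.CompareTo(other.Major) : Minor.CompareTo(other.Minor);

    // All six operators derive from one ordering, so they can never disagree.
    public static bool operator <(Version2 a, Version2 b)  => a.CompareTo(b) < 0;
    public static bool operator >(Version2 a, Version2 b)  => a.CompareTo(b) > 0;
    public static bool operator <=(Version2 a, Version2 b) => a.CompareTo(b) <= 0;
    public static bool operator >=(Version2 a, Version2 b) => a.CompareTo(b) >= 0;
    public static bool operator ==(Version2 a, Version2 b) => a.CompareTo(b) == 0;
    public static bool operator !=(Version2 a, Version2 b) => a.CompareTo(b) != 0;

    public override bool Equals(object obj) => obj is Version2 v && CompareTo(v) == 0;
    public override int GetHashCode() => unchecked((Major * 397) ^ Minor);
}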

15
votes

If you overload == for your custom type and not !=, then != will be handled by the object != object operator, since everything is derived from object, and that could behave very differently from CustomType != CustomType.

Also, the language creators probably wanted it this way to allow the most flexibility for coders, and so that they are not making assumptions about what you intend to do.

11
votes

This is what comes to my mind first:

  • What if testing inequality is much faster than testing equality?
  • What if in some cases you want to return false for both == and != (i.e. if they can't be compared for some reason)?

6
votes

Well, it's probably just a design choice, but as you say, x != y doesn't have to be the same as !(x == y). By not adding a default implementation, the language makes certain that you cannot forget to provide a specific implementation. And if it's indeed as trivial as you say, you can just implement one using the other. I don't see how this is 'poor practice'.

There may be some other differences between C# and Lua too...

6
votes

The key words in your question are "why" and "must".

As a result:

Answering that it's this way because they designed it to be so is true ... but doesn't answer the "why" part of your question.

Answering that it might sometimes be helpful to override both of these independently is true ... but doesn't answer the "must" part of your question.

I think the simple answer is that there isn't any convincing reason why C# requires you to override both.

The language should allow you to override only ==, and provide a default implementation of != that is simply the negation of that. If you happen to want to override != as well, have at it.

It wasn't a good decision. Humans design languages, humans aren't perfect, C# isn't perfect. Shrug and Q.E.D.

5
votes

Just to add to the excellent answers here:

Consider what would happen in the debugger, when you try to step into a != operator and end up in an == operator instead! Talk about confusing!

It makes sense that the CLR would allow you the freedom to leave out one or the other of the operators, as it must work with many languages. But there are plenty of examples of C# not exposing CLR features (ref returns and locals, for example), and plenty of examples of it implementing features not in the CLR itself (e.g. using, lock, foreach, etc.).

3
votes

Programming languages are syntactical rearrangements of exceptionally complex logical statements. With that in mind, can you define a case of equality without defining a case of non-equality? The answer is no. For an object a to be equal to an object b, the inverse statement, that object a does not equal b, must be false. Another way to show this is

if a == b then !(a != b)

This provides the language with the definite ability to determine the equality of objects. For instance, the comparison NULL != NULL can throw a wrench into the definition of an equality system that does not implement a non-equality statement.

Now, in regards to the idea of != simply being a replaceable definition, as in

if !(a==b) then a!=b

I can't argue with that. However, it was most likely a decision by the C# language specification group that the programmer be forced to explicitly define the equality and non-equality of an object together.

2
votes

In short, forced consistency.

'==' and '!=' are always true opposites, no matter how you define them; that is implied by their verbal definitions, "equals" and "not equals." By defining only one of them, you open yourself up to an equality-operator inconsistency where '==' and '!=' can both be true, or both be false, for two given values. You must define both because, when you elect to define one, you must also define the other appropriately so that it is blatantly clear what your definition of "equality" is. The other option for the compiler would be to let you override only '==' or only '!=' and have the other defined inherently as its negation. Obviously, that isn't the case with the C# compiler, and I'm sure there's a valid reason for that, which may be attributable strictly to a choice of simplicity.

The question you should be asking is "why do I need to override the operators at all?" That is a strong decision to make which requires strong reasoning. For objects, '==' and '!=' compare by reference. If you are overriding them to NOT compare by reference, you are creating a general operator inconsistency that is not apparent to any other developer who peruses that code. If you are attempting to ask the question "is the state of these two instances equivalent?", then you should implement IEquatable, define Equals(), and use that method call.

Lastly, IEquatable does not define NotEquals() for the same reason: the potential to open up equality-operator inconsistencies. NotEquals() should ALWAYS return !Equals(). By opening up the definition of NotEquals() to the class implementing Equals(), you would once again be creating the consistency problem in determining equality.

Edit: This is simply my reasoning.

-3
votes

Probably just something they didn't think of or didn't have time to do.

I always use your method when I overload ==. Then I just use it in the other one.

You're right, with a small amount of work, the compiler could give this to us for free.