UPDATE from 2019: This post was originally on Coverity’s “Ask The Bug Guys” developer blog but that was taken down; an archived version is here. The formatting is still a bit messed up; I’ll fix it later.
Today’s episode of Ask The Bug Guys features a C# question from reader Jan:
Hi Bug Guys! We recently had a bug in our code comparing `int`s and `short`s, where calling `Equals()` produced different results from using the `==` operator. Also, the behaviour of `Equals()` was not symmetric. Here's a code snippet that reproduces the behaviour we observed:
```csharp
int myInt = 1;
short myShort = 1;
Console.WriteLine(myInt == myShort);      // true
Console.WriteLine(myShort == myInt);      // true
Console.WriteLine(myInt.Equals(myShort)); // true
Console.WriteLine(myShort.Equals(myInt)); // false!
```
We were quite surprised when we found this. What explains this difference? Is it better to use `==` instead of `Equals()` when comparing integer types?
Hi Jan! Thanks for the great question.
C# was designed to be a “pit of success” language: that is, a language where the obvious technique and the correct technique are the same. And for the most part, that’s true. Unfortunately, equality is one of the areas where there are significant pits of failure, and you’ve fallen right into one of them.
I’m going to add some additional cases to your program to illustrate a number of different equalities.
```csharp
int myInt = 1;
short myShort = 1;
object objInt1 = myInt;
object objInt2 = myInt;
object objShort = myShort;
Console.WriteLine(myInt == myShort);          // scenario 1: true
Console.WriteLine(myShort == myInt);          // scenario 2: true
Console.WriteLine(myInt.Equals(myShort));     // scenario 3: true
Console.WriteLine(myShort.Equals(myInt));     // scenario 4: false!
Console.WriteLine(objInt1 == objInt1);        // scenario 5: true
Console.WriteLine(objInt1 == objShort);       // scenario 6: false!!
Console.WriteLine(objInt1 == objInt2);        // scenario 7: false!!!
Console.WriteLine(Equals(objInt1, objInt2));  // scenario 8: true
Console.WriteLine(Equals(objInt1, objShort)); // scenario 9: false!?!
```
What the heck? As it turns out, we have many different kinds of equality demonstrated here.
In scenarios one and two we must first determine what the `==` operator means. C# defines over a dozen different built-in `==` operators:
```csharp
object == object
string == string
int    == int
uint   == uint
long   == long
ulong  == ulong
...
```
There is no `int == short` or `short == int` operator, so the unique best match on the list of built-in operators must be determined. It turns out that the best match is `int == int`. So the `short` is converted to `int` and then the two values are compared as numbers. They are therefore equal.
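Concretely, the compiler treats scenarios one and two as though the conversion were written out; here is a sketch, using the variables from the sample above:

```csharp
// The short operand is implicitly widened to int, and then the built-in
// int == int operator compares the two values numerically.
Console.WriteLine(myInt == (int)myShort); // scenario 1, spelled out: true
Console.WriteLine((int)myShort == myInt); // scenario 2, spelled out: true
```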
In scenario three we must first solve an overload resolution problem to determine what `Equals` means. The receiver is of type `int` and it has three methods named `Equals`:
```csharp
Equals(object, object) // static method from object
Equals(object)         // virtual method from object
Equals(int)            // implements IEquatable<int>.Equals(int)
```
The first one we can eliminate because there are not enough arguments. Of the other two, the unique best method is the one that takes an `int`; it is always better to convert the `short` argument to `int` than to `object`. Therefore we call `Equals(int)`, which then compares the two integers again using value equality, so this is true.
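Spelled out with the conversion made explicit, scenario three amounts to this (again using the variables from the sample above):

```csharp
// Overload resolution picks int.Equals(int), so the short argument is
// widened to int and the two values are compared numerically.
Console.WriteLine(myInt.Equals((int)myShort)); // scenario 3, spelled out: true
```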
In scenario four we again must determine what `Equals` means. The receiver is of type `short`, which again has three methods named `Equals`:
```csharp
Equals(object, object) // static method from object
Equals(object)         // virtual method from object
Equals(short)          // implements IEquatable<short>.Equals(short)
```
Overload resolution eliminates the first because there are too few arguments and eliminates the third because there is no implicit conversion from `int` to `short`. That leaves `short.Equals(object)`, which has the moral equivalent of this implementation:
```csharp
bool Equals(object z)
{
    return z is short && (short)z == this;
}
```
That is, for this method to return true the argument passed in must be a boxed `short`, and when unboxed it must be equal to the receiver. Since the argument is a boxed `int`, this returns false. There is no special gear in this implementation that says "well, what if I were to convert myself to the type of the argument and then compare?"
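You can observe the failing type check directly; here is a small sketch using the variables from the sample above:

```csharp
// The unboxing type check inside short.Equals(object) is what fails:
object boxedInt = myInt;              // boxing an int produces a boxed int...
Console.WriteLine(boxedInt is short); // false: ...which is not a boxed short
Console.WriteLine(boxedInt is int);   // true
```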
In scenarios five, six and seven operator overload resolution chooses the `object == object` form, which is equivalent to a call to `Object.ReferenceEquals`. Clearly the two references are equal in case five and unequal in cases six and seven. Whether the values of the objects when interpreted as numbers are equal does not come into play at all; only reference equality is relevant.
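The key fact is that each implicit conversion to `object` boxes the value into a freshly allocated object; here is a sketch using the variables from the sample above:

```csharp
// objInt1 and objInt2 are two different boxes around the same number,
// so reference equality reports them as unequal.
Console.WriteLine(ReferenceEquals(objInt1, objInt1)); // true: the same box
Console.WriteLine(ReferenceEquals(objInt1, objInt2)); // false: different boxes
```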
In scenarios eight and nine overload resolution chooses the static method `Object.Equals`, which you can think of as being implemented like this:
```csharp
public static bool Equals(object x, object y)
{
    if (ReferenceEquals(x, y)) return true;
    if (ReferenceEquals(x, null)) return false;
    if (ReferenceEquals(y, null)) return false;
    return x.Equals(y);
}
```
In scenario eight we have two references that are unequal and not null; therefore we call `int.Equals(object)`, which as you would expect from our previous discussion of `short.Equals(object)` is implemented as the moral equivalent of:
```csharp
bool Equals(object z)
{
    return z is int && (int)z == this;
}
```
Since the argument is of type `int` it is unboxed and compared by value. In scenario nine the argument is a boxed `short`, so the type check fails and this is false.
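As for how to stay out of this particular pit: if you must call `Equals` across different integer types, convert both operands to the same type first. A sketch, using the variables from the sample above:

```csharp
// Converting explicitly makes scenario 4 behave like scenario 3.
// (Beware: the cast from int to short can silently truncate large values.)
Console.WriteLine(myShort.Equals((short)myInt)); // true: short.Equals(short)
Console.WriteLine(((int)myShort).Equals(myInt)); // true: int.Equals(int)
```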
Summing up: I’ve shown nine different ways that two things can be compared for equality; despite the fact that clearly in every case we have the number one on both sides of the equality, equality is true in only half the cases. If you think this is crazy and confusing, you’re right! Equality is tricky in C#.
I’ve been looking a lot at confusing cases for equality over my last year at Coverity; two of our checkers are specifically designed to find situations where you used the wrong kind of equality. But this article is long enough already and I’ve answered your question (I hope!), so I’ll discuss the specific cases that our checkers look for in another posting.
I’ve decided that if I ever make a language, there are only going to be 2 numeric types, Integer and Float. And they’ll be arbitrary precision so they’ll grow to be as large as needed.
A terrible decision, but I probably would be bad at designing a language.
I see two problems with that approach. First, if you have arbitrary-precision floats then you already have arbitrary-precision integers. Why do you want to add to the type system the restriction that the number must be integral, but not other restrictions, like must be positive, must be odd, must be prime? Second, suppose you have an arbitrary-precision float in, say, base 2. Or base 10. Whatever. You still can't represent 1/3 such that (1 / 3) * 3 = 1 in a finite amount of memory.
What you really need is arbitrary-precision *rationals*. But that just pushes the problem off a little ways; with arbitrary-precision rationals you don’t have the property that sqrt(2) * sqrt(2) = 2, and you don’t have any number such that sqrt(-1) is meaningful.
Long story short: math is really hard! You really need something with the sophistication of Waterloo Maple or Mathematica to give you a really solid representation of mathematical concepts.
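For what it's worth, here is a minimal sketch of an arbitrary-precision rational in C#, built on `System.Numerics.BigInteger`; the `Rational` type is hypothetical, not part of .NET. It recovers (1/3) * 3 = 1, but as noted above it still cannot give you sqrt(2) or sqrt(-1):

```csharp
using System;
using System.Numerics;

// Hypothetical arbitrary-precision rational kept in lowest terms.
// Sign normalization and zero denominators are ignored to keep the sketch short.
readonly struct Rational
{
    public readonly BigInteger Num, Den;
    public Rational(BigInteger num, BigInteger den)
    {
        BigInteger g = BigInteger.GreatestCommonDivisor(num, den);
        Num = num / g;
        Den = den / g;
    }
    public static Rational operator *(Rational a, Rational b)
        => new Rational(a.Num * b.Num, a.Den * b.Den);
    public override string ToString() => $"{Num}/{Den}";
}

class Demo
{
    static void Main()
    {
        var oneThird = new Rational(1, 3);
        var three = new Rational(3, 1);
        Console.WriteLine(oneThird * three); // 1/1 -- exactly one, no rounding
    }
}
```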
Hmmm… perhaps, to the end user, I could just present a Numeric class, and then internally it could handle integers and floats differently, promoting to floats if the operation requires it. And as for representing 1/3, perhaps the language could be smart enough to know if the number repeats and store it as a rational (for base 2, that should be any divisor with a factor other than 2). And then promote rationals to imaginary if I take the square root of a negative number…
Or I could continue to use R and the Rmpfr package, since that usually fits my needs for statistics and math heavy applications.
This is basically due to http://blogs.msdn.com/b/ericlippert/archive/2007/05/14/why-are-overloaded-operators-always-static-in-c.aspx (i.e. static methods for operators and no dynamic resolution).
I still disagree with that decision and think that getting rid of Equals altogether and making operators dynamic would have been nicer, i.e. would lead to fewer surprises for operators. But I'm the first to admit that I'm sure I'm overlooking lots of possible problems/edge cases and don't have a great solution for others (`a` being null for “a + b”; extension methods could help with that though?).
Eh, I would actually rather do away with all operator overloading by client code (including ==). I think that it’s only useful for numerical entities, and should only be implemented on numerical entities. And this hypothetical language would include in the base language all the overloaded operators for any mathematical operation (such as dividing an imaginary by an irrational). Which then wouldn’t be a general purpose language like C#. Which really is a lot of effort to solve the problem of “objInt1 == objShort” being false. And not being a general purpose language introduces a lot of other problems. I would never write a website in a math heavy language like R.
Many problems stem from the interaction between implicit conversions and overloading. Conceptually, it seems that there should be a way to distinguish between implicit conversions that are unconditionally better than anything else, and those which might in some cases not be as good as other (possibly explicit) conversions. If an `Int32` happens to hold a value from -32768 to 32767, conversion to `Int16` would be better than conversion to `Object`. The former conversion can't be implicit because it's not guaranteed to succeed, but since the latter conversion can't be guaranteed to be the best, a compiler diagnostic would be helpful.
I wonder if it would be possible to tally up how often code in a certain codebase intentionally does any of the following:
– Implicitly converts a value type to `Object` in contexts where an explicit conversion to some other type could have satisfied a different overload.
– Performs a possibly-imprecise implicit conversion of an integer to a floating-point type when invoking a function or operator which–had other arguments or operands been of different types–could have accepted an integer instead (e.g. someInt==someFloat or someLong==someDouble; see the demo after this list)
– Converts Single to Double when invoking a function or operator which–had other arguments or operands been of different types–could have accepted a Single. Although Double is more precise than Single, mixed operations on Single and Double are often indicative of mistakes (e.g. given `const float ScaleFactor = 0.1f; double mySize;`, is the intention of `mySize *= ScaleFactor;` really to multiply by 13421773/134217728?)
I would regard all three of those operations as being dubious, and think that when any of them is necessary adding an explicit cast to state programmer intention would make the code clearer. Implicit conversions are particularly insidious with equality-testing methods, but their role in overload selection can be dubious in other cases as well. If a code checker were to require the use of explicit casts in the scenarios above, I wonder how many of the squawks would be for code which should be considered just fine, how many would be for code which would work correctly but which a reviewer should reject as unclear, and how many would be for code which is broken.
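Here is a small demo of the second case in the list above; it uses only the built-in conversions, and the variable names are mine:

```csharp
int someInt = 16777217;      // 2^24 + 1: not exactly representable as a float
float someFloat = 16777216f; // 2^24
// someInt is implicitly converted to float and rounds to 16777216f,
// so two different numbers compare as equal:
Console.WriteLine(someInt == someFloat); // true!
```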
If there’s anything here I’d point to as the underlying problem, it’s implicit conversions combined with the fact that equality is defined on Object at all.
All of the scenarios that evaluate to false (and maybe a few that evaluate to true) should, to my mind, be compiler errors instead. Or at least require the programmer to be very explicit about the conversions.
Compile-time overload resolution applied to the most derived type of each expression and implicitly converting from short to int both seem reasonable. The rest is all pretty dodgy.
There’s nothing wrong with having an ability to test arbitrary things for equivalence and equality (any kind of thing can simply report itself as non-equivalent and unequal to any thing of any unrecognized type). The problems I’ve seen stem from:
-1- The fact that compilers may implicitly convert comparison operands to things which are not in the same equivalence class.
-2- The use of the same name to describe the general-purpose method for equivalence-testing, and type-specific methods used for testing type-specific equality. The value 1.00m may represent the same number as 1.0m, but that doesn't imply that it's equivalent (a decimal retains its scale, so the two print differently).
> There’s nothing wrong with having an ability to test arbitrary things for equivalence and equality (any kind of thing can simply report itself as non-equivalent and unequal to any thing of any unrecognized type).
That is the logic behind the .NET Framework’s early design, but I don’t think it is actually sound. By allowing anything to be compared to anything, you tie the compilers hands from making certain inferences.
For instance, in theory the compiler could pick up that you are calling short.Equals(int) and convert the left side to an int implicitly. Unfortunately, because everything (specifically object) has a concept of equality, we cannot make that assumption, since converting the left operand is “farther” than using a base method instead.
Not to mention it makes sense to force people to opt into complex operations (ToString and Equals seem easy, but in reality they are quite complex). The idea being that in order to even use, say, Equals, you have to implement it; this avoids silly situations like a class without an overloaded Equals being used in a way that assumes its Equals works correctly.
I completely agree with your note about equivalence being hard to define. Do you mean the same object, or a close-enough object?
If `Equals(Object)` were named `EquivalentTo` (but other overloads were called `Equals`), then the only lines from the above code that would compile with `Equals` would be the ones that would yield `true`. Substituting `EquivalentTo`, any examples which compared different types would yield `false`, but such behavior would be consistent and not “surprising”.
As for equivalence in general being difficult, I think that’s a function of the type system’s expressiveness or lack thereof. Object references should be considered equivalent if they will always encapsulate the same state, but a reference to a mutable object like an `Int32[]`, in addition to encapsulating aspects of its state that will never change (e.g. `Length`) may also encapsulate aspects that might change, identity, both, or neither. Note that even though `Int32[]` is a mutable type, many references will only encapsulate aspects that will never change, because they’ll never be exposed to things that could change them.
If EqualState(Object) were defined as meaning that the object encapsulates the same state as the supplied reference, and EquivalentTo(Object) as meaning that the object is guaranteed to always encapsulate the same state as the supplied reference, and if reference-type storage locations could be classified according to the above usages, then equivalence and state-equality could be defined for 99% of data-holding types:
– References which encapsulate identity are equal and equivalent if they identify the same object; unequal and non-equivalent if they identify different objects
– References which encapsulate mutable aspects of state but not identity are equal and equivalent if they identify equivalent objects, equal but not equivalent if they identify equal objects that are not equivalent, and unequal and non-equivalent if they identify unequal objects (which will also be non-equivalent)
– References which only encapsulate immutable aspects of state are equal and equivalent if they identify equal objects, and unequal and non-equivalent if they identify unequal objects (which will also be non-equivalent)
An object which encapsulates multiple values and references can only be equivalent to another if all observable aspects of its state are equivalent and will never change, and equal to each other if all observable aspects of its state are equal (though they might change). Note that for a collection type to know how to judge equivalence with another collection, it must know into which category its contents fall.
Not all types would be able to judge equivalence using the above rules. Object graphs which contain cycles, for example, or objects which encapsulate part of their state in other objects to which they do not contain references, could not be evaluated in these ways (though in general, a fallback of simply saying two references to such objects are equivalent only if they identify the same object would be correct). Nonetheless, I would guess that the vast majority of types that are used to encapsulate data could benefit from being able to auto-generate equality methods based upon them.
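A minimal sketch of the hypothetical API described above; neither member exists in the actual .NET Framework, and the interface name is mine:

```csharp
// Hypothetical members, per the comment above (not real .NET APIs):
interface IStateComparable
{
    bool EqualState(object other);   // encapsulates the same state right now
    bool EquivalentTo(object other); // guaranteed to always encapsulate the same state
}
```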
Why does the object `==` operator do a ReferenceEquals instead of Equals? Wouldn't it be more intuitive (and avoid a few bugs) to use Equals instead?
A reference-equality test is faster than a virtual call to Equals, and a general language philosophy is that operators should be fast. Until Java added auto-boxing and auto-unboxing, the == operator could only be used to compare primitives to other primitives, or references to other references; primitives were compared by value, always, and references by reference, always. Such dual usage was generally non-problematic in the absence of overloading (though auto-unboxing muddies things somewhat); what makes it problematical is that the operator can be overloaded for reference types.
Having an operator to test reference equality is useful; the problematic aspect of == is that the same token is used for both the overloadable equality-test operator and the reference-equality test operator.
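To see the two operators hiding behind the one token, compare strings: `string`'s overloaded `==` compares values, while casting both operands to `object` selects the built-in reference-equality operator. A small demo (variable names are mine):

```csharp
string a = "hello";
string b = string.Concat("he", "llo"); // same characters, distinct string object
Console.WriteLine(a == b);                 // true: string's == compares values
Console.WriteLine((object)a == (object)b); // false: object's == compares references
```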
I think perhaps the problem of equality in C# stems from `Object.Equals`.
We like to think of `a.Equals(b)` to be the same as `a == b`. I know that it isn’t, but when you read the expression aloud, they’re the same. What we mean is “are these two things equal” and the assumption is that if `a == b` is true, then `b == a` should likewise be true.
As soon as you phrase the expression as `a.Equals(b)` you’re asking object a whether *it* thinks it is equal to b, which is potentially different than asking whether b thinks it is equal to a. This is a natural problem of expressing an equality check as a non-static member of a type – the instance decides what equality means relative to itself. And since `Equals` is defined on `Object`, it means that unless you’re aware of what the type of the left-hand instance really is, you don’t actually know what kind of equality check you’re going to get.
In the case of operators, the way C# is designed gives the compiler a lot more freedom to try to find a matching operator by performing implicit casts such that it will typically do the right thing and give the most-correct result.
I suppose I would prefer that Object.Equals (and GetHashCode by extension) were not defined that way and instead lived on IEqualityComparer only. We would use operators for “absolute” equality checks (where there's no ambiguity about the equality) and specific equality comparers for special cases. I would have no objection to types defining their own static Equals methods (much as they do now) as a useful convention either. The types doing the comparisons would always be clear this way, and whenever we wanted to do some kind of dynamic equality comparison, we would have to think a little about exactly how we'd like to make that comparison.
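For what it's worth, part of that design already exists in the framework: `EqualityComparer<T>.Default` (in `System.Collections.Generic`) gives you a comparer whose two operands are statically forced to the same `T`, which sidesteps the asymmetry in the original question. A small illustration, with variable names echoing the article:

```csharp
var comparer = EqualityComparer<int>.Default;
int myInt = 1;
short myShort = 1;
// Both arguments must be int, so the short is widened at the call site:
Console.WriteLine(comparer.Equals(myInt, myShort)); // true
```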
While it would be useful to have a means of distinguishing types for which X.Equals(Y) would imply Object.ReferenceEquals(X,Y), there’s no conceptual problem with being able to ask any object “Does this reference identify something equivalent to you”? If the object is asked about anything other than itself, and does not recognize it as being equivalent, the lack of recognition would–in and of itself–be sufficient to answer the question.
As for whether equivalence-testing would be better as an interface, I would suggest that when some subtypes of a given base type have a useful ability, when code that wants to use the ability where present must also work with things that lack it, and when there's a sensible default behavior for types which lack the ability, it's better for the base type to include the methods associated with that ability than to segregate them into an interface which is not implemented by the base type. Since equivalence testing, and the generation of an equivalence-related hash, both meet all those criteria, I would posit that they belong in Object.
At the Coverity blog: The RSS feed links are not right.
Thanks, I’ll let the blog maintainers know.