# ATBG: inconsistent equality

Today on the Coverity Development Testing Blog's continuing series Ask The Bug Guys, I take a question from reader "Jan", who wonders why different equality operators produce different results. This is a confusing and difficult aspect of C#, so check it out.

As always, if you have questions about a bug you’ve found in a C, C++, C# or Java program that you think would make a good episode of ATBG, please send your question along with a small reproducer of the problem to `TheBugGuys@Coverity.com`. We cannot promise to answer every question or solve every problem, but we’ll take a selection of the best questions that we can answer and address them on the dev testing blog every couple of weeks.

Note that we have switched the blog to use WordPress; the transition has gone mostly smoothly but there are still some problems with the style sheets and not all of the content moved over quite right. Things might look a little wonky for the next while until we get it all sorted out.

## 17 thoughts on “ATBG: inconsistent equality”

1. I’ve decided that if I ever make a language, there are only going to be 2 numeric types, Integer and Float. And they’ll be arbitrary precision so they’ll grow to be as large as needed.

A terrible decision, but I probably would be bad at designing a language.

• I see two problems with that approach. First, if you have arbitrary-precision floats then you already have arbitrary-precision integers. Why do you want to add to the type system the restriction that the number must be integral, but not other restrictions, like must be positive, must be odd, must be prime? Second, suppose you have an arbitrary-precision float in, say, base 2. Or base 10. Whatever. You still can't represent 1/3 such that (1 / 3) * 3 = 1 in a finite amount of memory.

What you really need is arbitrary-precision *rationals*. But that just pushes the problem off a little ways; with arbitrary-precision rationals you don’t have the property that sqrt(2) * sqrt(2) = 2, and you don’t have any number such that sqrt(-1) is meaningful.

Long story short: math is really hard! You really need something with the sophistication of Waterloo Maple or Mathematica to give you a really solid representation of mathematical concepts.
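The 1/3 point is easy to demonstrate concretely. Here's a minimal sketch in Java (an analogous language for this discussion, and the post covers Java too): `BigDecimal` gives arbitrary decimal precision, yet still cannot represent 1/3 exactly.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ThirdDemo {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);
        try {
            // With no rounding mode, a non-terminating expansion throws.
            one.divide(three);
        } catch (ArithmeticException e) {
            System.out.println("1/3 has no exact decimal expansion: " + e.getMessage());
        }
        // Rounded to any finite precision, (1/3) * 3 is no longer 1.
        BigDecimal third = one.divide(three, 50, RoundingMode.HALF_UP);
        System.out.println(third.multiply(three).compareTo(one) == 0); // false
    }
}
```

Only a rational type stores 1/3 exactly, and as noted above, even rationals fail for sqrt(2).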

• Hmmm… perhaps if to the end user, I just presented a Numeric class, and then internally it could handle integers and floats differently, promoting to floats if the operation requires it. And as for representing 1/3, perhaps the language could be smart enough to know when the number repeats and store it as a rational (for base 2, that should be any denominator with a prime factor other than 2). And then promote to complex numbers if I take the square root of a negative number…

Or I could continue to use R and the Rmpfr package, since that usually fits my needs for statistics and math heavy applications.

• This is basically due to http://blogs.msdn.com/b/ericlippert/archive/2007/05/14/why-are-overloaded-operators-always-static-in-c.aspx (i.e. static methods for operators and no dynamic resolution).

I still disagree with that decision and think that getting rid of Equals entirely and making operators dynamic would have been nicer, i.e. would lead to fewer surprises for operators. But I'm the first to admit that I'm sure I'm overlooking lots of possible problems and edge cases and don't have a great solution for some of them (a being null for "a + b"; extension methods could help with that, though?).

• Eh, I would actually rather do away with all operator overloading by client code (including ==). I think that it’s only useful for numerical entities, and should only be implemented on numerical entities. And this hypothetical language would include in the base language all the overloaded operators for any mathematical operation (such as dividing an imaginary by an irrational). Which then wouldn’t be a general purpose language like C#. Which really is a lot of effort to solve the problem of “objInt1 == objShort” being false. And not being a general purpose language introduces a lot of other problems. I would never write a website in a math heavy language like R.

2. Many problems stem from the interaction between implicit conversions and overloading. Conceptually, it seems that there should be a way to distinguish between implicit conversions that are unconditionally better than anything else, and those which might in some cases not be as good as other (possibly explicit) conversions. If an `Int32` happens to hold a value from -32768 to 32767, conversion to `Int16` would be better than conversion to `Object`. The former conversion can't be implicit because it's not guaranteed to succeed, but since the latter conversion can't be guaranteed to be the best, a compiler diagnostic would be helpful.

I wonder if it would be possible to tally up how often code in a certain codebase intentionally does any of the following:

– Implicitly converts a value type to `Object` in contexts where an explicit conversion to some other type could have satisfied a different overload.

– Performs a possibly-imprecise implicit conversion of an integer to a floating-point type when invoking a function or operator which–had other arguments or operands been of different types–could have accepted an integer instead (e.g. someInt==someFloat or someLong==someDouble)

– Converts Single to Double when invoking a function or operator which–had other arguments or operands been of different types–could have accepted a Single. Although Double is more precise than Single, mixed operations on Single and Double are often indicative of mistakes (e.g. given `const float scaleFactor = 0.1f; double mySize;`, is the intention of `mySize *= scaleFactor;` really to multiply by 13421773/134217728?)

I would regard all three of those operations as being dubious, and think that when any of them is necessary adding an explicit cast to state programmer intention would make the code clearer. Implicit conversions are particularly insidious with equality-testing methods, but their role in overload selection can be dubious in other cases as well. If a code checker were to require the use of explicit casts in the scenarios above, I wonder how many of the squawks would be for code which should be considered just fine, how many would be for code which would work correctly but which a reviewer should reject as unclear, and how many would be for code which is broken.
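The middle two bullets are easy to demonstrate. Here's a minimal Java sketch (Java's implicit numeric conversions mirror C#'s in these cases): a long that compares equal to a double it does not actually equal, and the exact fraction hiding inside `0.1f`.

```java
public class ImplicitConversionDemo {
    public static void main(String[] args) {
        // A long that cannot be represented exactly as a double:
        long big = 9007199254740993L;        // 2^53 + 1
        double asDouble = big;               // rounds to 2^53
        System.out.println(big == asDouble); // true: the long is converted to double first
        System.out.println(asDouble == 9007199254740992.0); // true: precision was lost

        // Widening float to double is exact, but the float was already an approximation:
        float scaleFactor = 0.1f;
        System.out.println((double) scaleFactor == 0.1); // false: 0.1f is 13421773/134217728
    }
}
```

The `big == asDouble` comparison reports `true` for two numbers that differ by one, which is exactly the kind of silent wrong answer an explicit cast would flag.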

3. If there’s anything here I’d point to as the underlying problem, it’s implicit conversions combined with the fact that equality is defined on Object at all.

All of the scenarios that evaluate to false (and maybe a few that evaluate to true) should, to my mind, be compiler errors instead. Or at least require the programmer to be very explicit about the conversions.

Compile-time overload resolution applied to the most derived type of each expression and implicitly converting from short to int both seem reasonable. The rest is all pretty dodgy.

• There’s nothing wrong with having an ability to test arbitrary things for equivalence and equality (any kind of thing can simply report itself as non-equivalent and unequal to any thing of any unrecognized type). The problems I’ve seen stem from:

-1- The fact that compilers may implicitly convert comparison operands to things which are not in the same equivalence class.

-2- The use of the same name for the general-purpose equivalence-testing method and for type-specific methods used for testing type-specific equality. The value 1.00m may represent the same number as 1.0m, but that doesn't imply that it's equivalent.
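The decimal point has a close analogue in Java's `BigDecimal`, which keeps the scale as part of the value: `compareTo` reports the same number while `equals` reports non-equivalence. A minimal sketch:

```java
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");
        System.out.println(a.compareTo(b) == 0); // true: same numeric value
        System.out.println(a.equals(b));         // false: different scales, not equivalent
    }
}
```

This is the name-collision problem in miniature: the same word, "equals", is used for two genuinely different questions.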

• > There’s nothing wrong with having an ability to test arbitrary things for equivalence and equality (any kind of thing can simply report itself as non-equivalent and unequal to any thing of any unrecognized type).

That is the logic behind the .NET Framework's early design, but I don't think it is actually sound. By allowing anything to be compared to anything, you tie the compiler's hands and prevent it from making certain inferences.

For instance, in theory the compiler could pick up that you are calling short.Equals(int) and convert the left side to an int implicitly. Unfortunately, because everything (specifically object) has a concept of equality, we cannot make that assumption, since converting the left operand is "farther" than using a base method instead.

Not to mention it makes sense to force people to opt into complex operations (ToString and Equals seem easy, but in reality they are quite complex). The idea is that in order to even use, say, Equals, you have to implement it; this avoids silly situations like a class without an overridden Equals being used in a way that assumes its Equals works correctly.

I completely agree with your note about equivalence being hard to define. Do you mean the same object, or a close-enough object?
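The "report yourself unequal to any unrecognized type" behavior is easy to observe with boxed numbers. In Java (analogous to the C# `objInt1 == objShort` case discussed above), a boxed `Integer` simply does not recognize a boxed `Short`:

```java
public class BoxedEqualityDemo {
    public static void main(String[] args) {
        Object boxedInt = 1;           // boxes to Integer
        Object boxedShort = (short) 1; // boxes to Short
        // Integer.equals only recognizes other Integers, so the Short is "unrecognized":
        System.out.println(boxedInt.equals(boxedShort)); // false
        // Comparing as numbers requires saying so explicitly:
        System.out.println(((Number) boxedInt).intValue()
                == ((Number) boxedShort).intValue());    // true
    }
}
```

The compiler accepts the first comparison without complaint, which is precisely the inference problem described above.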

• If `Equals(Object)` were named `EquivalentTo` (but other overloads were called `Equals`), then the only lines from the above code that would compile with `Equals` would be the ones that would yield `true`. Substituting `EquivalentTo`, any examples which compared different types would yield `false`, but such behavior would be consistent and not “surprising”.

As for equivalence in general being difficult, I think that’s a function of the type system’s expressiveness or lack thereof. Object references should be considered equivalent if they will always encapsulate the same state, but a reference to a mutable object like an `Int32[]`, in addition to encapsulating aspects of its state that will never change (e.g. `Length`) may also encapsulate aspects that might change, identity, both, or neither. Note that even though `Int32[]` is a mutable type, many references will only encapsulate aspects that will never change, because they’ll never be exposed to things that could change them.

If an `EqualState(Object)` method were defined as meaning that the object encapsulates the same state as the supplied reference, and `EquivalentTo(Object)` as meaning that the object is guaranteed to always encapsulate the same state as the supplied reference, and if reference-type storage locations could be classified according to the above usages, then equivalence and state-equality could be defined for 99% of data-holding types:

– References which encapsulate identity are equal and equivalent if they identify the same object; unequal and non-equivalent if they identify different objects

– References which encapsulate mutable aspects of state but not identity are equal and equivalent if they identify equivalent objects, equal but not equivalent if they identify equal objects that are not equivalent, and unequal and non-equivalent if they identify unequal objects (which will also be non-equivalent)

– References which only encapsulate immutable aspects of state are equal and equivalent if they identify equal objects, and unequal and non-equivalent if they identify unequal objects (which will also be non-equivalent)

An object which encapsulates multiple values and references can only be equivalent to another if all observable aspects of its state are equivalent and will never change, and equal to each other if all observable aspects of its state are equal (though they might change). Note that for a collection type to know how to judge equivalence with another collection, it must know into which category its contents fall.

Not all types would be able to judge equivalence using the above rules. Object graphs which contain cycles, for example, or objects which encapsulate part of their state in other objects to which they do not contain references, could not be evaluated in these ways (though in general, a fallback of simply saying two references to such objects are equivalent only if they identify the same object would be correct). Nonetheless, I would guess that the vast majority of types that are used to encapsulate data could benefit from being able to auto-generate equality methods based upon them.
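The array case mentioned above can be made concrete. A sketch in Java, where `int[]` plays the role of `Int32[]`: the inherited equality is identity, state equality must be requested explicitly, and state equality is not equivalence because the state can change later.

```java
import java.util.Arrays;

public class ArrayEqualityDemo {
    public static void main(String[] args) {
        int[] x = {1, 2, 3};
        int[] y = {1, 2, 3};
        // Arrays inherit Object.equals, which is reference identity:
        System.out.println(x.equals(y));         // false
        // State equality must be requested explicitly:
        System.out.println(Arrays.equals(x, y)); // true
        // And state equality is not equivalence: the state can diverge.
        y[0] = 99;
        System.out.println(Arrays.equals(x, y)); // false
    }
}
```

A reference that only ever observes `x.length` could safely treat the two arrays as equivalent; one that observes the elements cannot, which is the classification problem described above.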

4. Why does the object `==` operator do a ReferenceEquals instead of Equals? Wouldn't it be more intuitive (and avoid a few bugs) to use Equals instead?

• A reference-equality test is faster than a virtual call to Equals, and a general language philosophy is that operators should be fast. Until Java added auto-boxing and auto-unboxing, the == operator could only be used to compare primitives to other primitives, or references to other references; primitives were compared by value, always, and references by reference, always. Such dual usage was generally non-problematic in the absence of overloading (though auto-unboxing muddies things somewhat); what makes it problematic is that the operator can be overloaded for reference types.

Having an operator to test reference equality is useful; the problematic aspect of == is that the same token is used for both the overloadable equality-test operator and the reference-equality test operator.
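Java's side of this is easy to observe once auto-boxing is involved: `==` on boxed integers is a reference comparison, and the small-value cache makes the results especially surprising. A minimal sketch (the cache is guaranteed for at least -128..127; the out-of-range result assumes default JVM settings):

```java
public class ReferenceEqualsDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;   // small values come from a shared cache
        System.out.println(a == b);      // true: same cached object
        Integer c = 1000, d = 1000; // outside the cache: distinct boxes
        System.out.println(c == d);      // false: == on references is identity
        System.out.println(c.equals(d)); // true: value comparison
    }
}
```

The same token thus answers two different questions depending on whether a value landed in the cache, which is a good argument for spelling reference equality differently.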

5. I think perhaps the problem of equality in C# stems from `Object.Equals`.

We like to think of `a.Equals(b)` to be the same as `a == b`. I know that it isn’t, but when you read the expression aloud, they’re the same. What we mean is “are these two things equal” and the assumption is that if `a == b` is true, then `b == a` should likewise be true.

As soon as you phrase the expression as `a.Equals(b)` you’re asking object a whether *it* thinks it is equal to b, which is potentially different than asking whether b thinks it is equal to a. This is a natural problem of expressing an equality check as a non-static member of a type – the instance decides what equality means relative to itself. And since `Equals` is defined on `Object`, it means that unless you’re aware of what the type of the left-hand instance really is, you don’t actually know what kind of equality check you’re going to get.
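A classic real-world instance of this asymmetry exists in the JDK: `java.util.Date` versus its subclass `java.sql.Timestamp`. Each side decides what equality means relative to itself, and they disagree:

```java
import java.sql.Timestamp;
import java.util.Date;

public class AsymmetryDemo {
    public static void main(String[] args) {
        long millis = 1_000_000_000_000L;
        Date date = new Date(millis);
        Timestamp timestamp = new Timestamp(millis);
        // Date.equals compares millisecond values, so it recognizes the Timestamp:
        System.out.println(date.equals(timestamp)); // true
        // Timestamp.equals(Object) requires another Timestamp, so it rejects the Date:
        System.out.println(timestamp.equals(date)); // false
    }
}
```

So `a.equals(b)` and `b.equals(a)` give different answers for the same pair of values, exactly the hazard described above.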

In the case of operators, the way C# is designed gives the compiler a lot more freedom to try to find a matching operator by performing implicit casts such that it will typically do the right thing and give the most-correct result.

I suppose I would prefer that Object.Equals (and GetHashCode by extension) were not defined that way and instead lived on IEqualityComparer only. We would use operators for "absolute" equality checks (where there's no ambiguity to the equality) and specific equality comparers for special cases. I would have no objection to types defining their own static Equals methods (much as they do now) as a useful convention, either. The types doing the comparisons would always be clear this way, and whenever we wanted to do some kind of dynamic equality comparison, we would have to think a little about exactly how we'd like to make that comparison.

• While it would be useful to have a means of distinguishing types for which X.Equals(Y) would imply Object.ReferenceEquals(X,Y), there's no conceptual problem with being able to ask any object "Does this reference identify something equivalent to you?" If the object is asked about anything other than itself, and does not recognize it as being equivalent, the lack of recognition would–in and of itself–be sufficient to answer the question.

As for whether equivalence-testing would be better as an interface, I would suggest that in cases where some subtypes of a given base type will have a useful ability, code which will want to use the ability when present may also have to work with things where it’s absent, and where there’s a sensible default behavior for types which lack the ability, it’s better for the base type to include the methods associated with that ability than to segregate it into an interface which is not implemented by the base type. Since equivalence testing, and the generation of an equivalence-related hash, both meet all those criteria, I would posit that they belong in Object.

6. At the Coverity blog: The RSS feed links are not right.

• Thanks, I’ll let the blog maintainers know.