Bottom ten list

Hey everyone, I am finally back from my many travels this summer and looking forward to doing some blogging this autumn. I’ll post some vacation photos when I have them sorted out. Until then, here’s an article that the nice people at InformIT asked me to write: what are my ten least favourite C# features?

I’m sure I missed some of your least favourites; I’d love to know what your pet peeves are in the comments here or on the article itself.

54 thoughts on “Bottom ten list”

  1. A feature I really dislike that you didn’t mention is the switch statement.

    switch (x)
    {
        case 4:
            int y = 8;
            break; // Eww, why are we using a control flow statement to delimit a chunk of code?

        case 9:
            int y = -2; // Error: y is already declared in this scope because the entire switch statement, lord knows why, is one scope.
            return; // Can use anything as long as the endpoint is unreachable
    }

    At least it doesn’t have implicit fallthrough, but I’d much rather have seen something properly scoped in a C#-style syntax, like

    switch (x)
    {
        case 4
        {
            int y = 8;
        }
    }

    Or a more expression-like approach

    switch (x)
    {
        4 => { stuff }
    }

    I’m sure people can and have come up with much nicer things in various languages.

    • +1. This is an even better example of where the C# 1.0 team unnecessarily followed the conventions established by previous C-like languages. They at least got rid of accidental fall-through errors, but only by making the code unnecessarily verbose.

  2. You forgot `GetHashCode` in #9. That being said, it should have been part of an `IHashable` interface in the first place.

    I’m under the impression many of the proposed changes you list (especially in the dishonorable mentions) would make C# much more like VB, perhaps to the point of blurring the line between the two languages. I must say I *like* the compactness of the C syntax; I wouldn’t want to write VB code (I was actually forced to do that for some time, and it wasn’t a pleasant experience for someone coming from a C++ background).

    • BASIC and the languages that spawned from it were meant to be simple and explicit. So any attempt to make a language simpler is going to head in that direction. Compare that to C, which was designed for people looking to build operating systems.

      Of course VB and C# have very different target audiences so just making C# into VB wouldn’t work.

      • Yeah, except that the VB language is much bigger than the C# language (more keywords, more syntax variations, etc.). Also because of its origins (very simple Dartmouth Basic, then VB, then .NET), it has a lot of oddball inconsistencies (declaring and initializing arrays, for example). I find I’m often surprised by the VB syntax someone else writes (and I’ve been around VB a long time, though I’ve never really been an everyday VB programmer).
        C-derived languages can be too terse and too symbol-laden for “simple programmers”, but the C# language really is simpler than VB.

  3. Nitpick: implementing CompareTo is not really enough if you are implementing the operators for something that involves floating point values, e.g. a pair of float and int – assuming you want IEEE NaN handling.

    It feels that in general equality checks are required way more often than relative comparisons, so it’s a bigger problem to have redundant Equals interfaces than it is to have redundant < and >= operators.

    I guess you could imagine only 4 required overloads (==, !=, <, and <=), deriving > and >= from them by swapping arguments.

    • To expand on the last sentence: the two derived operators work by swapping arguments, which works for floating point as well; that slightly reduces the implementation effort but makes the interface weird and non-symmetrical.

      I wonder if there’s a nice solution that works for floats… Maybe define a CompareTo interface that would return a “Less / Equal / Greater / Unordered” enum?
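
      Something like this, perhaps (a rough sketch; the enum, interface, and wrapper names are all invented for illustration):

      enum PartialOrdering { Less, Equal, Greater, Unordered }

      interface IPartiallyComparable<T>
      {
          PartialOrdering PartialCompareTo(T other);
      }

      // For a double wrapper, NaN would compare as Unordered to
      // everything, itself included.
      struct OrderedDouble : IPartiallyComparable<OrderedDouble>
      {
          public readonly double Value;
          public OrderedDouble(double value) { Value = value; }

          public PartialOrdering PartialCompareTo(OrderedDouble other)
          {
              if (double.IsNaN(Value) || double.IsNaN(other.Value))
                  return PartialOrdering.Unordered;
              if (Value < other.Value) return PartialOrdering.Less;
              if (Value > other.Value) return PartialOrdering.Greater;
              return PartialOrdering.Equal;
          }
      }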

      • While it’s possible to write a sort method for a consistent partial ordering, it’s not possible to write an efficient one in the general case [if the partial ordering almost never returns “unordered”, a sort could run almost as fast as for a fully-ordered list, but if e.g. one had a million-item list containing only two non-NaN values, there would be no way to rank those two non-NaN values without performing 500 billion comparisons]. Can you think of any practical use for a `CompareTo` method which does not define a full transitive ordering?

  4. #1:

    Some form of variance is needed unless you want to copy potentially-huge arrays every time you pass them. Rust doesn’t have *that kind* of variance (for good reasons) and you need either lots of duplicate code (making everything generic and creating shims) or C-style cast abuse to get around it.

    • Of course, variance should have been limited to immutable things in the first place, but C# 1.0 didn’t have immutable generic collections.

  5. Pingback: Dew Drop – August 19, 2015 (#2072) | Morning Dew

  6. №1
    CLR rather than C#: a struct’s default value and equality+hashcode.
    Sometimes it is simply impossible to make a default value valid without additional data. You cannot define a default constructor, and you cannot ‘remove’ it either.

    (Yes, yes, I know: http://stackoverflow.com/questions/333829/why-cant-i-define-a-default-constructor-for-a-struct-in-net)

    №2
    An integer-to-enum conversion can produce a value which is not defined in the enum, but no error is thrown at run time; a ‘checked’ block has no effect. (A sketch follows this list.)

    №3
    Default method parameter values are simple constants baked in at the call site, so if the defaults are changed, client code must be recompiled.

    №4
    Anonymous types do not have a default constructor.
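
    To illustrate №2 concretely (a minimal example; the Colour enum here is just for show):

    using System;

    enum Colour { Black = 0, White = 1 }

    static class Program
    {
        static void Main()
        {
            checked
            {
                Colour c = (Colour)42;   // succeeds silently; 'checked' changes nothing
                Console.WriteLine(c);    // prints "42"
                Console.WriteLine(Enum.IsDefined(typeof(Colour), c)); // False
            }
        }
    }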

  7. As someone who writes a lot of fairly low-level code in C#, I disagree with some of the things you say there. I suppose I could live without ++, but having to change all my “byte” and “int” variables into special non-integer bit-bucket types sounds like a step too far.

    If “int8” and “int32”, or whatever such “bit bucket” types might be called, behave the same as integers, it might work. But if conversions to/from int have to be explicit, it might be taking this too far. And if they are implicit, then the benefit is questionable.

    I’ve just scanned my local checkouts and the left-shift operator occurs in a whopping 1148 files, while right-shift occurs in 2418 files (half of which are probably from generic types, but still). That’s a LOT of C# code. An arithmetic coding reader. In-memory bitmap manipulations. Rsync checksum. Xorshift pseudo-random generator. Fast 2D array indexing. An LZW codec. A custom hash table optimized for a specific scenario. You name it, tons of code require these, and I don’t really see a special type working for these. The bit shift is an important operation to have, just because it’s naturally super fast.

    Disappointed that “byte xor byte gives you int” is not on this list, but I suppose it’s consistent with your take that people don’t use C# for this stuff. That’s just not true. I might agree that most code is business logic code, but far from all of it.
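
    For anyone who hasn’t run into it, the complaint in a nutshell:

    byte a = 0x0F, b = 0xF0;
    // byte c = a ^ b;         // compile error: '^' promotes both operands to int
    byte c = (byte)(a ^ b);    // a cast is required; c == 0xFF
    int  d = a ^ b;            // the int result is what the language gives you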

  8. My two biggest pet peeves in C# are enums and switch statements.

    Enums – partially for the reason you mention – they conflate bags of combinable options with a set of discrete options (which are kind of opposite things). Also, they are extremely limited compared to the enums of Java (which can have methods, represent strings, etc.). What is even more stunning, though, is that the TypeScript design team copied C#’s enums almost verbatim, with all of their associated problems. This despite heavy lobbying from the community to improve on the design. Now TypeScript has a similarly broken enum feature that doesn’t look like anything else in JavaScript.

    Switch statements – it’s silly to have to use break to delimit clauses when braces are used everywhere else in the language. It’s also a tragedy because it is now greatly affecting the language design around pattern matching.

  9. I’m glad you mentioned this:

    ===========
    What I find worse about these operators is my inability to remember which statement accurately describes “x++”:

    The operator comes after the operand, so the result is the value it has after the increment.
    The operand comes before the operator, so the result is the value it had before the increment.

    Both mnemonics make perfect sense—and they contradict each other.
    ===========

    For some reason it makes me feel better that someone smarter than I am has run into this same issue.

    • I’ve always thought that if, to understand a piece of code, one needs to know the difference between i++ and ++i, then the code is broken. I have never run into a piece of code where using the result of i++ or ++i made sense and should not instead have been written as two separate statements.

      • Many expressions that use the pre- and postfix increment/decrement operators use them only for the side-effect, completely ignoring the value ‘returned’ from the expression. In those cases the distinction between pre and post doesn’t really matter (except maybe for the sequencing of the side-effect, which is another can of worms in C and C++ – less so in C#, I think).

        However, if the value of the expression is used, such as in a loop condition, or the idiomatic “copy an element from one array to another, then move to next”: *dst++ = *src++, then the distinction between pre and post operators is critical. The two operators produce different results, so knowing the difference is required.

        And keep in mind that you will need to understand this whether or not you want to write it, because you will need to deal with code written by others. I can still recall dealing with a small bug related to code that used: *dst++ = *--src;
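
        For the simple C# case, the difference in a nutshell:

        int i = 5;
        int a = i++;   // a == 5, i == 6: the result is the value before the increment
        int j = 5;
        int b = ++j;   // b == 6, j == 6: the result is the value after the increment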

  10. Like a few other commenters, I think there are other things that are probably worse than what was listed (nullability by default, mutable value types and switch statements are my main complaints), but perhaps that is because this article is written more from the point of view of a language designer than a language user?

  11. Worst feature of C#: C inheritance…
    Really, I hate it. BASIC inheritance is bad for being way too verbose, but C inheritance is dumb: braces are in an inconvenient position on the keyboard; it forces you to choose between an indentation scheme where either the opening brace is far from the highly visible start of the line, or it takes up a line for nothing; and statements end with semicolons even though 99.999% of lines are single statements and the other 0.001% are unreadable. Why not just use the semicolon (or any other easy-to-type but more visible character) to end blocks, and line breaks to end statements?

  12. #region and #endregion are completely horrible. The ability to hide code is terrible. If you are really trying to hide thousands of lines of code, put it into another method or file.

    • If a piece of code containing a loop is used only once, inside another loop, and some corner cases should be handled outside the outer loop while others belong within the inner loop, being able to see the code in context can make it visually obvious whether a corner case is handled redundantly or accidentally missed.

      The #region tag is also useful in cases where a class may end up with a huge amount of boilerplate code to accomplish things which it should have been possible to express more compactly. Is it really more helpful to see

      IEnumerator IEnumerable.GetEnumerator()
      {
          return GetEnumerator();
      }

      than a collapsed

      #region Usual IEnumerable.GetEnumerator implementation.

      The #region tag, properly used, can make it easier to view what’s going on in different levels of detail. It can be abused, but the same is true of any tool. IMHO, a compiler should enforce that a #region tag which starts in any scope must end in the same scope, but beyond that I see no reason to dislike the directives.

      • You’re saying it’s preferable to hide code that’s called once rather than putting it into its own method? We might have to disagree on that one 🙂

        I haven’t seen anything good come out of #regions, just problems. It’s particularly an issue in code that is maintained by many different developers. #region is not a language feature, it’s an IDE feature.

        • > Is it more useful to see boilerplate code or a summary?
          It’s more useful to see the one-line summary. Until the day some co-worker decides to add some code to one of those boilerplate methods. And on that day, or week, depending on how long it took you to find the bug, you swear never, ever, never, ever to use #region ever again.

          Just sayin’.

          • I’m not sure how that situation would be any worse than your coworker adding logic to a method like “bool OgreSeesPlayer” which also checks whether the Ogre has fallen in water and triggers a “swim” action if so. If when inspecting code you start with all regions expanded and only contract them once you’ve confirmed that they don’t contain anything unexpected, I don’t see why they should be a particular problem. If you don’t, as a matter of habit, expand regions until you’ve examined their contents, the fix for that would be to take up that habit.

  13. For once, I don’t actually have any problem with C#. None! It successfully combines the power of C++, the cross-platform compatibility of Java (now, thanks to Mono/Xamarin), and the visual abilities of Delphi. And there’s nothing like Roslyn in any other language I’ve heard of (except Forth, perhaps — and TCL is also extensible).
    I remember the verbal battles about the keyword dynamic being a crime against type-safe humanity, or how the extension methods mar the purity of OO… Haters gonna hate. If you don’t like a feature, don’t use it. As simple as that. Oh yes, you might also say ‘thank you’ to the people who have made so many features work that the users now can choose their most (or least) favourite ones.

    • I’m glad you like C#! But saying “if you don’t like it, don’t use it” presumes that the only code you work on is your own; most of us work on code we inherited from large teams working on previous versions. The argument I heard most often against dynamic was “this will make it easy for bad programmers to write brittle code that I’m going to have to fix in the future”.

  14. One additional comment on type first being a mistake, from the perspective of somebody currently designing their first statically-typed language: it’s hard to overstate how much type-after-declaration simplifies the design of the parser. Putting the type first means your parser has to either know which identifiers are valid type names in advance, which requires a feedback loop that makes the design horrible and reduces potential for concurrency, or it has to perform lookahead as far as the end of the declaration to be sure what it’s looking at. Either of these will make the parser slower. I’m trying hard to design my language to not require any look ahead or feedback loops to process, so that the parser can be as fast as possible. This makes me wonder how much faster C#’s compiler would have been if it used a type-after declaration syntax.

    • C# was designed so that the parser could be pretty quick, but yes, the fact that types can come first on many kinds of things does make the parser’s job harder. Technically, C# requires arbitrarily deep look-ahead, but the cases where it has to look far ahead are not usually encountered in realistic code.
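
      The classic illustration of that look-ahead is the grammar ambiguity around generics; the parser decides by peeking at the token that follows the candidate closing “>”:

      F(G<A, B>(7));    // one argument: a call to the generic method G<A, B>
      F(G < A, B > 7);  // two arguments: the comparisons (G < A) and (B > 7)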

  15. My pet peeve is the loss of C++-style destructors: being able to just declare a local variable and have the compiler insert a destructor call when it goes out of scope, without having to mess about with using or finally.

    • It sounds like you want to be able to declare that some types get automatic destruction when used as local variables.

      What happens when your local variable is actually inside an iterator method? To have deterministic destruction of your local, you’d also have to have deterministic destruction of your iterator. Of course, there are not just iterators, but lambdas and async blocks as well where local variables aren’t really local. Needless to say, these types aren’t going to be very useful unless you can return one of these objects from your function or pass one as a parameter to another function.

      And this is how you come up with automatic reference counting, copy constructors, move constructors, and all the other mess that you have to know about if you want to write idiomatic C++.

      You can avoid all that mess by simply telling the compiler which variables to destruct when they go out of scope.

      The C# team actually implemented this and decided it was not a feasible language feature. See https://github.com/dotnet/roslyn/issues/161 for a discussion.

  16. Pingback: Nature photography | Fabulous adventures in coding

  17. Bitwise operators and their silly preference for signed integers (be they 32- or 64-bit). It’s far more common to see bit fiddling that prefers to operate on an unsigned integer than bit fiddling on a signed integer.
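
    A small example of the sting; the shift operator’s semantics change with signedness:

    int  s = -2;                 // bit pattern 0xFFFFFFFE
    uint u = 0xFFFFFFFEu;        // the same bit pattern, unsigned

    Console.WriteLine(s >> 1);   // -1: arithmetic shift drags the sign bit along
    Console.WriteLine(u >> 1);   // 2147483647: logical shift brings in a zero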

  18. The problem with equality (#9) is that there are two kinds of equality which would be applicable to any type, and the “equals” method is sometimes used to implement one and sometimes the other. Any object can meaningfully define two equivalence relations with the questions: “Does reference X identify an object which is and will forever be in the same equivalence class as yourself?” and “Does reference X identify an object which will be in the same equivalence class as yourself unless or until you or it is modified by someone who has a reference to X or yourself?” Object instances that are going to be mutated should only be considered equivalent if they satisfy the first kind of equality, but instances that are never going to be mutated should be considered equivalent if they satisfy the second, even if they are of a mutable type. Generally, the holder of an instance will know if it’s ever going to be exposed to code that might mutate it, but the instance itself will have no way of knowing that.

    With regard to #5 (lambdas), I’m unconvinced of the value of allowing the argument types to be omitted. Return-type overloading was ruled out on the basis that it could make compilation more expensive, but as you have shown elsewhere the ability to omit argument types makes compilation NP-complete, and also increases sensitivity to the brittle-base-class problem.

    With regard to #6, if I were designing a C-like language, I would have expressions of the form `(int & int_constant)` yield a type which could be used as an `int` or implicitly cast to `bool`, and would also allow the syntax `if !(expression)`.

    With regard to #2, the best thing C# could have done with finalizers was completely ignore them. Let a class which needs a finalizer simply declare `protected override void Finalize() {…}`. That would have been easier than what the C# team actually did, and been more useful.

    With regard to #1, I would have liked to see some more kinds of array reference, including read-only and permutable (read-only, but with the inclusion of a methods to swap, copy, or roll elements or ranges thereof within the array).

    A few things I’ve wished for, not on your list:

    1. Either a means of letting a derived class set its own fields using constructor parameters prior to chaining the base-class constructor, or a virtual method in Object which .NET would call after the outermost constructor is completed, but before execution returns to the caller [perhaps change Finalize to ManageLifetime, and have a parameter indicate why the Framework was invoking it]. The latter would be the cleanest approach, but would require a feature not in the .NET Framework. If objects could refrain from exposing any references to themselves or acquiring any resources until ManageLifetime(ObjectLifetime.Creation) is called, that would avoid the presently-unavoidable need to have objects support virtual method invocation before derived-class construction is complete.

    2. Rules for widening and narrowing conversions that would avoid the need for irksome and unhelpful typecasts when calling methods that accept “float” parameters, but warn in cases where a widening conversion may likely fail to yield an intended result (e.g. “double d = float1/float2; long l = i1+i2;” [warnings would also be given for “double d = (double)(float1/float2); long l = (long)(i1+i2);”; the warning-free versions would be “double d = (float)(float1/float2); long l = (int)(i1+i2);”]).

    3. Have events be regarded as a distinct type by the compiler, such that “eventName(params);” would be interpreted as “{ var temp = eventName; if (temp != null) temp(params); }” (a sketch follows this list). Given that “eventName += someDelegate;” and “eventName -= someDelegate;” are already handled specially, doing likewise for the invocation would seem fairly natural.

    4. I don’t know whether this limitation stems from .NET or C#, but it should be possible for a class to allow a constructor to be used to construct instances of that class without allowing the constructor to be used in the construction of derived instances. It should also be possible for a class to override and shadow the same base-class member (e.g. override “Car BuildCar() { return BuildToyotaCar(); }” and shadow it with “ToyotaCar BuildCar() { return BuildToyotaCar(); }”).

    5. There should be an attribute which would identify whether particular struct methods or properties modify the underlying instance. Methods or properties which modify the underlying instance should be disallowed on read-only values, but property setters which do not modify the underlying instance (typically used with a structure that wraps an object reference and chains property setters to that reference) should be allowable on read-only structures.
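
    A sketch of the expansion point 3 asks for, written out by hand (the event name here is invented):

    using System;

    class Publisher
    {
        public event EventHandler SomethingHappened;

        void Raise()
        {
            // Copy to a local first, so that a subscriber detaching on
            // another thread cannot null the field between the check
            // and the invocation.
            var temp = SomethingHappened;
            if (temp != null)
                temp(this, EventArgs.Empty);
        }
    }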

  19. Most of C#’s problems stem from trying to be familiar to Java and C++ developers and inheriting the flaws of those languages instead of doing the right thing. Another part was the lack of generics in C# 1.

    My top pet peeves:

    1) Everything defaults to being null and mutable, which are the wrong defaults and cause most problems.
    2) It is very difficult to create value objects (i.e. not value types – objects that represent values): IEquatable, overrides of Equals, GetHashCode, operators, all of which can go wrong; for example, it’s easy to mistakenly create mutually recursive versions of the equality operators (see the sketch after this list). This leads to “primitive obsession” (using string, int, float everywhere instead of domain-related types), causing endless confusion and bugs.
    3) Using void as the return type of methods that don’t return anything, creating the Action/Func split and resulting duplication of code everywhere.
    4) This is more of a CLR design issue, but C# has no good solution for resource management and is actually worse than C++ in this respect. RAII actually worked, but finalizers are fundamentally unreliable, and the Dispose pattern requires programmer discipline, resulting in a plethora of bugs everywhere.

    F# fixes most of this except 4); I’m not sure we’ll ever have a good solution for resource management on the CLR.
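
    To make point 2 concrete: roughly the ceremony a single int-wrapping value object demands when written by hand (a sketch; the type name is invented):

    using System;

    public struct CustomerId : IEquatable<CustomerId>
    {
        private readonly int value;
        public CustomerId(int value) { this.value = value; }

        public bool Equals(CustomerId other) { return value == other.value; }
        public override bool Equals(object obj) { return obj is CustomerId && Equals((CustomerId)obj); }
        public override int GetHashCode() { return value.GetHashCode(); }

        // The classic trap: writing "a == b" here instead of a.Equals(b)
        // would make the operator call itself and recurse forever.
        public static bool operator ==(CustomerId a, CustomerId b) { return a.Equals(b); }
        public static bool operator !=(CustomerId a, CustomerId b) { return !a.Equals(b); }
    }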

  20. I love C#, so this list is a bit of nitpicking. Nevertheless…

    1. Nullability of reference types
    I can’t remember the last time I didn’t have to start a method by writing a whole bunch of null checks, or the last time I didn’t have to inspect the implementation of a method just to see if I can safely pass null in. If I want null, I’d say so, just as I’d specify int?. Why are reference types different?

    2. No default constructor for value types
    To be honest I don’t understand the rationale behind this. Too many times I have wanted a struct, but had to change it to a class for no other reason than to specify a default constructor. Having all fields initialised to their defaults does not necessarily make the object valid, and we don’t want invalid objects, which is why constructors were invented in the first place.

    3. Delegate types
    I simply don’t like delegate types. I’d rather stick to Action and Func types most of the time so I don’t have to worry about converting from one delegate type to another. I know I can convert between equivalent delegate types with Delegate.CreateDelegate(), but it’s ugly, and I always prefer wrapping in another lambda, which can be suboptimal.

    4. Generic constraint to enum and interface types
    Why can’t the compiler enforce generic constraints that limit a generic type parameter to an enum or an interface? Resorting to reflection at runtime is just not very elegant.

    5. Static classes
    Ok, I understand that this is more of a CLR policy, but creating a “static class” is kind of silly. Static classes are as superfluous as they are oxymoronic.

    6. Enum type conversion
    If I have an enum, say, enum Colour { Black, White }, C# would then tell me the world is a million shades of grey (cough), because you can always convert 42 to a Colour.

    7. Readonly keyword cannot be applied to local variables/method parameters
    Readonly can only be applied to class fields. Why can’t it be applied to locals? It seems like a dead easy feature to implement.

    8. Finalizer syntax
    If finalizers are so damn hard to implement right, and a Dispose() method is usually what you want, then why assign a syntactic feature (i.e. ~MyClass(){}) to the dangerous finalizer, but not the other way round? Wouldn’t it be better if ~MyClass() were a disposer (with the compiler filling in all the dispose pattern stuff like in C++/CLI) and classes that require finalisation had to implement an IFinalisable interface? (See the sketch at the end of this list.) My point is that because the finalizers get the better-looking syntax, their use tends to be encouraged over the alternative.

    9. Everything is a System.Object
    I can’t see why this is necessary. To make C# look like Java? To make an excuse not to implement generics in the initial release? To be honest, I don’t have too much of a problem with the concept of a root type per se. I just don’t like the fact that System.Object is not abstract, the default implementation of the virtual methods (Equals, GetHashCode, ToString) is silly, and, unlike with other base classes, the compiler fills in the inheritance for you.

    That’s it. This list can be significantly longer for some other languages.
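
    For reference, the boilerplate point 8 wishes the compiler would fill in is the standard dispose pattern, roughly:

    using System;

    public class ResourceHolder : IDisposable
    {
        private bool disposed;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);  // no need for the finalizer to run once disposed
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            if (disposing)
            {
                // release managed resources here
            }
            // release unmanaged resources here
            disposed = true;
        }

        ~ResourceHolder()
        {
            Dispose(false);
        }
    }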

    • Not sure I agree with #1-#2. If a reference type T isn’t nullable and doesn’t have a default constructor, what should “T[] arr = new T[1024];” do? For that matter, if T does have a default constructor, should it be necessary to call the constructor 1024 times even in cases where the array elements will be overwritten before they are ever read?

      With regard to #9, I think there should have been a means of declaring that certain value types should not be implicitly convertible to/from object, and ValueType should have included virtual box/unbox methods (if e.g. “Int64” included a virtual unbox method, then it could have said “if the boxed thing is an Int64, unbox it; otherwise, if it implements IConvertible, call ToInt64; otherwise throw an exception”). A type like “String” could then have been a value type which encapsulates a reference to “StringObject”, but behaves like an empty string when the reference is null; boxing would yield the encapsulated “StringObject” if non-null, or StringObject.Empty if null.

      Going one step further, I would have liked to see a means of declaring that fields of certain value types should not be copyable (passable only as `ref` or `out` parameters), and local variables of that type could only be copied in circumstances that would be guaranteed to destroy the original. A variable of such a type that creates a new object, stores a reference internally, and never exposes it anywhere, would then be able to guarantee that it holds the only reference anywhere in the universe to that object, and thus changes made to that object would not be visible anywhere except through that variable–a very useful guarantee which could eliminate the need for “defensive copying” in many circumstances.

      • On your comments regarding #1: if type T cannot be default-constructed, what good does it do to allow you to have an array of invalid T instances? The lack of a mechanism to use an alternate constructor for array construction should not (IMHO) be an excuse for introducing default nullability for all reference types.

        And regarding your last paragraph, some languages (like Scala) allow you to declare a singleton. In C# you can more or less achieve that by means of static fields and methods.

        • How could something like List<T> work efficiently if it couldn’t have an array containing 1 valid instance and 15 invalid ones?

          It might be possible to formulate a workable set of rules that would allow an array of a non-nullable type to contain null elements provided they were never read, but there would need to be a way to move elements around without regard for whether they were null or not, and preferably also to invalidate elements. Such things are not impossible, but would probably require a more sophisticated type system than the one built into .NET [not that such a type system wouldn’t be a good thing, but I can’t see such a thing emerging until the Next Major Framework comes out].

    • I hadn’t noticed the link to Eric’s own list and thought it would be in a later post. My apologies for not reading carefully and repeating some of the things he mentioned.

  21. Two things (not covered above)
    1. The lack of an INumeric interface that the numeric types (the various integral and floating point types) would “implement”. It would simply act as a type facet that could be specified in a generic type parameter constraint, as well as a promise that the type implements IEquatable and IComparable. (A sketch follows this list.)
    2. A couple of times I’ve wanted “enum inheritance”. Say I have an “enum Color” that implements the eight colors of the rainbow. Say I then wanted an “enum WiderGamutColor” that had those eight choices and, say, another 8. The problem is that the “inheritance” is backwards from what most people think (it’s a generalization, not a specialization: every Color is a WiderGamutColor, but not vice-versa). Weird, but I suspect it would be occasionally useful. Certainly not on my top ten list (I agree with Eric that Array Covariance is number 1).
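
    A sketch of what the INumeric wish in item 1 might look like in use (hypothetical; no such interface exists in the BCL):

    using System;
    using System.Collections.Generic;

    // The imagined facet: a promise of equality and ordering for numeric types.
    interface INumeric<T> : IEquatable<T>, IComparable<T> { }

    static class Numerics
    {
        // The wished-for constraint: only "numeric" types are accepted.
        public static T Max<T>(IEnumerable<T> items) where T : INumeric<T>
        {
            T best = default(T);
            bool first = true;
            foreach (T item in items)
            {
                if (first || item.CompareTo(best) > 0) { best = item; first = false; }
            }
            if (first) throw new InvalidOperationException("empty sequence");
            return best;
        }
    }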

    • WRT your second item, it might be more useful for enums to implement interfaces than for them to inherit each other. I.e.

      interface IColorEnum {
          double Red { get; }
          double Green { get; }
          double Blue { get; }
          double Alpha { get; }
      }

      enum RainbowColor: IColorEnum { … }
      enum WiderGamutColor: IColorEnum { … }

    • Being able to have enum types include code would be useful. Inheritance in the normal sense for non-flag enumerations, however, would often need to work backward from its normal behavior. If BasicAction is an enum with values from 0 to 5, and AdvancedAction adds values 6-10, code which expects an AdvancedAction should accept a BasicAction, but probably not vice versa.

      For flags enumerations, the situation would get more complicated. If BasicOptions defines values 1, 2, and 4, AdvancedFillOptions defines those values with the same meanings but also defines 8, 16, and 32, and AdvancedStrokeOptions likewise defines 1, 2, and 4 the same but gives 8, 16, and 32 different meanings from AdvancedFillOptions, then code expecting a BasicOptions which would ignore higher-order bits should be able to accept either of the others, and code expecting one of the others should be able to accept a BasicOptions that was “actually” a BasicOptions, but if an AdvancedFillOptions is given to code expecting a BasicOptions, and it in turn is passed to code expecting an AdvancedStrokeOptions, it would be necessary that something along the way strips off the upper bits.

      Probably the best remedy for all of that would be to allow enumerated types to define implicit and explicit conversion operators, and also allow “incorporation by reference” as a syntax shortcut (so that if “BasicOptions.Quack” is 123 and “BasicOptions.Moo” is 456, an enum “MyDeluxeOptions” could with one line auto-define “Quack” as 123, “Moo” as 456, and likewise any other values from BasicOptions). Such conversion operators could allow behavior similar to inheritance when appropriate, but without causing problems when it wasn’t.

  22. Pingback: Nullable comparisons are weird | Fabulous adventures in coding

  23. Great article. But one part confused me.

    While I have never really been a fan of the ++ and -- operators in any language, I thought I, at least, understood them. You made some comments about them which made me question that, but it seemed to be a very fine distinction which I wasn’t following. You said:

    “Next, almost no one can give you a precise and accurate description of the difference between prefix and postfix forms of the operators. The most common incorrect description I hear is this: “The prefix form does the increment, assigns to storage, and then produces the value; the postfix form produces the value and then does the increment and assignment later.” Why is this description wrong? Because it implies an order of events in time that is not at all what C# actually does.”

    After reading this I thought I’d go see how they were actually defined — I found a link on Microsoft’s site to a “C# reference” (https://msdn.microsoft.com/en-us/library/6a71f45d.aspx) and this is what I found:

    “x++ – postfix increment. Returns the value of x and then updates the storage location with the value of x that is one greater (typically adds the integer 1).”

    Isn’t that, essentially, what you said was the “most common incorrect description”?

    Also, you made the following statement:

    “Finally, many people coming from a C++ background are completely surprised to discover that the way C# handles user-defined increment and decrement operators is completely different from how C++ does it. Perhaps more accurately, they’re not surprised at all—they simply write the operators incorrectly in C#, unaware of the difference. In C#, the user-defined increment and decrement operators return the value to be assigned; they don’t mutate the storage.”

    While I have never created a user-defined increment or decrement operator in C#, I would have thought that they would do whatever the user programmed them to do, including mutating storage if desired. You’re saying that’s not how they work? Can you point me to a good discussion of how they do work in case I ever find myself inclined to create one?

    Thanks, and thanks for the great article.

    • IMHO, .NET should have defined operators for `+=`, `-=`, etc. which accepted the destination as a `ref` parameter. This would have made it possible to have e.g. a `ThreadSafeDelegateHolder` where `holder1 += otherDelegate;` would yield:

      DelegateType t1, t2;
      do
      {
          t1 = target;
          t2 = (DelegateType)Delegate.Combine(t1, otherDelegate);
      } while (Interlocked.CompareExchange(ref target, t2, t1) != t1);


      There aren’t a whole lot of scenarios where such semantics are important, but there are some (like the above). The best way to achieve such semantics would be by having a `DelegateHolder` include an `AddDelegate` method, but unfortunately there’s no way to specify whether particular methods or properties of a struct should be invokable on read-only instances thereof; rather than squawking, the compiler will simply generate broken behavior in such cases.

    • First off, I thought I submitted a request years ago to have that documentation fixed. Either I forgot or someone dropped the ball. I’ll try to remember to follow up with the documentation manager. Thanks for the note.

      For a detailed explanation of how the ++ user-defined operator differs in C++ and C#, see my article here:

      http://blog.coverity.com/2013/09/24/increment-semantics/

      Basically: the designers of C# did not want to allow the *assignment* semantics to be overridden. So what is the only interesting part of the ++ operator then? The computation of the “add one” step. So that’s what the user-defined operator does: returns the “next one”. The compiler generates the fetching of the original value, the call to the “next one” operator, the storage, and the production of the correct value, all on your behalf.
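
      In code, a minimal user-defined ++ looks something like this (the struct is invented); note that the operator mutates nothing:

      struct Counter
      {
          public readonly int Value;
          public Counter(int value) { Value = value; }

          // Just compute and return the "next" value; the compiler
          // generates the fetch, the store, and the value of the
          // ++ expression on your behalf.
          public static Counter operator ++(Counter c)
          {
              return new Counter(c.Value + 1);
          }
      }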

      • While it’s very useful to have a very large family of types for which assignment is a bitwise copy, requiring that *all* types support such behavior is not such a good thing. If one holds a reference to an object, and one wishes to hold a reference to a slightly-different object, but must ensure that nobody can observe that the original has changed, there are two ways that can be accomplished:

        1. Make a near-copy of the object which is slightly different from the original, and replace the reference to the original with a reference to the copy.

        2. If no other reference to the object exists anywhere in the universe, simply change the object directly.

        In the cases where the second option is applicable, it will be semantically-identical to the first, but in many cases may be much faster.

        In general, trying to keep track of all the references that might exist to an object will be intractable if any freely-copyable references to it have ever been exposed to code that could copy them. There are many situations, however, where adding some reasonably simple language/framework semantics would make it possible to maintain as an invariant that only one reference to an object can possibly exist. The ability to maintain such invariants can allow some major performance enhancements, and can also help ensure correctness in cases where resources are required to have exactly one owner.

        What I’d like to see would be a means by which a type could specify that it should be freely copyable, that it should be non-copyable, or that it should support variables of both types, with a method that would be called any time code would need to copy a non-copyable value (the method would, unlike a normal conversion operator, receive a byref to the non-copyable value rather than a copy of it). The “everything behaves like a reference” is convenient for a framework implementer, but there are a lot of cases where it makes more sense to have things behave as values. While a reference to an immutable object will behave semantically as a value, real values are often much easier to work with efficiently.

  24. Great list and I agree with almost all of it. There are some nice additions in the comments as well.

    Eric – it seems to me that many of these could be fixed without monumental effort if it weren’t for backwards compatibility reasons (in C# and in the CLR). I know the C# team is almost religious in their view that the language needs to support old code, but don’t you think a time will come when enough of these issues have piled up and it is time to start deprecating features? If not, I think people will start abandoning C# in favor of more modern languages with fewer “flaws”.

    If you would speculate, do you think this will ever happen or will C#42 still be compatible with C#1? Have your views on this changed since you left the C# team?

    Thanks for a great blog.
