Past, present, future

Thanks to everyone who came out to the on-the-web and in-person events last week; it was an exhausting week but I had a great time. I’ll post the links to the recorded versions of the talks once I have them.

Developer Tech was kind enough to ask me to write a few words about the past, present and future of C#; this article is mostly pitched at people unfamiliar with the history of the language. If you read this blog you probably know all this already, but if you want to check it out, the article is here: http://www.developer-tech.com/news/2014/jan/30/past-present-and-future-c/

22 thoughts on “Past, present, future”

  1. Truthfully, I think the degree to which C# borrowed from Java is understated in your article.

    Apart from the obvious similarity between C# 1.0 and the Java of the time, there’s also the timing between the discontinuation of J++, the release of C#, and the Sun–Microsoft lawsuit concerning J++. Also, I understand the C# lead architect came from J++.

    Given all that, surely C# is more the result of Microsoft having this Java product, and a team behind it, but no longer being able to use Java trademarks. Microsoft would surely have known the likely outcome of the lawsuit before it was ever filed – that was fairly obvious, I predicted the outcome correctly the first time I heard about it – and considered what to do with their investment in the J++ product. C# is surely the result.

    C# has innovated a lot since then, absolutely, and leapt ahead of Java in terms of language features for professional application development.

    • The points you make are tactical and strategic; as I noted, it is certainly the case that C# was a strategic move in the context of the ongoing situation with Sun. But from a language design point of view it is more accurate to say that both C# and Java are responses to what is considered good and bad about C++.

      • While C# and Java are both offshoots of C++, some of the decisions in C# seem to have been motivated by the way Java did things, including my biggest pet peeve (the need to explicitly typecast “double” to “float”, even when the requirement makes code needlessly brittle). Further, while C++ makes a clear distinction between “foo->bar=3” and “foo.bar=3”, C# follows Java’s lead in dropping the former syntax and using the latter instead (even though .NET–unlike the JVM–has the tools necessary to benefit from the distinction). The JVM is designed to be simple, and Java fits it well. The .NET runtime is more sophisticated, but C# doesn’t benefit as much as it should from that sophistication.

        • I’d add to that the decision to replace multiple inheritance with single inheritance plus interfaces, which as far as I know was a Java innovation.

          Also losing C++’s const, which I miss in both languages. From Java’s point of view, I guess it makes sense, although I don’t like it – Java is intentionally a minimalist OO language, and const places a burden of work and of learning on anyone using the language.

          C# has never, even from the start, aimed at being so minimalist, as far as I can tell, so I don’t know why it’s not there. Maybe everyone else thinks const is more of a PITA than I do, or more likely, IMHO, it’s the Java language-design influence.

          • Plus of course the decision to run code in a virtual machine. That’s a pretty big deal, although maybe not language design per se. I appreciate that the CLR is a different beast to the JVM, with some different design considerations, but they’re more similar to each other in concept than either is to native code.

            Also garbage collection, although it’s not rare in post-C++ languages.

          • The omission of `const` correctness is part of a more general problem, which is that .NET really has only one means of letting a called method work with a class object: by passing a promiscuous object reference. In C++, by contrast, one has a choice of whether to pass a `foo*`, a `foo&`, a `const foo*`, or a `const foo&`. And of course, a `foo` (which would represent a copy of the object). Although failure to consider the future when selecting which kind of parameter to use may cause one to get “stuck” later, using promiscuous object reference parameters doesn’t solve that problem but makes it worse. If a reference to a mutable-class object is passed into a method, correct and efficient coding will require that the authors of the method and the caller both know who (if anyone) will be allowed to modify the object in future, and what such modification should mean. The fact that a compiler won’t squawk if a called method uses a reference in a way contrary to the caller’s expectation doesn’t mean the code will work; it simply means the compiler will allow invalid code.

          • I am on record numerous times on StackOverflow noting that C/C++-style const correctness is a pretty weak language feature. The guarantee that I want to be made by the type system is that if something is a constant Blah then it will not be observed to change. That’s not at all the guarantee that the language gives you; a constant Blah is rather a Blah that *you cannot change via a given reference*! A type should be an invariant; when you say that a variable is of type string then the invariant is that any valid reference in that variable really is to a string. But const type annotations are not invariants; rather, they are restrictions on what operations you can perform via that reference.

            Const in C/C++ is like ReadOnlyList; it doesn’t enforce that the underlying list is immutable. Rather, it gives you a facade that does not allow mutation. The name is deliberate; it is a list that can only be read, not a constant list.

          • I am on record numerous times on StackOverflow noting that C/C++-style const correctness is a pretty weak language feature. The guarantee that I want to be made by the type system is that if something is a constant Blah then it will not be observed to change.

            Well, C# const does give you what C++ does not in that respect, certainly. From what I can tell, the cost is that you are restricted in what types you can apply it to – value types, strings and null, if I’ve understood correctly. Whether the cost outweighs the benefit is a matter of taste and of priorities.

            That’s not at all the guarantee that the language gives you; a constant Blah is rather a Blah that *you cannot change via a given reference*!

            I do find that handy, though. When the compiler points out to me that I have tried to change something through a const reference, it generally means I’ve forgotten my design for that piece of code, and reminds me of what I planned to do. I am completely absent-minded and do find that helpful.

            A type should be an invariant; when you say that a variable is of type string then the invariant is that any valid reference in that variable really is to a string. But const type annotations are not invariants; rather, they are restrictions on what operations you can perform via that reference.

            Absolutely, yes. I deliberately tried to not refer to const as part of C++’s type system.

            Const in C/C++ is like ReadOnlyList; it doesn’t enforce that the underlying list is immutable. Rather, it gives you a facade that does not allow mutation. The name is deliberate; it is a list that can only be read, not a constant list.

            Absolutely true. Of course, if one pursues the thought, “Well, how could my language support me in writing a ReadOnly version of every class I write, without the onerousness of doing it by hand in full?”, then one (or at least I) pretty quickly arrives at C++ const.

          • In the absence of a runtime-backed immutable array type (which IMHO good frameworks should include for all types of elements–not just characters), there’s no way to make a collection which is deeply immutable from a type-system perspective and which also allows an arbitrary number of elements to be retrieved with constant access time. On the other hand, an object *instance* whose reference is never shared with anything that could possibly mutate it will be immutable, even if its *type* is not. Being able to pass a kind of reference which the recipient could not use to modify the target would assist with that, though for it to be most helpful a class should be able to specify that certain methods or properties are only available via non-restricted reference, and should also be able to have method overloading consider the type of reference used to access a method (so that a property of an unrestricted reference would yield an unrestricted reference, while one of a restricted reference would yield a restricted reference).

            If someone were to design a new framework, taking lessons from Java and .NET into account, I think an important feature would be to, for each object type, have multiple types of reference relating to different combinations of traits: a reference may encapsulate value or identify an entity; it may encapsulate exclusive ownership, long-term shared access (to either an immutable value or a shared entity), ephemeral access (reference cannot outlive scope), or guarded-secret access (may not be stored anyplace which would be accessible outside the class-object instance receiving it). Such traits should be part of each *reference* type, so that generic collections could know whether they hold entities, sharable values, or unsharable values (copying a collection of unsharable values must copy the nested values, but copying a collection of shared entity references must copy the references). Although the GC eliminates the need for the owners of value-holding objects to clean them up, it doesn’t eliminate the *usefulness* of tracking ownership. Mutable value objects should have exactly one well-defined owner; while it may be hard for a language to track all the references to an object, it should be possible for a language to help ensure that unshared value objects cannot accidentally turn into entities.

          • As an old-time Delphi programmer I need to point out that Anders’ previous language, whilst he was at Borland, was Delphi. It had single inheritance using object as a base class, etc. In v3 (probably mid-90s, since v7 was around 2002 IIRC) they added support for interfaces. Originally this was to support COM but the interface support was still useful in its own right.

          • According to http://en.wikipedia.org/wiki/Embarcadero_Delphi, Delphi 3 was released in 1997, when Java was on release 1.1.

            BTW, I am not claiming Java was the first language with single inheritance and interfaces, nor that Anders copied it from Java, or copied it at all – I don’t know. In my experience, lots of language ideas turn out to have been invented surprisingly early, and spent decades in academia before they break out into the mainstream.

            I think OO and functional programming were both invented before I was born, which just seems weird.

        • Undoubtedly the designers of C# were very familiar with the design choices in Java; it would be foolish to implement any C-like language these days without knowing the pros and cons of Java! But C# 1.0 was neither “Java with the stupid parts taken out” nor “Java with reliability and safety removed” as many biased commentators stated back in the day.

          • How about, “Java, preserving the sacred tradition that programming languages should be named with a capital C, followed by punctuation”?

            I kid, I kid.

