I was intrigued by Neil Deakin’s recent post where he says that when he was a young user, he got mad about all kinds of things that, after a few years as an implementor, he didn’t feel mad about anymore.
I’m with ya, Neil. Many of my computer geek high school friends were rather surprised to learn that I had taken a job at Microsoft, given my earlier criticisms of all x86-based software. (I was an Amiga partisan as a teenager.) I feel the same way you do — I was a naïve teenager.
Neil, however, doesn’t actually explain why it is that he finds his former beliefs to be naïve. I can’t speak for Neil, but I can take a stab at explaining what the revelation was for me. Basically it comes down to realizing two things: (1) good design is a series of difficult compromises, and (2) software is by its nature incredibly complex. I failed to appreciate either of those facts until I’d actually done a lot of it myself. Once I appreciated these things, I understood that we have imperfect software BECAUSE the people who make it are dedicated to writing quality software for end users, not in spite of that fact.
And it doesn’t matter what your business model is — open source or closed source, give away the software and sell consulting, shrink-wrapped boxes, micropayments, freeware, whatever. The money story is irrelevant; all software development is about coming up with an implementable, usable design for a given group of end users and then finding enough resources to implement that design. There are only a finite number of programmers in the world, and way more problems that need solving than there are resources available to solve them all, so deciding which problems to solve for which users is crucial.
Sometimes — not always, but sometimes — not fixing a trivial, obscure bug with an easy workaround is the right thing to do if it means that you can spend that time working on a feature that benefits millions of customers. Sometimes features that are useless to you are life-savers for someone else. Sometimes — not always, but sometimes — using up a few more pennies of hard disk space (ten megs costs, what, a penny these days?) is justifiable. Sometimes — not always, but sometimes — proprietary designs allow for a more efficient design process and hence more resources available for a quality implementation. These are all arguable, and maybe sometimes we humans don’t make the _best_ choices. But what is naïve is to think that these are not hard problems that people think hard about.
When I was a teenager, I thought software engineering was trivial — but my end user was myself, and my needs were simple. Software development doesn’t scale linearly! Complex software that exists to solve the real-world problems of millions of end users is so many orders of magnitude harder to write that intuitions about the former can be deeply misleading.
And this is true in ALL design endeavors that involve tough choices and complex implementations! I was watching the production team commentary track for the extendamix of The Two Towers this weekend, and something that the producers said over and over again was that they got a lot of flak for even slightly changing the story from the book. But before you criticize that choice, you have to really think through all the implications of being slaves to the source material. How would the movie be damaged by showing Merry and Pippin’s story entirely in flashback? By not interlacing the Frodo-and-Sam story with the Merry-and-Pippin story? By making Faramir’s choice easy? By adding Erkenbrand as the leader of the Westfold? Yes, these are all departures from the book, but they also prevent the movie from being confusing and weak at the end. None are obvious — they’re all arguable points, and ultimately someone had to make a tough decision to produce a movie that wasn’t going to satisfy everyone fully, but would nevertheless actually ship to customers!
There is no perfect software; the world is far too complex and the design constraints are too great for there to be anything even approaching perfect software. Therefore, if you want to ship the best possible software for your users, you’ve got to carefully choose your imperfections. As the guy who writes tools for you software engineers, my mission is to make tools that afford deeper abstractions and hence require fewer developer resources to implement. That’s the only way we’re going to get past the fundamental problems of complexity and expense.
Tags: Rants
Comments (6)
- Martijn (November 24, 2003 at 8:11 pm): OK, about that software geek thingie, you are probably right, I can’t tell, but please don’t say they had to make all these changes to the Lord of the Rings book. Peter Jackson has a pretty good insight into what the books were about, but I think that he has made some fatal errors in the second movie by changing the story. I think the book is all about choices (what do you do when you are confronted with evil/power), and Peter has changed that behavior of some characters in the movie in a really bad way.
- Daniel (November 24, 2003 at 10:02 pm): So what are your thoughts on this: http://www.fastcompany.com/online/06/writestuff.html
  Can NASA get perfect software? “This software never crashes. It never needs to be re-booted. This software is bug-free.”
- sil (November 25, 2003 at 2:57 am): I agree entirely that your eyes are opened when you get into collaboratively developing code with a big team and putting together proper stuff rather than one-shot code for yourself, absolutely. However, the shift from user (or one-man coder) to software developer can also bring with it less of a focus on usability, because you forget everything that you didn’t know as that user; some stuff is just so *obvious* as a hacker that it never occurs to you that it’s confusing or non-obvious to others. I’m not necessarily suggesting that you’re guilty of this, Eric, but there’s not a lot of difference between “you don’t understand how putting software together works — you’re naive and should learn” and “you don’t understand how software works — you’re naive and should learn”, I feel. Dangerous ground, where it’s easy to sell out those youthful passions and compromise too much…
- Stuart Dootson (November 25, 2003 at 4:15 am): Daniel: no, NASA cannot get perfect software... or rather, they *may* get perfect software, but they cannot know it — they cannot know if there are still defects in their software, unless they’ve decided to leave them in. The other thing to think about is that by deciding to try and minimise the number of bugs in their software, NASA have effectively decided that they are going to spend an awful lot of money on what is effectively a small functionality set. They probably spend 10-100 times what Microsoft (or any other PC software vendor) spend per function point — possibly more. The testing budget is probably 50-60% of the total budget. (And yes, I’m speaking from experience — I’ve developed safety-critical software to DO-178B Level A.)
- Peter Torr (November 25, 2003 at 12:41 pm): It would be more correct to say “This software has never crashed”. Also, Eric is talking about building software for millions of people who do arbitrarily complex things with it, combine it with other arbitrary software and hardware, and so on. The Shuttle group has an incredibly narrow focus for their software: it must launch the shuttle, and that’s it. I bet they don’t let the crew watch DVDs or run the SETI screen saver on the same machine that fires the engines 😉
- Mike Dimmick (November 25, 2003 at 2:48 pm): The space shuttle software also has an extremely limited range of hardware that it has to run on, while Windows has to run on the entire gamut of PC hardware. NASA reportedly had a bit of a crisis recently because they weren’t able to source 8086 processors for the space shuttle computers.
Good essay, but there are two burrs marring it for me.
1) You say, ” …but once you bought the software, it was yours.” Well, no, that’s not how I’m seeing things, and I’m shocked that as a Microsoft employee you’re spreading *that* mis-truth. Microsoft is *well known* for taking pains to make it known that no, we don’t own the software we bought. However, it’s not relevant to the main thrust of your essay. But it’s still a burr that needs sanding down.
2) The essay boils down to, and you said it: “use the right tool for the right job.” While it’s good to conduct an overview of rich vs. thin client models the way you did, this is one of many such rants I’ve seen online, and it’s getting boring. Far better for us to start writing “why are people not choosing the right tools for the right job?” and “With those answers, how are we going to change things to make it better?”
You know, thinking back, I think the last time I made an anonymous post to a public forum, the Amiga 500 was the cutting edge of technology. Any particular reason why you’re unwilling to put your name on your opinions? I promise to not sign you up for those “bill me later” magazine subscriptions.
> we don’t own the software we bought
The nuances of intellectual property law and the rights of property holders are fascinating topics which I have no intention of getting into at this time. My point, perhaps poorly expressed, was simply that once you bought the software, obtaining updates was typically a matter of buying more shrink-wrapped boxes. This is in contrast with the web model, where ownership is even more nebulous and updating is trivial.
> it’s getting boring
If my frequently updated, highly technical and excessively detailed writing bores you, I invite you to read one of the billion or so other blogs now available on the internet.
> Far better for us to start writing “why are people not choosing the right tools for the right job?” and “With those answers, how are we going to change things to make it better?”
Good questions.
My mission is to throw developers into the Pit Of Quality. Historically, developer tools make developers climb the Hill of Quality — you have to do lots of work to ensure quality software, because the tools can easily work against that goal. Instead, we must make it HARD to write bad software.
Consider C++ — if you want to write a buffer overrun, you can do so in about 30 seconds. Want to write a buffer overrun in C#? You’ve got to WORK IT, baby! Writing buffer overruns in C# is HARD.
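To make that concrete, here is a minimal sketch of my own (not from the original post), assuming a using System; directive: in C#, every array access is bounds-checked by the runtime, so the off-by-the-end write that silently corrupts memory in C++ surfaces immediately as an exception.

char[] buffer = new char[10];
try
{
    buffer[20] = 'x'; // in C++, this could silently scribble past the buffer
}
catch (IndexOutOfRangeException)
{
    // The CLR checks every array index, so the bug announces itself
    // loudly at the point of failure instead of becoming an exploit.
    Console.WriteLine("Out-of-range write caught by the runtime.");
}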
That’s how we make it better — with a combination of user education and tools which lead developers to good practices by their very nature.
This is an EXTREMELY difficult design problem, and we will never be done. There is _always_ room for improvement in the tools space. (My job is secure!) But we are certainly working on it. In fact, the usability PM across the hall from me is about to embark upon a detailed study of how developers make mistakes using Visual Studio, how they realize they’ve made a mistake, and how they correct it. I’m sure it will be a fascinating study.
Choosing the proper tool for the job is always sound advice, and there is a lot of truth in the comments you make. To the list of distinctions between languages for “thin” vs. “fat” clients I would add behavior in the face of adversity. When encountering an unexpected situation, or even an error, a language like JavaScript should “try to get along” by doing the “right thing” — the spirit of HTML. With application development languages like C++ and C#, I would prefer a crash over making any sort of assumptions about my intentions.
Where I do take a bit of exception to your post is that it’s very black and white. I believe there are scenarios where the distinctions between client and server can become blurred. Essentially, you can have the client and the server running together on the same computer, even within the same process. I’ll give two examples of applications I’ve designed and developed.
The first application, which I’ve mentioned here before, implemented its UI using HTML and JavaScript. Underneath, it utilized an ActiveX control, pushing in data as an OSP. I think the combination of a COM control doing the heavy lifting beneath an HTML front end is/was a very powerful one. I remember Microsoft trying to push this model very hard at the ’97 PDC. I never understood why it hasn’t caught on.
Another project had a stand-alone application that hosted the browser control. The UI was created by applying XSLT to XML to generate DHTML powered by JavaScript. The container also exposed methods to the JavaScript through window.external.
In both cases the combination was very powerful because C++ provided power and stability, and JavaScript ease of development and flexibility. Also, in both cases the customer could modify the UI extensively because the XML/HTML/JavaScript portion was provided in source format.
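As a hedged illustration of that window.external pattern (using the later .NET WebBrowser control rather than the C++ container described above, and with invented names like ScriptBridge and app.html), the host exposes a COM-visible object that page script can call:

using System.Runtime.InteropServices;
using System.Windows.Forms;

[ComVisible(true)] // the script engine can only call into COM-visible objects
public class ScriptBridge
{
    // Page script invokes this as: window.external.Save(xmlText);
    public void Save(string xml)
    {
        System.IO.File.WriteAllText("state.xml", xml); // persist UI state
    }
}

// Host setup, somewhere in the form's initialization:
// WebBrowser browser = new WebBrowser();
// browser.ObjectForScripting = new ScriptBridge();
// browser.Navigate("app.html");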
>>…what I call “frog boiling”. You throw a frog into a pot of boiling water, it jumps out. You throw a frog into a pot of cold water and heat it up slowly, and you get frog soup. <<
Another internet myth.
http://www.snopes.com/critters/wild/frogboil.htm
> My mission is to throw developers into the Pit Of Quality. Historically, developer tools make developers climb the Hill of Quality — you have to do lots of work to ensure quality software, because the tools can easily work against that goal. Instead, we must make it HARD to write bad software.
Consider me a pessimist, but I believe producing quality code will always be an uphill battle. For example, while switching from C++ to C# or Java helps you avoid buffer overruns, you introduce whole new sets of potential problems. Check out this C++ snippet:
for ( int i = 0 ; i < 1000000 ; ++i ) {
char buffer[1000];
…
}
Now let’s naively translate this to Java:
for ( int i = 0 ; i < 1000000 ; ++i ) {
char[] buffer = new char[1000];
…
}
Can you see the problem here? I’ve uncovered this type of problem several times in production code. And to make matters worse, unless you review or profile the code you may never find this problem.
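For readers who don’t spot it: the C++ version reuses a single stack allocation, while the Java version performs a million short-lived heap allocations and creates needless garbage-collector pressure. The usual fix, shown here as a C# sketch of my own rather than the commenter’s code, is to hoist the allocation out of the loop:

char[] buffer = new char[1000]; // allocate once, outside the loop
for (int i = 0; i < 1000000; ++i)
{
    Array.Clear(buffer, 0, buffer.Length); // reset per iteration if needed
    // ...
}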
Other issues also come to mind, such as deterministic finalization (yes, I know about the using keyword, but it’s still much easier to do in C++). And if GC makes life so much easier, why did MSDN Magazine run a three-part series on the ins and outs of the .NET GC?
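For reference, a small sketch of the C# pattern being alluded to (the file name is invented for illustration): the using statement gives deterministic cleanup for a single scope, though unlike a C++ destructor it must be written at every call site.

using (StreamReader reader = new StreamReader("data.txt")) // System.IO
{
    Console.WriteLine(reader.ReadLine());
} // reader.Dispose() runs here, even if an exception was thrown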
Don’t get me wrong, I do think C# and .NET are a step in the right direction (unless you are a LISPer or Smalltalk hacker, in which case you might consider it a step back 😉). I just think it’s a never-ending road.
I’m still wondering why we have concepts like thin and fat clients. It would be nice if code was mobile and just migrated to wherever it could execute most efficiently: DBMS access on some server, client redraw on the client, other code moving around heuristically depending on the execution circumstances.
Anu, I am sure there are many reasons for this, not least of which is our favourite topic, security. We (as an industry) can’t even get security “right” for well-defined systems, let alone systems in which the threat models change radically depending on where the code is. For example, moving code off the web server and onto the client poses a whole new set of issues (hint: the server can’t trust anything that comes from the client, and vice-versa).
Eric, in my opinion you’ve not looked closely at some of the received wisdom you treat as fact.
You state, for example, that any serious development language “must have a rigid type system”, and that it “must support compilation”. Though these statements represent conventional wisdom, I’m not certain they stand up to careful and open scrutiny.
Take the rigid typing issue, for example. I’d suggest that our obsession as an industry with rigid typing is a sort of self-fulfilling prophecy — that the insistence on rigid control over data type is in fact the source of so many challenging type-related bugs. In more than 5 years of use of JavaScript for serious application development, I’ve learned (much to my surprise!) that by eliminating data type as a subject of constant concern, the entire collection of bugs relating to type mismatch disappears. We’ve implemented systems with very substantial JavaScript front ends and have only once suffered a meaningful bug relating to data typing. (And, actually, that one was really a result of JavaScript’s unfortunate overloading of “+”.) In our experience, the management of data types is something that can be safely left to the interpreter.
Similarly, consider your requirement for compilation. The compiled-code environment tends to discourage the kind of careful, incremental build-and-test methodology one can follow in a purely interpreted environment. In the compiled environment, developers tend to build larger chunks of functionality in between bouts of testing, and the size of those chunks rises in proportion to the complexity and delay involved in compilation. That, in turn, leads to more bugs and greater unwillingness to change course when a concept doesn’t work quite as well as expected.
Your reasonable counter to these arguments is one you address in your article — performance. Interestingly enough, this has never been an issue in any of our applications (though it’s caused a couple of struggles, particularly with IE’s reluctance to render large tables rapidly). After all, how much horsepower is really required to handle most user-side tasks? In the course of our life with JavaScript, the performance of the underlying platform has more than wiped out any advantage we might have gained, had we started in the compiled environment. Looking forward, this still seems to be a reliable expectation.
As one of the other responders mentioned, my wish is that Microsoft and other vendors would recognize that many of us have taken the HTML+JavaScript path for a good reason. The simplicity of Web distribution and the ability to distribute real business applications to any desktop is part of it. The efficiency, simplicity, and downright elegance of JavaScript is another part. Finally, the ability to hand off a whole collection of UI management tasks to the best rendering engine that’s ever existed — the modern browser — removes a significant load from the developer and is still another factor. Rather than the .Net model (a truly superb job, by the way) of heavyweight development environments, enormous demands on developers, and cumbersome development and deployment models, why not find a way to add professional tools and environment to HTML+JavaScript?
Interesting post, Andy. One thing about typing: you say that it is not a problem as long as the developer is careful not to make assumptions about the type of data. True – and we wouldn’t need garbage collection if developers were careful with their memory allocations, and we wouldn’t need checking on array bounds if developers were careful with their indices and buffer sizes, and so on.
What we’ve learnt (as an industry) is that developers are only human, which means they make mistakes and are sometimes lazy / careless / in a rush to ship a product out the door. Anything the system can do to catch these problems as early as possible is A Good Thing ™
See more at:
http://blogs.gotdotnet.com/ptorr/commentview.aspx/eccc1d2e-fb94-43a0-bf43-5adadb5f579f
We can get sillier than that. We wouldn’t need variables if programmers would just keep track of their pointer values… heck, we wouldn’t need programming languages at all if programmers would just talk straight to the chips in binary like they used to…
Writing software in the large is all about _managing complexity_, and static type checking is a great way to manage that complexity. My point is simply that if you have a lot of complexity to manage, use the right tools.
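A trivial illustration of the point, using the very overloading of “+” that an earlier comment lamented (this example is mine, not from the post): in JavaScript, "1" + 2 quietly yields the string "12" and the bug surfaces far away at runtime; in C#, the moment you try to treat the result as a number, the compiler objects.

string s = "1";
int n = 2;
// int sum = s + n;          // does not compile: cannot convert string to int
int sum = int.Parse(s) + n;  // the intended numeric addition, stated explicitly
Console.WriteLine(sum);      // 3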
This post is right up my alley. For 3 years I worked on a prototype client runtime system built around HTML, JavaScript and XML (yes, for its time it was very buzzword-compliant!) called Sash. We always thought that blurring the distinction between thin client web apps and fat client desktop apps was the sweet spot for developers to be in, and our entire platform was built around that idea. We were able to realize significant development time returns because most of the heavy lifting was done by COM objects hosted in a consistent manner with a very granular security model. We built all manner of very rich client apps on top of JavaScript and HTML, so I think that some of the distinctions you make in your post, while valid, don’t take into account the fact that most developers who are using such a system are disciplined enough to practice good modularization and make the language and runtime work for them. I guess my point is that it really comes down to developer discipline and not the language model when you are trying to design any app (thin, rich, fat, bloated… whatever). As an aside, I think the term we used for our model was “muscle client” 😉
I have also been involved in creating a “rich client” for IE using JavaScript, DHTML and XML. This product has been shipping for almost 4 years and is very stable. Yes, we have had to deal with those nasty performance issues, but they seem no worse than in other environments (except for the lack of tools). Sound programming saves you in any environment.
The one issue that I want to bring up is that, in your history overview, you jump right from a discussion of thin clients to your vision of a rich client using a technology that doesn’t exist yet. What about those products that had to be built in the in-between time? From, say, 1998-2005?
We cannot wait until Longhorn ships to build the products we want to build. Our product was started in late 1999, when there were really just a few options: Java, ActiveX, or JavaScript and DHTML. I stand by our decision to use these technologies; they were the best out there (and still are).
Now, this does not mean that I am not looking forward to moving to .NET and newer technologies if/when they ship. 😉
Interesting article, Eric. I’ve been involved with the development of rich GUIs built using web technology for some time, including one of the leading CRM packages available today. I agree that JavaScript and DHTML are not the best tools for rich client problems. But unfortunately the hype around web apps has meant that most enterprise application software (Oracle, Siebel, PeopleSoft…) sold today has web-based UIs.
There are several reasons for this. First, easy distribution of updates of web apps addressed a need that wasn’t being addressed sufficiently by Java or .Net. Until the .Net runtime is as pervasive as IE on end-user desktops, IT groups in big organizations will prefer web-based apps for this reason alone. Another factor is that web-based UIs are, as you point out, fast and easy to develop. This has effects that are lost on companies that made their money in the shrink-wrap past. Web UIs can be prototyped quickly (independent of the server-side code), making it easy to get user and customer feedback. Web-apps are also easier to integrate with the web-based intranets found in today’s companies. IT staff are typically more comfortable customizing web-based apps, something most businesses do with enterprise application software that they don’t do with Word or Excel.
HTML and JavaScript skills are more common than VB, C#, or C++ skills in IT departments, and for that matter in most software companies today. Unfortunately, software developers and IT staff tend to pick the language for a project based on expertise, not suitability. Things will probably stay that way until languages and tools become so easy to learn that programmers stop tying their professional careers to them. I look forward to that day. When that happens, people in our profession will identify more with the domain they work in (e.g., healthcare) than with the tools/technology (C#, JScript) they use.
Longhorn XAML is a simple, expandable browser model (.NET-powered: hosting and creation of .NET objects as first-class BOM elements/objects, a mix of .NET and the DOM infoset), so what are you talking about? A more powerful hosting environment (smart client, anyone?) means a more powerful client/server/…
And any developer (experienced ones in particular) will always choose the easiest, most flexible way to do the job, no matter whether someone else thinks that’s not right.
As for expando properties and attached behaviors: I still miss them in Longhorn. Even if the functionality is present, its implementation is not transparent enough (dependency properties, etc.) compared to IE+JavaScript, and THIS is BAD.
I love this topic and agree with many of the posts here. I like to consider myself a developer who — before implementing any architecture/technology — understands it.
I’ve recently been developing an IE/JavaScript/DHTML/XML framework that takes advantage of the MS XMLHTTPRequest object and Inner-Browsing. In the process I’ve learned quite a lot about JS and rather like it. I had previously thought it was a toy language — but its OO abilities, among other things, are rather kick-butt.
On the surface, a number of things are GOOD:
1. Fast development cycle.
2. The ability to create a rich UI via DHTML. (I spent over 9 years doing DOS, Win3.1, Win95, Win2K multimedia apps and can recite almost every GDI API there is. The amount of processing going on to change the class for a mouseover event amazes me…)
3. The ability to maintain state on the client via XML data islands.
4. Synchronous posts/gets via the XMLHTTPRequest object.
5. Few to no ‘back-button’ issues.
6. The server code gets simpler and thus more scalable because it essentially just authenticates, saves and fetches XML.
7. No more complicated code saving state at the server and re-creating the entire page. (Fewer bugs.)
So what’s wrong? Well, if JS has the memory leaks I read about, then what is the point? Why aren’t tools available to help pinpoint when a leak occurs? Why aren’t there good IDEs for JS debugging? I’d like to know if this is a viable framework before proceeding…
Anyone develop an industrial app like this ?