Thin To My Chagrin

I’m going to take a quick intermission from talking about the type system, but we’ll pick it up again soon.  I’ve been thinking a lot lately about philosophical and practical issues of thin client vs. rich client development.  Thus, I ought to first define what I mean by “thin client” and “rich client”. 


We used to think of Windows application software as being shrink-wrapped boxes containing monolithic applications which maybe worked together, or maybe were “standalone”, but once you bought the software, it was static — it didn’t run around the internet grabbing more code.  If the application required buff processing power and lots of support libraries in the operating system, well, you needed to have that stuff on the client.  The Windows application model required that the client have a rich feature set.

This is in marked contrast to the traditional Unix model of application software.  In this model the software lives on the server and a bunch of “thin” clients use the server.  The clients may have only a minimal level of memory and processing power — perhaps only the ability to display text on a screen!

The ongoing explosion of massive interconnection via the Internet starting in the 1990’s naturally led software developers to rethink the monolithic application model.  Multi-tiered development was the result — the front end that the user sees is just a thin veneer written in HTML, while the underlying business logic and data storage happens on servers behind the scenes.  The client has to be quite a bit fatter than a Unix “dumb terminal”, but if it can run a web browser, it’s fat enough. 

This was a great idea in many ways.  Multi-tiered development encourages encapsulation and data locality.  Encapsulating the back end means that you can write multiple front-end clients, and shipping the clients around as HTML means that you can automatically update every client to the latest version of the front end.  My job from 1996 to 2001 was to work on the implementation of what became the primary languages used on the front-end tier (JScript) and the web server tier (VBScript).  It was exciting work.

Right now, we’re looking to the future.  We’ve made a good start at letting people develop thin-client multi-tiered applications in Windows, but there is a lot more we can do.  To do so, we need to understand exactly what goodness is.  So let me declare right now Eric Lippert’s Rich Client Manifesto:

The thin-client multi-tiered approach to software development squanders the richness available on the vast majority of client platforms that I’m interested in.  We must implement tools that allow rich client application developers to attain the benefits of the thin-client multi-tiered model.

That’s the whole point of the .NET runtime and the coming Longhorn API.  The thin client model lets you easily update the client and keeps the business logic on the back tier?  Great — let’s do the same thing in the rich client world, so that developers who want to develop front ends that are more than thin HTML-and-script shells can do so without losing the advantages that HTML-and-script afford. 


I’ve been thinking about this highfalutin theoretical stuff recently because of some eminently practical concerns.  Many times over the years I’ve had to help out third party developers who have gotten themselves into the worst of both worlds.  A surprisingly large number of people look at the benefits of the thin client model — easy updates (the web), a declarative UI language (HTML), an easy-to-learn and powerful language (JScript) — and decide that this is the ideal environment to develop a rich client application.

That’s a bad idea on so many levels.  Remember, it is called the thin client model for a reason.  I’ve seen people who tried to develop database management systems in JScript and HTML!  That’s a thorough abuse of the thin client model — in the thin client model, the database logic is done on the back end by a dedicated server that can handle it, written by database professionals in hand-tuned C.  JScript was designed for simple scripts on simple web pages, not large-scale software.

Suppose you were going to design a language for thin client development and a language for rich client development.  What kinds of features would you want to have in each?

For the thin client, you’d want a language that had a very simple, straightforward, learn-as-you-go syntax.  The concept count, the number of concepts you need to understand before you start programming, should be low.  The “hello world” program should be something like

print "hello, world!"

and not

import library System.Output;
public startup class MainClass
  public static startup function Main () : void
     System.Output("hello, world!");

It should allow novice developers to easily use concepts like variables and functions and loops.  It should have a loose type system that coerces variables to the right types as necessary.  It should be garbage collected.  There must not be a separate compile-and-link step.  The language should support late binding.  The language will typically be used for user interface programming, so it should support event-driven programming.  High performance is unimportant — as long as the page doesn’t appear to hang, it’s fast enough.  It should be very easy to put stuff in global state and access it from all over the program — since the program will likely be small, the lack of locality is relatively unimportant.
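
A few lines of JavaScript (the variable names here are invented for illustration) show the kind of looseness I mean: the engine quietly coerces types and resolves names at run time, which is exactly what you want in a fifty-line page script.

```javascript
// Loose typing: the engine coerces as necessary, so naive code "just works".
var count = "2";                     // a string, as if read from a form field
var total = count * 3;               // "*" forces a numeric coercion: 6
var label = "Total: " + total;       // "+" coerces the number back to a string

// Late binding: members can be looked up by name at run time.
var obj = { greet: function () { return "hello, world!"; } };
var member = "greet";
var result = obj[member]();          // resolved when the line executes
```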

In short, the language should enable rapid development of simple software by relatively unsophisticated programmers through a flexible and dynamic programming model. 

OK, what about the rich-client language?  The language requirements of large-scale software are completely different.  The language must have a rigid type system that catches as many problems as possible before the code is checked in.  There must be a compilation step, so that there is some stage at which you can check for warnings.  It must support modularization, encapsulation, information hiding, abstraction and re-use, so that large teams can work on various interacting components without partying on each other’s implementation details.  The state of the program may involve manipulating scarce and expensive resources — large amounts of memory, kernel objects such as file handles, etc.  Thus the language should allow for fine-grained control over the lifetime of every byte.

Object Oriented Programming in C++ is one language and style that fits this bill, but the concept count of C++ OOP is enormous — pure, virtual, abstract, instance, static, base, pointers, references…  That means that you need sophisticated, highly educated developers.  The processing tasks may be considerable, which means that performance becomes a factor.  Having a complex “hello world” is irrelevant, because no one uses languages like this to write simple programs.

In short, a rich-client language should support large-scale development of complex software by large teams of sophisticated professional programmers through a rigid and statically analyzable programming model.

Complete opposites!  Now, what happens when you try to write a rich client style application using the thin client model? 

Apparent progress will be extremely rapid — we designed JScript for rapid development.  Unfortunately, this rapid development masks serious problems festering beneath the surface of apparently working code, problems which will not become apparent until the code is an unperformant mass of bugs. 

Rich client languages like C# force you into a discipline — the standard way to develop in C# is to declare a bunch of classes, decide what their public interfaces shall be, describe their interactions, and implement the private, encapsulated, abstracted implementation details.  That discipline is required if you want your large-scale software to not devolve into an undebuggable mess of global state.  If you can modularize a program, you can design, implement and test it in independent parts.

It is possible to do that in JScript, but the language does not by its very nature lead you to do so.  Rather, it leads you to favour expedient solutions (call eval!) over well-engineered solutions (use an optimized lookup table).  Everything about JScript was designed to be as dynamic as possible.
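
Here is a sketch of that trade-off (the handler names are hypothetical): the eval version re-parses a string of code on every single call, while the lookup table pays its cost once and thereafter does a cheap property fetch.

```javascript
function double(x) { return x * 2; }
function square(x) { return x * x; }

// The expedient solution: build a code string and eval it every time.
function runHandlerSlow(name, arg) {
  return eval(name + "(" + arg + ")");  // re-parsed and compiled per call
}

// The well-engineered solution: an optimized lookup table, built once.
var handlers = { double: double, square: square };
function runHandlerFast(name, arg) {
  return handlers[name](arg);           // a plain property lookup
}
```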

Performance is particularly thorny.  Traditional rich-client languages are designed for speed and rigidity.  JScript was designed for comfort and flexibility.  JScript is not fast, and it uses a lot of memory.  Its garbage collector is optimized for hundreds, maybe thousands of outstanding items, not hundreds of thousands or millions.
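
To make that concrete, here is a sketch (with invented function names) of the allocation pattern that swamps a script engine’s collector, next to the reuse pattern that keeps the outstanding-item count tiny:

```javascript
// Allocation-heavy: every iteration creates a new object and a new array,
// all of which immediately become garbage for the collector to track.
function wasteful(n) {
  var last;
  for (var i = 0; i < n; i++) {
    last = { index: i, payload: new Array(100) };
  }
  return last.index;
}

// Allocation-light: one scratch object is reused, so the collector
// never sees more than a handful of outstanding items.
function frugal(n) {
  var scratch = { index: 0, payload: new Array(100) };
  for (var i = 0; i < n; i++) {
    scratch.index = i;   // mutate in place; no new garbage per pass
  }
  return scratch.index;
}
```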

So what do you do if you’re in the unfortunate position of having a rich client application written in a thin-client language, and you’re running into these issues?

It’s not a good position to be in.

Fixing performance problems after the fact is extremely difficult.  The way to write performant software is to first decide what your performance goals are, and then to MEASURE, MEASURE, MEASURE all the time.  Performance problems on bloated thin clients are usually a result of what I call “frog boiling”.  You throw a frog into a pot of boiling water, it jumps out.  You throw a frog into a pot of cold water and heat it up slowly, and you get frog soup.  That’s what happens — it starts off fine when it is a fifty line prototype, and every day it gets slower and slower and slower… if you don’t measure it every day, you don’t know how bad it’s getting until it is too late.  The best way to fix performance problems is to never have them in the first place.

Assuming that you’re stuck with it now and you want to make it more usable, what can you do?

  • Data is bad. Manipulate as little data as possible.  That’s what the data tier is for.  If you must manipulate data, keep it simple — use the most basic data structures you can come up with that do the job.
  • Code is worse.  Every time you call eval, performance sucks a little bit more.  Use lookup tables instead of calling eval.  Move code onto the server tier. 
  • Avoid closures.  Don’t nest your functions unless you really understand closure semantics and need them.
  • Do not rely on “tips and tricks” for performance.  People will tell you “declared variables are faster than undeclared variables” and “modulus is slower than bit shift” and all kinds of nonsense.  Ignore them.  That’s like mowing your lawn by going after random blades of grass with nail scissors.  You need to find the WORST thing, and fix it first.  That means measuring.  Get some tools — Visual Studio Analyzer can do some limited script profiling, as can the Numega script profiler, but even just putting some logging into the code that dumps out millisecond timings is a good way to start.  Once you know what the slowest thing is, you can concentrate on modularizing and fixing it.
  • Modularize.  Refactor the code into clean modules with a well-defined interface contract, and test modules independently. 
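
The measurement advice in the bullets above can be started with a very small amount of code.  This is only a sketch (the wrapper and the wrapped function are invented for illustration), but it is the kind of millisecond logging that finds the WORST thing:

```javascript
// Accumulate per-name elapsed milliseconds for any wrapped function.
var timings = {};

function timed(name, fn) {
  return function () {
    var start = new Date().getTime();
    var result = fn.apply(this, arguments);  // call through unchanged
    timings[name] = (timings[name] || 0) + (new Date().getTime() - start);
    return result;
  };
}

// Wrap a suspect, run the scenario, then read `timings` to see where
// the time actually went before optimizing anything.
var suspectSort = timed("sort", function (arr) {
  return arr.slice().sort();
});
```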

But the best advice I can give you is simply use the right tool for the right job.  The script languages are fabulous tools for their intended purpose.  So are C# and C++.  But they really are quite awful at doing each other’s jobs!

Comments (15)

  1. Anon

    Good essay, but there are two burrs marring it for me.

    1) You say, ” …but once you bought the software, it was yours.” Well, no, that’s not how I’m seeing things, and I’m shocked that as a Microsoft employee you’re spreading *that* mis-truth. Microsoft is *well known* for taking pains to make it known that no, we don’t own the software we bought. However, it’s not relevant to the main thrust of your essay. But it’s still a burr that needs sanding down.

    2) The essay boils down to, and you said it: “use the right tool for the right job.” While it’s good to conduct an overview of rich vs. thin client models the way you did, this is one of many such rants I’ve seen online, and it’s getting boring. Far better for us to start writing “why are people not choosing the right tools for the right job?” and “With those answers, how are we going to change things to make it better?”

  2. You know, thinking back, I think the last time I made an anonymous post to a public forum, the Amiga 500 was the cutting edge of technology. Any particular reason why you’re unwilling to put your name on your opinions? I promise to not sign you up for those “bill me later” magazine subscriptions.

    > we don’t own the software we bought

    The nuances of intellectual property law and the rights of property holders are fascinating topics which I have no intention of getting into at this time. My point, perhaps poorly expressed, was simply that once you bought the software, obtaining updates was typically a matter of buying more shrink-wrapped boxes. This is in contrast with the web model, where ownership is even more nebulous and updating is trivial.

    > it’s getting boring

    If my frequently updated, highly technical and excessively detailed writing bores you, I invite you to read one of the billion or so other blogs now available on the internet.

    > Far better for us to start writing “why are people not choosing the right tools for the right job?” and “With those answers, how are we going to change things to make it better?”

    Good questions.

    My mission is to throw developers into the Pit Of Quality. Historically, developer tools make developers climb the Hill of Quality — you have to do lots of work to ensure quality software, because the tools can easily work against that goal. Instead, we must make it HARD to write bad software.

    Consider C++ — you want to write a buffer overrun in C++, you can do so in about 30 seconds. Want to write a buffer overrun in C#? You’ve got to WORK IT, baby! Writing buffer overruns in C# is HARD.

    That’s how we make it better — with a combination of user education and tools which lead developers to good practices by their very nature.

    This is an EXTREMELY difficult design problem, and we will never be done. There is _always_ room for improvement in the tools space. (My job is secure!) But we are certainly working on it. In fact, the usability PM across the hall from me is about to embark upon a detailed study of how developers make mistakes using Visual Studio, how they realize they’ve made a mistake, and how they correct it. I’m sure it will be a fascinating study.

  3. Dan Shappir

    Choosing the proper tool for the job is always sound advice, and there is a lot of truth in the comments you make. To the list of distinctions between languages for “thin” vs. “fat” clients I would add behavior in the face of adversity. When encountering an unexpected situation, or even an error, a language like JavaScript should “try to get along” by doing the “right thing” – the spirit of HTML. With application development languages like C++ and C# I would prefer a crash over making any sort of assumptions about my intentions.

    Where I do take a bit of an exception with your post is that it’s very black and white. I believe there are scenarios where the distinctions between client and server can become blurred. Essentially, you can have the client and the server running together on the same computer, even within the same process. I’ll give two examples of applications I’ve designed and developed.

The first application, which I’ve mentioned here before, implemented its UI using HTML and JavaScript.  Underneath it utilized an ActiveX control, pushing in data as an OSP.  I think the combination of a COM control doing the heavy lifting beneath an HTML front-end is/was a very powerful one.  I remember Microsoft trying to push this model very hard at the ’97 PDC.  I never understood why it hasn’t caught on.

    Another project had a stand-alone application that contained the browser control. The UI was generated by applying XSLT to XML in order to generate DHTML powered by JavaScript. The container also exposed methods to the JavaScript through window.external.

    In both cases the combination was very powerful because C++ provided power and stability, and JavaScript ease of development and flexibility. Also, in both cases the customer could modify the UI extensively because the XML/HTML/JavaScript portion was provided in source format.

  4. 6th Attempt to post comment

    >>…what I call “frog boiling”. You throw a frog into a pot of boiling water, it jumps out. You throw a frog into a pot of cold water and heat it up slowly, and you get frog soup. <<

    Another internet myth.

  5. Dan Shappir

    > My mission is to throw developers into the Pit Of Quality. Historically, developer tools make developers climb the Hill of Quality — you have to do lots of work to ensure quality software, because the tools can easily work against that goal. Instead, we must make it HARD to write bad software.

    Consider me a pessimist, but I believe producing quality code will always be an uphill battle. For example, while switching from C++ to C# or Java helps you avoid buffer overruns, you introduce whole new sets of potential problems. Check out this C++ snippet:

    for ( int i = 0 ; i < 1000000 ; ++i ) {
        char buffer[1000];
        // ... use buffer ...
    }


    Now let’s naively translate this to Java:

    for ( int i = 0 ; i < 1000000 ; ++i ) {
        char[] buffer = new char[1000];
        // ... use buffer ...
    }


    Can you see the problem here?  The C++ version reuses a single stack buffer, while the Java version allocates a million short-lived heap arrays and leaves them for the garbage collector.  I’ve uncovered this type of problem several times in production code.  And to make matters worse, unless you review or profile the code you may never find this problem.

    Other issues also come to mind, such as deterministic finalization (yes, I know about the using keyword, but it’s still much easier to do in C++).  And if GC makes life so much easier, why did MSDN Magazine run a three-part series on the ins and outs of the .NET GC?

    Don’t get me wrong, I do think C# and .NET are a step in the right direction (unless you are a LISPer or Smalltalk hacker, in which case you might consider it a step back 😉).  I just think it’s a never-ending road.

  6. Anu

    I’m still wondering why we have concepts like thin and fat clients.

    It would be nice if code was mobile and just migrated to where it could most efficiently execute.

    I.e., DBMS access on some server, client redraw on the client, and other code moving heuristically around depending on the execution circumstances.

  7. Anu, I am sure there are many reasons for this, not least of which is our favourite topic, security. We (as an industry) can’t even get security “right” for well-defined systems, let alone systems in which the threat models change radically depending on where the code is. For example, moving code off the web server and onto the client poses a whole new set of issues (hint: the server can’t trust anything that comes from the client, and vice-versa).

  8. Andy Schriever

    Eric, in my opinion you’ve not looked closely at some of the received wisdom you treat as fact.

    You state, for example, that any serious development language “must have a rigid type system”, and that it “must support compilation”. Though these statements represent conventional wisdom, I’m not certain they stand up to careful and open scrutiny.

    Take the rigid typing issue, for example. I’d suggest that our obsession as an industry with rigid typing is a sort of self-fulfilling prophecy — that the insistence on rigid control over data type is in fact the source of so many challenging type-related bugs. In more than 5 years of use of JavaScript for serious application development, I’ve learned (much to my surprise!) that by eliminating data type as a subject of constant concern, the entire collection of bugs relating to type mismatch disappears. We’ve implemented systems with very substantial JavaScript front ends and have only once suffered a meaningful bug relating to data typing. (And, actually, that one was really a result of JavaScript’s unfortunate overloading of “+”.) In our experience, the management of data types is something that can be safely left to the interpreter.
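
That one “+” bug is easy to demonstrate (the variable names are invented for illustration): values read from a form are always strings, and with a string operand “+” concatenates instead of adding.

```javascript
var qty = "2";                    // form fields always yield strings
var total = qty + 3;              // "23", not 5: the string operand wins
var fixedTotal = Number(qty) + 3; // 5: coerce explicitly before adding
```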

    Similarly, consider your requirement for compilation. The compiled-code environment tends to discourage the kind of careful, incremental build-and-test methodology one can follow in a purely interpreted environment. In the compiled environment, developers tend to build larger chunks of functionality in between bouts of testing, and the size of those chunks rises in proportion to the complexity and delay involved in compilation. That, in turn, leads to more bugs and greater unwillingness to change course when a concept doesn’t work quite as well as expected.

    Your reasonable counter to these arguments is one you address in your article — performance. Interestingly enough, this has never been an issue in any of our applications (though it’s caused a couple of struggles, particularly with IE’s reluctance to render large tables rapidly). After all, how much horsepower is really required to handle most user-side tasks? In the course of our life with JavaScript, the performance of the underlying platform has more than wiped out any advantage we might have gained, had we started in the compiled environment. Looking forward, this still seems to be a reliable expectation.

    As one of the other responders mentioned, my wish is that Microsoft and other vendors would recognize that many of us have taken the HTML+JavaScript path for a good reason. The simplicity of Web distribution and the ability to distribute real business applications to any desktop is part of it. The efficiency, simplicity, and downright elegance of JavaScript is another part. Finally, the ability to hand off a whole collection of UI management tasks to the best rendering engine that’s ever existed — the modern browser — removes a significant load from the developer and is still another factor. Rather than the .Net model (a truly superb job, by the way) of heavyweight development environments, enormous demand on developers, and cumbersome development and deployment models, why not find a way to add professional tools and environment to HTML + JavaScript?

  9. Interesting post, Andy. One thing about typing: you say that it is not a problem as long as the developer is careful not to make assumptions about the type of data. True – and we wouldn’t need garbage collection if developers were careful with their memory allocations, and we wouldn’t need checking on array bounds if developers were careful with their indices and buffer sizes, and so on.

    What we’ve learnt (as an industry) is that developers are only human, which means they make mistakes and are sometimes lazy / careless / in a rush to ship a product out the door. Anything the system can do to catch these problems as early as possible is A Good Thing ™


  10. We can get sillier than that. We wouldn’t need variables if programmers would just keep track of their pointer values… heck, we wouldn’t need programming languages at all if programmers would just talk straight to the chips in binary like they used to…

    Writing software in the large is all about _managing complexity_, and static type checking is a great way to manage that complexity. My point is simply that if you have a lot of complexity to manage, use the right tools.

  11. This post is right up my alley. For 3 years I worked on a prototype client runtime system built around HTML, JavaScript and XML (yes for its time it was very buzzword compliant!) called Sash. We always thought that blurring the distinction between thin client web apps and fat client desktop apps was the sweet spot for developers to be in and our entire platform was built around that idea. We were able to realize significant development time returns because most of the heavy lifting was done by COM objects hosted in a consistent manner with a very granular security model. We built all manner of very rich client apps all on top of JavaScript and HTML, so I think that some of the distinctions you make in your post, while valid, don’t take into account the fact that most developers who are using such a system are disciplined enough to practice good modularization and make the language and runtime work for them. I guess my point is that it really comes down to developer discipline and not the language model when you are trying to design any app (thin, rich, fat, bloated…whatever). As an aside, I think the term we used for our model was muscle client 😉

  12. Craig

    I have also been involved in creating a “rich client” for IE using JavaScript, DHTML and XML. This product has been shipping for almost 4 years and is very stable. Yes, we have had to deal with those nasty performance issues, but they seem no worse than in other environments (except for the lack of tools). Sound programming saves you in any environment.

    The one issue that I want to bring up is that, in your history overview, you jump right from a discussion of thin-clients to your vision of a rich client using a technology that doesn’t exist yet. What about those products that had to be built in the in-between time? From say, 1998-2005?

    We cannot wait to build the products we want to build until Longhorn ships. Our product was started in late 1999 when there were really just a few options, Java, ActiveX or JavaScript and DHTML. I stand by our decision to use these technologies, they were the best out there (and still are).

    Now, this does not mean that I am not looking forward to moving to .NET and newer technologies if/when they ship. 😉

  13. Jon Innes

    Interesting article, Eric. I’ve been involved with the development of rich GUIs built using web technology for some time–including one of the leading CRM packages available today. I agree that JavaScript and DHTML are not the best tools for rich client problems. But unfortunately the hype around web-apps has led to the fact that most enterprise application software (Oracle, Siebel, PeopleSoft…) sold today has a web-based UI.

    There are several reasons for this. First, easy distribution of updates of web apps addressed a need that wasn’t being addressed sufficiently by Java or .Net. Until the .Net runtime is as pervasive as IE on end-user desktops, IT groups in big organizations will prefer web-based apps for this reason alone. Another factor is that web-based UIs are, as you point out, fast and easy to develop. This has effects that are lost on companies that made their money in the shrink-wrap past. Web UIs can be prototyped quickly (independent of the server-side code), making it easy to get user and customer feedback. Web-apps are also easier to integrate with the web-based intranets found in today’s companies. IT staff are typically more comfortable customizing web-based apps, something most businesses do with enterprise application software that they don’t do with Word or Excel.

    HTML and JavaScript skills are more common than VB, C#, or C++ in IT departments and for that matter in most software companies today. Unfortunately, software developers and IT staff tend to pick the language for a project based on expertise, not suitability. That will probably stay that way until languages and tools become so easy to learn that programmers stop linking their professional careers with them. I look forward to that day. When that happens, people in our profession will identify more with the domain they work in (e.g., healthcare) rather than the tools/technology (C#, Jscript) they use.

  14. Andrey Skvortsov

    Longhorn XAML is a simple, expandable browser model (.NET powered: hosting/creation of .NET objects as first-class BOM elements/objects, a .NET+DOM infoset mix), so what are you talking about?

    A more powerful hosting environment (smart client, anyone?) means a more powerful client/server/…

    And any developer (an experienced one in particular) will always choose the easiest, most flexible way to do the job, no matter whether that’s right in someone’s opinion.

    Expando properties/attached behaviors: I still miss them in Longhorn. Even where this functionality is present, its implementation is not transparent enough (dependency properties, etc.) compared to IE+JavaScript, and THIS is BAD.

  15. I love this topic and agree with many of the posts here. I like to consider myself a developer who – before implementing any architecture/technology – understands it.

    I’ve recently been developing an IE/JavaScript/DHTML/XML framework that takes advantage of the MS XMLHTTPRequest object and Inner-Browsing. In the process I’ve learned quite a lot about JS and rather like it. I had previously thought it was a toy language – but its OO abilities, among other things, are rather kick-butt.

    On the surface – a number of things are GOOD:

    1. fast development cycle.

    2. The ability to create a rich UI via DHTML (I spent over 9 years doing DOS, Win3.1, Win95, Win2K multi-media apps and can recite almost every GDI API there is. The amount of processing going on to change the class for a mouseover event amazes me…)

    3. The ability to maintain state on the client via XML data islands

    4. Synchronous posts/gets via the XMLHTTPRequest object.

    5. Few to no ‘back-button’ issues

    6. The server code gets simpler and thus more scalable because it essentially just authenticates, saves and fetches XML.

    7. No more complicated code saving state at the server and re-creating the entire page. (Less bugs)

    So what’s wrong? Well, if JS has the memory leaks I read about, then what is the point? Why aren’t tools available to help pinpoint when a leak occurs? Why aren’t there good IDEs for JS debugging? I’d like to know if this is a viable framework before proceeding…

    Anyone develop an industrial app like this ?
