I keep talking about script performance without ever actually giving my rant about why
most of the questions I get about performance are pointless at best, and usually downright
harmful. Let me give you an example of the kind of question I’ve gotten dozens of times over the
last seven years. Here’s a representative one:
We have some VBScript code that
DIMs a number of variables in a well-used function. The code never actually
uses those variables and they go out of scope without ever being touched. Are
we paying a hidden price with each call?
Now that is an interesting performance question! In
a language like C, declaring n bytes total of local variables just results in the
compiler generating an instruction that moves the stack pointer n bytes. Making
n a little larger or smaller doesn’t change the cost of that instruction. Is
VBScript the same way? Surprisingly,
no! Here’s my analysis:
VBScript supports late-bound access to local variables by name — through Eval and
Execute, for example — so if you dim it, you get it. VBScript has no idea whether you’re going to do this or
not, so in order to enable this scenario the script engine must at runtime copy all of the names of
the local variables into a local binder. That
causes an added per-variable-per-call expense.
(Note that JScript .NET does attempt to detect
this scenario and optimize it, but that’s another post.)
So what is the added expense? I happened
to have my machine set up for perf measuring that day, so I measured it:
On my machine, every additional
variable which is dimensioned but not used adds a 50
nanosecond penalty to every call
of the function. The effect appears to scale linearly with the number of unused
dimensioned variables; I did not test scenarios with extremely large numbers of unused
variables, as these are not realistic scenarios. Note also that I did not test
very long variable names; though VBScript limits variable names to 256 characters,
there may well be an additional cost imposed by long variable names.
My machine is a 927 MHz Pentium III, so that’s somewhere around fifty processor cycles
each. I do not have VTUNE installed right now, so I can’t give you an exact
processor cycle count.
That means that if your heavily used function has, say, five unused variables then every
four million calls to your function will slow your program down by an entire second,
assuming of course that the target machine is my high-end dev machine. Obviously
a slower machine may exhibit considerably worse performance.
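The arithmetic behind that claim is easy to check. Here is a quick sanity check in Python (the 50 ns figure is the measurement above):

```python
# Back-of-the-envelope check: five unused Dims at the measured
# 50 ns penalty each, across four million calls.
PENALTY_SECONDS = 50e-9   # measured cost per unused variable per call
unused_vars = 5
calls = 4_000_000

total = PENALTY_SECONDS * unused_vars * calls
print(f"{total:.1f} second(s) of overhead")  # 1.0 second(s) of overhead
```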
Also, you do not mention whether you are doing this on a server or a client. That
is extremely important when doing performance analysis!
Since the penalty is imposed due to a heap allocation, the penalty on the server may scale
differently based on the heap usage of other threads running in the server.
There may be contention issues – my measurements measured only “straight” processor
cost; a full analysis of the cost for, say, an 8 proc heavily loaded server doing
lots of small-string allocations may well give completely different results.
But now let me take this opportunity to tell you that all
the analysis I just described is almost completely worthless because it obscures a
larger problem. There’s an elephant
in the room that we’re ignoring. The
fact that a user is asking me about performance of VBScript tells me that either
this user is a hard-core language wonk who wants to talk shop, or, more likely, the
user has a program written in script which he would like to be running faster. The
user cares deeply about the performance of his program.
And now we see why this perf analysis is worthless. If
the user cares so much about performance then why is he using a late-bound, unoptimized,
bytecode-interpreted, weakly-typed, extremely dynamic language specifically designed
for rapid development at the expense of runtime performance?
If you want a script to be faster then there are way more important things to be optimizing
away than the 50-nanosecond items. The
key to effective performance tuning is finding the most expensive thing and starting
with that. A single call that
uses an undimensioned variable, for example, is hundreds of times more expensive than
that dimensioned-but-unused variable. A single call to a host object model method
is thousands of times more expensive. Optimizing a script by trimming the 50 ns costs
is like weeding your lawn by cutting the already-short grass with nail scissors and
ignoring the weeds. It
takes a long time, and it makes no noticeable impact on the appearance of your lawn. It
epitomizes the difference between “active” and “productive”. Don’t do that!
But even better advice than that would be to throw
away the entire script and start over in C if performance is so important.
And now let me just take this opportunity to interrupt myself and say that yes, script
performance is important. We spent
a lot of time optimizing the script engines to be pretty darn fast for late-bound,
unoptimized, bytecode-interpreted, weakly-typed dynamic language engines. Eventually
you come up against the fact that you have to pick the right tool for the job — VBScript
is as fast as it’s going to get without turning it into a very different language or
reimplementing it completely.
But this second analysis is hardly better than the first, because again, there is an elephant
in the room. There’s a vital piece of
data which has not been supplied, and that is the key to all perf analysis:
How Bad Is Good Enough?
When I called that analysis worthless I was going easy on myself. I actually consider this sort of “armchair” perf analysis
to be not merely worthless but actively harmful.
I have read articles about the script engines that say things like “you should use And
to determine whether a number is even rather than Mod,
because the chip executes the And instruction
faster”, as though VBScript compiled down to tightly optimized machine code.
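For what it’s worth, the two tests really are equivalent, which makes the choice a matter of style, not speed. Sketched here in Python rather than VBScript:

```python
# The "use And instead of Mod" even-test trick: both forms give the same
# answer, which is exactly why picking one on "speed" grounds is a distraction.
def is_even_mod(n: int) -> bool:
    return n % 2 == 0

def is_even_and(n: int) -> bool:
    return n & 1 == 0

assert all(is_even_mod(n) == is_even_and(n) for n in range(-100, 100))
```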
People who base their choice of operator on utterly nonsensical rationales are
not going to write code that is maintainable
or robust. Those programs end up broken,
and “broken” is the ultimate in bad performance, no matter how fast the incorrect results are produced.
If you want to write fast code — in script or not — then ignore every article
you ever see on “tips and tricks” that tell you which operators are faster and what
the cost of dimensioning a variable is. Writing
fast code does not require a collection of cheap tricks, it requires analysis of user
scenarios to set goals, followed by a rigorous program of careful measurements and
small changes until the goals are reached.
- Have a user-focussed plan. Know what your performance goals are. Know what to measure to test whether those goals are met. Are you worried about throughput? Time to first byte? Time to last byte? Scalability?
- Measure the whole system, not just isolated parts.
- Measure carefully and measure often.
That’s what the MSN people do, and they know about scalable web sites.
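At its most minimal, that discipline looks something like this sketch in Python; the workload and the goal value are invented for the example:

```python
import time

# Sketch of goal-driven measurement: pick a target, measure the whole
# operation end to end, compare against the goal.
GOAL_SECONDS = 0.2  # made-up target for this illustration

def operation():
    return sum(range(100_000))  # stand-in for the real end-to-end work

start = time.perf_counter()
operation()
elapsed = time.perf_counter() - start

status = "met" if elapsed <= GOAL_SECONDS else "missed"
print(f"elapsed {elapsed * 1000:.2f} ms; goal {status}")
```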
I know that’s not what people want to hear. People have these ideas about performance
analysis which, as far as I can tell, last applied to PDP-11s. Script running
on web servers cannot be optimized through micro-optimization of individual lines
of code — it’s not C, where you can know the exact cost of every statement.
With script you’ve got to look at the whole thing and attack the most expensive things.
Otherwise you end up doing a huge amount of work for zero noticeable gain.
First, you’ve got to know what your goals are. Figure
out what is important to your users. Applications
with user interfaces have to be snappy — the core processing can take five minutes
or an hour, but a button press must result in a UI change in under .2 seconds to not
feel broken. Scalable web applications
have to be blindingly fast — the difference between 25 ms and 50 ms is 20 pages a
second. But what’s the user’s bandwidth? Getting
the 10kb page generated 25 ms faster will make little difference to the guy with the
14000 bps modem.
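The modem arithmetic is worth making explicit. In Python, assuming “10kb” means 10 kilobytes and an ideal, overhead-free 14,400 bps link:

```python
# How much does a 25 ms faster page generation matter over a modem?
page_bits = 10 * 1024 * 8          # a 10 KB page, in bits
modem_bps = 14_400                 # ideal 14.4 kbps link, no protocol overhead
download = page_bits / modem_bps   # about 5.7 seconds on the wire

saving = 0.025 / (download + 0.025)
print(f"download {download:.1f} s; 25 ms is {saving:.2%} of the user's wait")
```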
Once you know what your goals are, measure where you’re at. You’d
be amazed at the number of people who come to me asking for help in making their programs
faster who cannot tell me how they’ll know
when they’re done. If you don’t know
what’s fast enough, you could work at it forever.
Then, if it does turn out that you need to stick with a scripting solution, and the script
is the right thing to make faster, look for the big stuff. Remember,
script is glue. The vast majority of the time spent in a typical page
is in either the objects called by the script, or in the Invoke code setting
up the call to the objects. If you had to have one rule of scripting performance,
it’s this: manipulating data is
really bad, and code is even worse. Don’t worry about the Dims, worry about
the calls. Every call to a COM object that you eliminate is worth tens of thousands of those 50-nanosecond micro-optimizations.
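The shape of that optimization can be sketched in Python; the object model here is a made-up stand-in for an expensive late-bound COM or DOM chain:

```python
# Hoisting expensive lookups out of a loop: the one scripting optimization
# that reliably pays. 'document' is a hypothetical stand-in object model.
class Node:
    def __init__(self):
        self.items = [f"item{i}" for i in range(1000)]

document = Node()

# Slow shape: re-resolve the attribute chain on every iteration.
def collect_slow():
    out = []
    for i in range(len(document.items)):
        out.append(document.items[i].upper())
    return out

# Fast shape: resolve once, reuse the local reference.
def collect_fast():
    items = document.items          # one lookup instead of thousands
    return [s.upper() for s in items]

assert collect_slow() == collect_fast()
```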
And don’t forget also that RIGHT is better than FAST. Write
the code to be extremely straightforward. Code that makes sense is code which can
be analyzed and maintained, and that makes it performant. Consider
our “unused Dim”
example — the fact that an unused Dim has
a 50 ns cost is irrelevant. It’s an unused
variable. It’s worthless code. It’s
a distraction to maintenance programmers. That’s
the real performance cost — it makes
it harder for the devs doing the perf analysis to do their jobs well!
Comments:
- MartinJ (October 17, 2003 at 2:47 pm): Quoting “Getting the 10kb page generated 25 ms faster will make little difference to the guy with the 14000 bps modem.” It might be better if you found out how to turn that 10kb page into 6 or 7. That would make many people a lot happier: faster download, more pages/minute, less bandwidth usage.
- Deadprogrammer (October 17, 2003 at 2:51 pm): I’ve seen whole sites written without a single <%=%> because it’s “slower” than response.write. I mean all of the HTML text would be put in strings and then concatenated with variables within a single <%%>. A lot of HTML text.
- Peter Torr (October 17, 2003 at 4:05 pm): One general rule of thumb for slow client-side script is to get rid of the dots. As Eric says, dispatching OM calls is by far the slowest thing the average page does, so you should cache your “nested” objects so you can use “mybutton.name” rather than “window.document.forms.mybutton.name” in a tight loop.
- Bob Riemersma (March 3, 2004 at 9:23 pm): Well, this blog entry is a little moldy at this point, but here’s another opinion. With all of the scriptable power tools lying around I am amazed at the heavy use of scripting “hand tools” followed by pleas for performance improvements. Whenever possible I believe script ought to be used in its (original?) “glue” capacity. I recently saw somebody beg “I have these two CSV files and I want to combine them into one, gimme the code.” One answer given involved FSO I/O, a bunch of Split( )s, a Dictionary object, Join( )s, blah, blah. Basically the guy wanted to do a simple SQL INNER JOIN, so I pointed him to ADO and Jet and the Text Driver, using a Jet SQL INNER JOIN. Shorter script, ran a heck of a lot quicker. Go figure.
- Coding Horror (January 11, 2005 at 10:19 pm): I don’t usually get territorial about modifications to “my” code. First of all, it’s our code. And if you want to change something, be my guest; that’s why God invented source control. But, for the love of all that’s…
- Datagod (June 14, 2007 at 11:47 am): I loved the article, even if it was a few years old. (just kidding…I personally detest lazy coding habits) That being said, I knocked that 50 nanosecond process down to about 5 by simply allowing technology to catch up to the code.