New grad vs senior dev

A student whom I used to occasionally tutor in CS sent me a meme yesterday showing “NEW GRAD vs SENIOR DEVELOPER”; the new grad is all-caps yelling

NO! YOU CAN’T JUST USE BRUTE FORCE HERE! WE NEED TO USE SEGMENT TREES TO GET UPDATE TIME COMPLEXITY DOWN TO O(LOG N)! BREAK THE DATA INTO CHUNKS AT LEAST! OH THE INEFFICIENCY!!!

and the senior developer responds

Ha ha, nested for loops go brrrrrrrrrrrr…

OK, that’s silly and juvenile, but… oh no, I feel a flashback coming on.

It is 1994 and I am a second-year CS student at my first internship at Microsoft, on the Visual Basic compiler team, reading the source code for InStr for the first time. InStr is the function in Visual Basic that takes two strings, call them source and query, and tells you the index at which query first appears as a substring of source; the implementation is naive brute force.

I am shocked to learn this! Shocked, I tell you!

Let me digress slightly here and say what the naive brute force algorithm is for this problem.


Aside: To keep it simple we’ll ignore all the difficulties inherent in this problem entailed by the fact that VB was the first Microsoft product where one version worked everywhere in the world on every version of Windows no matter how Windows was localized; systems that used Chinese DBCS character encodings ran the same VB binary as systems that used European code pages, and we had to support all these encodings plus Unicode UTF-16. As you might imagine, the string code was a bit of a mess. (And cleaning it up in VBScript was one of my first jobs as an FTE in 1996!)

Today for simplicity we’ll just assume we have a flat, zero-terminated array of chars, one character per char as was originally intended.


The extremely naive algorithm for finding one string inside another goes something like this pseudo-C:

bool starts(char *source, char *query)
{
  int i = 0;
  while (query[i] != '\0')
  {
    if (source[i] != query[i])
      return false;
    i = i + 1;
  }
  return true;
}
int find(char *source, char *query)
{
  int i = 0;
  while (source[i] != '\0')
  {
    if (starts(source + i, query))
      return i;
    i = i + 1;
  }
  return -1;  
}

The attentive reader will note that this is the aforementioned nested for loop; I’ve just extracted the nested loop into its own helper method. The extremely attentive reader will have already noticed that I wrote a few bugs into the algorithm above; what are they?

Of course there are many nano-optimizations one can perform on this algorithm if you know a few C tips and tricks; again, we’ll ignore those. It’s the algorithmic complexity I’m interested in here.

The action of the algorithm is straightforward. If we want to know if query “banana” is inside source “apple banana orange” then we ask:

  • does “apple banana orange” start with “banana”? No.
  • does “pple banana orange” start with “banana”? No.
  • does “ple banana orange” start with “banana”? No.
  • … and so on, until:
  • does “banana orange” start with “banana”? Yes! We’re done.

It might not be clear why the naive algorithm is bad. The key is to think about what the worst case is. The worst case would have to be one where there is no match, because that means we have to check the most possible substrings. Of the no-match cases, what are the worst ones? The ones where starts does the most work to return false.  For example, suppose source is “aaaaaaaaaaaaaaaaaaaa” — twenty characters — and query is “aaaab”. What does the naive algorithm do?

  • Does “aaaaaaaaaaaaaaaaaaaa” start with “aaaab”? No, but it takes five comparisons to determine that.
  • Does “aaaaaaaaaaaaaaaaaaa” start with “aaaab”? No, but it takes five comparisons to determine that.
  • … and so on.

In the majority of attempts it takes us the maximum number of comparisons to determine that the source substring does not start with the query. The naive algorithm’s worst case is O(n*m) where n is the length of source and m is the length of the query.
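
If you want to see that blow-up concretely, here is a quick instrumented translation of the naive algorithm into C# that counts character comparisons. This is just my own sketch for illustration, not anything from the VB sources:

using System;

static class NaiveFind
{
  static int comparisons = 0;

  // Does query appear at position offset of source?
  static bool StartsAt(string source, int offset, string query)
  {
    for (int i = 0; i < query.Length; i = i + 1)
    {
      comparisons = comparisons + 1;
      if (source[offset + i] != query[i])
        return false;
    }
    return true;
  }

  // The naive quadratic search: try every offset where query could fit.
  static int Find(string source, string query)
  {
    for (int i = 0; i + query.Length <= source.Length; i = i + 1)
      if (StartsAt(source, i, query))
        return i;
    return -1;
  }

  static void Main()
  {
    Find(new string('a', 20), "aaaab");
    // Sixteen candidate offsets times five comparisons each:
    Console.WriteLine(comparisons); // prints 80
  }
}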

There are a lot of obvious ways to make minor improvements to the extremely naive version above, and in fact the implementation in VB was slightly better; it was basically this:

char* skipto(char *source, char c)
{
  char *result = source;
  while (*result != '\0' && *result != c)
    result = result + 1;
  return result;
}
int find(char *source, char *query)
{
  char *current = skipto(source, query[0]);
  while (*current != '\0;)
  {
    if (starts(current, query))
      return current - source;
    current = skipto(current + 1, query[0]);
  }
  return -1;
}

(WOW, EVEN MORE BUGS! Can you spot them? It’s maybe easier this time.)

This is more complicated but not actually better algorithmically; all we’ve done is moved the initial check in starts that checks for equality of the first letters into its own helper method. In fact, what the heck, this code looks worse. It does more work and is more complicated. What’s going on here? We’ll come back to this.

As I said, I was a second year CS student and (no surprise) a bit of a keener; I had read ahead and knew that there were string finding algorithms that are considerably better than O(n*m). The basic strategy of these better algorithms is to do some preprocessing of the strings to look for interesting features that allow you to “skip over” regions of the source string that you know cannot possibly contain the query string.
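
To give a taste of the skip-ahead idea, here is a minimal sketch of one of the simplest members of that family, the Boyer-Moore-Horspool variant; again, this is my own illustration in C#, not anything from the VB sources:

using System;

static class Horspool
{
  // Returns the first index at which query appears in source, or -1.
  static int Find(string source, string query)
  {
    int m = query.Length;
    if (m == 0)
      return 0;

    // Preprocessing: for each possible character, how far can we slide
    // the query when that character is at the end of the current window?
    var shift = new int[char.MaxValue + 1];
    for (int c = 0; c < shift.Length; c = c + 1)
      shift[c] = m;
    for (int i = 0; i < m - 1; i = i + 1)
      shift[query[i]] = m - 1 - i;

    int pos = 0;
    while (pos + m <= source.Length)
    {
      // Compare the window to the query, last character first.
      int i = m - 1;
      while (i >= 0 && source[pos + i] == query[i])
        i = i - 1;
      if (i < 0)
        return pos;
      // A mismatch lets us skip ahead, possibly by the whole query length.
      pos += shift[source[pos + m - 1]];
    }
    return -1;
  }
}

Notice the table allocation and the extra pass over the query before the search proper begins; hold that thought, because the cost of that preprocessing is going to matter later in this story.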

This is a heavily-studied problem, first because it is obviously a “foundational” problem; finding substrings is useful in many other algorithms. And second, because we genuinely do have extremely difficult problems to solve in this space. “Find this DNA fragment inside this genome”, for example, involves strings that may be billions of characters long with lots of partial matches.

I’m not going to go into the various different algorithms that are available to solve this problem and their many pros and cons; you can read about them on Wikipedia if you’re interested.

Anyways, where was I, oh yes, CS student summer intern vs Senior Developer.

I read this code and was outraged that it was not the most asymptotically efficient possible code, so I got a meeting with Tim Paterson, who had written much of the string library and had the office next to me.

Let me repeat that for those youngsters in the audience here, TIM FREAKIN’ PATERSON. Tim “QDOS” Paterson, who one fine day wrote an operating system, sold it to BillG, and that became MS-DOS, the most popular operating system in the world. As I’ve mentioned before, Tim was very intimidating to young me and did not always suffer foolish questions gladly, but it turned out that in this case he was very patient with all-caps THIS IS INEFFICIENT Eric. More patient than I likely deserved.

As Tim explained to me, first off, the reason why VB does this seemingly bizarre “find the first character match, then check if query is a prefix of source” logic is because the skipto method is not written in the naive fashion that I showed here. The skipto method is a single x86 machine instruction. (REPNE SCASB, maybe? My x86 machine code knowledge was never very good. It was something in the REP family at least.) It is blazingly fast. It harnesses the power of purpose-built hardware to solve the problem of “where’s that first character at?”
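
The same trick survives today in a different costume: modern runtime libraries expose that kind of hardware-accelerated scan through vectorized helpers. Here is my sketch, certainly not the VB code, of the same find-the-first-character-then-verify structure in modern C#, where MemoryExtensions.IndexOf plays the role of the REP-family instruction:

using System;

static class SkipToFind
{
  // Same shape as the VB implementation: a hardware-accelerated scan
  // finds candidate first characters, then each candidate is verified.
  static int Find(string source, string query)
  {
    if (query.Length == 0)
      return 0;
    ReadOnlySpan<char> s = source;
    int pos = 0;
    while (true)
    {
      // The vectorized scan plays the role of skipto / REP SCASB.
      int candidate = s.Slice(pos).IndexOf(query[0]);
      if (candidate < 0)
        return -1;
      pos += candidate;
      if (s.Slice(pos).StartsWith(query.AsSpan()))
        return pos;
      pos = pos + 1;
    }
  }
}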

That explains that; it genuinely is a big perf win to let the hardware do the heavy lifting here. But what about the asymptotic problem? Well, as Tim patiently explained to me, guess what? Most VB developers are NOT asking if “aaaab” can be found in “aaaaaaa…”. The vast majority of VB developers are asking “is ‘London’ anywhere in this address?”, or similar questions where the strings are normal human-language strings without a lot of repetition, and both the source and query strings are short. Like, very short. Less than 100 characters short. Fits into a cache line short.

Think about it this way; most source strings that VB developers are searching have any given character in them maybe 2% of the time, and so for whatever the start character is of the query string, the skipto step is going to find those 2% of possible matches very quickly. And then the starts step is the vast majority of the time going to very quickly identify false matches. In practice the naive brute force algorithm is almost always O(n + m). 

Moreover, Tim explained to me, any solution that involves allocating a table, preprocessing strings, and so on, is going to take longer to do all that stuff than the blazingly-fast-99.9999%-of-the-time brute force algorithm takes to just give you the answer. The additional complexity is simply not worth it in scenarios that are relevant to VB developers. VB developers are developing line-of-business solutions, and their line of business is not typically genomics; if it is, they have special-purpose libraries for those problems; they’re not using InStr.

And we’re back in 2020. I hope you enjoyed that trip down memory lane.

It turns out that yes, fresh grads and keener interns do complain to senior developers about asymptotic efficiency, and senior developers do say “but nested for loops go brrrrrrr” — yes, they go brrrrrr extremely quickly much of the time, and senior developers know that!

And now I am the senior developer, and I try to be patient with the fresh grads as my mentors were patient with me.


UPDATE: Welcome, Hacker News readers. I always know when I’m linked from Hacker News because of the huge but short-lived spike in traffic. The original author of the meme that inspired this post has weighed in. Thanks for inspiring this trip back to a simpler time!

Bandits, victims and idiots

I don’t enjoy politics, I don’t know enough about it, and my privilege greatly insulates me from its negative effects, so I don’t talk about it much on this blog. My intention in creating the blog lo these decades ago was to put a friendly, human, competent public face on my team at Microsoft, and to get information out about languages that was not in the official documentation; it was never intended to be a soapbox. Only a couple of times have I commented on political situations, and today, apparently, will be the third.

I have been thinking many times these last four years, and much more these last few days about the late Italian economic historian Carlo Cipolla. Not because of his economic theories, of which I know very little, but rather because of his theory of stupidity. You can read the principles in brief for yourself at the link above, or the original paper here, but I can summarize thus:

  • Powerful smart people take actions that benefit both themselves and others.
  • Victims lack the power to protect themselves. They are unable to find actions that benefit themselves, and are victimized to the benefit of others.
  • Bandits take actions that benefit themselves at the expense of victims.
  • Stupid idiots take actions that benefit neither themselves nor others.

These are value-laden terms, so let’s be clear here that neither I nor Cipolla is suggesting that victims, bandits or idiots are not intelligent:

  • No matter how intelligent you are and how many precautions you take, you can be victimized by a bandit or an idiot. Victims are not to blame for their victimization. We’ll come back to this in a moment.
  • Bandits are often very intelligent; they just use their skills to victimize others. Whether that’s because they are genuinely not intelligent enough to make a living helping others, or because they are that intelligent but psychologically enjoy being a bandit, or are bandits for other reasons, it doesn’t matter for our purposes. Assume that bandits are extremely intelligent and devious, but motivated by gain.
  • Idiots, ironically, are often very intelligent; a great many idiots have fancy degrees from excellent colleges. As Cipolla points out in his paper, there is no characteristic that identifies idiots other than their inability to act in a way that benefits anyone including themselves. That includes intelligence or lack thereof.

Some key consequences of this model have been on my mind these last few days:

  • Bandits, even the psychopaths, are motivated by self-interest and recognize actions that benefit themselves. You can reason with a bandit, but more importantly, you can reason about a bandit, and therefore you can make use of a bandit. You can make an offer to a powerful bandit and count on them to take it up if it maximizes their gain.
  • You cannot reason with an idiot. You can’t negotiate with them to anyone’s advantage because they will take positions that harm themselves at the same time as they harm others. There are no “useful idiots”; any attempt to use an idiot to benefit yourself will backfire horribly as they manage to find a way for everyone to lose.
  • When the idiots are in power, there is no bright line separating the smart from the victims; rather, there is just a spectrum of more or less power and privilege. Victims by definition lack the power to defend themselves, and the more privileged have no lever to pull to change the course of the idiot, who will act with such brazen disregard for the well-being of everyone including themself that it is hard to devise a strategy.

All this is by way of introduction to say: the position that I am seeing on Twitter and in the media that “soon” is a good time to “re-start the economy” is without question the stupidest, most idiotic position I have ever heard of in my life and that includes “let’s invade Afghanistan for no strategic purpose with no plan on how to ever leave”. There is no way that ends well for anyone, and that includes the billionaires who are temporarily inconvenienced by a slight dip in the flow of cash into their coffers.

I’ll leave you with how Cipolla finishes his essay, because it sums up exactly how I feel at this moment in history.

In a country which is moving downhill […] one notices among those in power an alarming proliferation of the bandits with overtones of stupidity and among those not in power an equally alarming growth in the number of helpless individuals. Such change in the composition of the non-stupid population inevitably strengthens the destructive power of the stupid fraction and makes decline a certainty. And the country goes to Hell.

Working from home

Good Friday afternoon all and welcome to this working-from-home-and-obsessively-washing-hands edition of FAIC.

I am posting today from my recently-transformed spare room, which is now apparently my office. Scott Hanselman started a great Twitter thread of techies showing off their home workspaces; here’s my humble contribution.

[Photo: my home workspace]

We have my work Mac hooked up to two medium-sized HP monitors, one of which cost me all of $20 at a tech thrift store. The Windows game machine is under the desk. You’ll note that I finally found a use for my VSTO 2007 book. The keyboard is the new edition of the Microsoft Natural; my original edition Natural is still on my desk at work and is not currently retrievable.

I am particularly pleased with how the desk came out. I made it myself out of 110-year-old cedar fence boards; when I bought my house in 1997 the original fence was still in the back yard and falling down, so I disassembled it, removed the nails, let the boards dry out, planed them down, and figured I’d eventually do something with them. I’ve been building stuff out of that stock ever since, and this project finished off the last of it.

Here’s a better shot of the desk.

[Photo: the finished desk]

The design is my own but obviously it is just a simple mission-style desk. All the joints are dowel and glue; the only metal is the two screws that hold the two drawer knobs on. The finish is just Danish oil with a little extra linseed oil added.

To the right I have a small writing desk:

[Photo: the small writing desk]

Which, as you may have guessed, doubles as my 1954 Kenmore Zigzag Automatic Sewing Machine:

[Photo: the 1954 Kenmore Zigzag Automatic Sewing Machine]

I have not used it in a while; I used to make kites. I might start again.

The manual for this machine is unintentionally hilarious, but that’s a good topic for another day.

Finally, not shown, I’ve got a futon couch and a few plants to make it cosy.

Stay safe everyone, and hunker down.


UPDATE ONE: Obviously I’ve been spending a great deal of time lately in video chat from home, which is very unusual for me. Unfortunately my setup is such that there is a south-facing window right behind me that overpowers the built-in camera on my laptop even with the curtains drawn.

I took Scott’s advice and got an inexpensive 8-inch ring light, shown here with the room otherwise dark:

[Photo: the ring light, with the room otherwise dark]

The ring light is dimmable LED and has three colour temperatures, so I can now control the specularity of the light directly on my face, and also do some light shaping with the little non-dimmable desk lamp should I wish to. On a bright day the window no longer washes out the webcam image.

Before:

[Photo: webcam image before the ring light]

After:

[Photo: webcam image after the ring light]

The window is still an almost total white-out, but at least I no longer look like a purple-faced Walking Dead extra.

And good heavens do I ever need a haircut. That’ll have to wait.


UPDATE TWO: You cannot tell in the photo above but the ring light was held on to the monitor by a jury-rigged spring clamp, which is none of stable, attractive, or good for the laptop it was clamped to. In the spirit of Adam Savage’s One Day Builds, I give you Eric Lippert’s Five Minute Build:

[Photos: the five-minute ring light mount]

Yes, it is a board (left over from the desk) with three holes in it. Quarter-twenty bolt in the back, wood screws in the front, done in five minutes.

 

Passing awaited tasks

Here’s an interesting question I saw on StackOverflow recently; it was interesting because the answer seems obvious at first, but making a small change to the question makes the answer very different.

The original question was: suppose we have an asynchronous workflow where we need to get an integer to pass to another method. Which of these, if any, is the better way to express that workflow?

Task<int> ftask = FAsync();
int f = await ftask;
M(f);

or

int f = await FAsync();
M(f);

or

M(await FAsync());

?


The answer of course is that all of these are the same workflow; they differ only in the verbosity of the code. You might argue that the code is easier to debug if you have one operation per line. Or you might argue that efficient use of vertical screen space is important for readability, and so the last version is better. There’s not a clear best practice here, so do whatever you think works well for your application.

(If it is not clear to you that these are all the same workflow, remember that “await” does not magically make a synchronous operation into an asynchronous one, any more than “if(M())” makes M() a “conditional operation”. The await operator is just that: an operator that operates on values; the value returned by a method call is a value like any other! I’ll say more about the true meaning of await at the end of this episode.)
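
To make “tasks are values” concrete: the value returned by FAsync can be copied and stored like any other value before it is consumed by an await:

Task<int> ftask = FAsync(); // calling the method starts the operation and gives us a value
Task<int> copy = ftask;     // that value can be copied, stored or passed around like any other
int f = await copy;         // and the await operator consumes it wherever we like
M(f);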

But now suppose we make a small change to the problem. What if instead we have:

M(await FAsync(), await GAsync());

?  This workflow is equivalent to:

Task<int> ftask = FAsync();
int f = await ftask;
Task<int> gtask = GAsync();
int g = await gtask;
M(f, g);

but that causes the start of the GAsync task to be delayed until after the FAsync task finishes! If the execution of GAsync does not depend on the completion of FAsync then we would be better off writing:

Task<int> ftask = FAsync();
Task<int> gtask = GAsync();
int f = await ftask;
int g = await gtask;
M(f, g);

Or equivalently

Task<int> ftask = FAsync();
Task<int> gtask = GAsync();
M(await ftask, await gtask);

and possibly get some additional efficiency in our workflow; if FAsync is for some reason delayed then we can still work on GAsync’s workflow.
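
A side note: ignoring some subtleties around how exceptions are aggregated, the same start-everything-then-await pattern is often written in terms of the standard Task.WhenAll helper:

Task<int> ftask = FAsync();
Task<int> gtask = GAsync();
int[] results = await Task.WhenAll(ftask, gtask);
M(results[0], results[1]);

Either spelling starts both tasks before pausing for either one.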

Always remember when designing asynchronous workflows: an await is by definition a position in the workflow where the workflow pauses (asynchronously!) until a task completes. If it is possible to delay those pauses until later in the workflow, you can sometimes gain very real efficiencies!
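
If you want to see those efficiencies for yourself, here is a small self-contained sketch; the method bodies and the one-second delays are invented purely for illustration. The sequential version takes about two seconds, the concurrent version about one:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class Program
{
  // Stand-ins for real asynchronous work; each simulates one second of latency.
  static async Task<int> FAsync() { await Task.Delay(1000); return 1; }
  static async Task<int> GAsync() { await Task.Delay(1000); return 2; }

  static void M(int f, int g) => Console.WriteLine(f + g);

  static async Task Main()
  {
    var watch = Stopwatch.StartNew();
    // Sequential: GAsync does not even start until FAsync has completed.
    M(await FAsync(), await GAsync());
    Console.WriteLine(watch.Elapsed); // about two seconds

    watch.Restart();
    // Concurrent: both tasks are started before we pause for either one.
    Task<int> ftask = FAsync();
    Task<int> gtask = GAsync();
    M(await ftask, await gtask);
    Console.WriteLine(watch.Elapsed); // about one second
  }
}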