New grad vs senior dev

A student I used to occasionally tutor in CS sent me a meme yesterday showing “NEW GRAD vs SENIOR DEVELOPER”; the new grad is all-caps yelling

NO! YOU CAN’T JUST USE BRUTE FORCE HERE! WE NEED TO USE SEGMENT TREES TO GET UPDATE TIME COMPLEXITY DOWN TO O(LOG N)! BREAK THE DATA INTO CHUNKS AT LEAST! OH THE INEFFICIENCY!!!

and the senior developer responds

Ha ha, nested for loops go brrrrrrrrrrrr…

OK, that’s silly and juvenile, but… oh no, I feel a flashback coming on.

It is 1994 and I am a second-year CS student at my first internship at Microsoft, on the Visual Basic compiler team, reading the source code for InStr for the first time. InStr is the function in Visual Basic that takes two strings, call them source and query, and tells you the index at which query first appears as a substring of source. The implementation is naive brute force.

I am shocked to learn this! Shocked, I tell you!

Let me digress slightly here and say what the naive brute force algorithm is for this problem.


Aside: To keep it simple we’ll ignore all the difficulties this problem entails given that VB was the first Microsoft product where one version worked everywhere in the world, on every version of Windows, no matter how Windows was localized; systems that used Chinese DBCS character encodings ran the same VB binary as systems that used European code pages, and we had to support all these encodings plus Unicode UTF-16. As you might imagine, the string code was a bit of a mess. (And cleaning it up in VBScript was one of my first jobs as an FTE in 1996!)

Today for simplicity we’ll just assume we have a flat, zero-terminated array of chars, one character per char as was originally intended.


The extremely naive algorithm for finding one string inside another goes something like this pseudo-C:

bool starts(char *source, char *query)
{
  int i = 0;
  while (query[i] != '\0')
  {
    if (source[i] != query[i])
      return false;
    i = i + 1;
  }
  return true;
}
int find(char *source, char *query)
{
  int i = 0;
  while (source[i] != '\0')
  {
    if (starts(source + i, query))
      return i;
    i = i + 1;
  }
  return -1;  
}

The attentive reader will note that this is the aforementioned nested for loop; I’ve just extracted the nested loop into its own helper method. The extremely attentive reader will have already noticed that I wrote a few bugs into the algorithm above; what are they?

Of course there are many nano-optimizations one can perform on this algorithm if you know a few C tips and tricks; again, we’ll ignore those. It’s the algorithmic complexity I’m interested in here.

The action of the algorithm is straightforward. If we want to know if query “banana” is inside source “apple banana orange” then we ask:

  • does “apple banana orange” start with “banana”? No.
  • does “pple banana orange” start with “banana”? No.
  • does “ple banana orange” start with “banana”? No.
  • … and so on, until:
  • does “banana orange” start with “banana”? Yes! We’re done; the query starts at index 6.

It might not be clear why the naive algorithm is bad. The key is to think about what the worst case is. The worst case would have to be one where there is no match, because that means we have to check the most possible substrings. Of the no-match cases, what are the worst ones? The ones where starts does the most work to return false.  For example, suppose source is “aaaaaaaaaaaaaaaaaaaa” — twenty characters — and query is “aaaab”. What does the naive algorithm do?

  • Does “aaaaaaaaaaaaaaaaaaaa” start with “aaaab”? No, but it takes five comparisons to determine that.
  • Does “aaaaaaaaaaaaaaaaaaa” start with “aaaab”? No, but it takes five comparisons to determine that.
  • … and so on.

In the majority of attempts it takes us the maximum number of comparisons to determine that the source substring does not start with the query. The naive algorithm’s worst case is O(n*m) where n is the length of source and m is the length of the query.
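
To make that worst case concrete, here is a quick sketch of my own (an instrumented variant of the naive code above, not anything that ever shipped) that counts character comparisons on that pathological input:

#include <stdio.h>

static long comparisons = 0;

static int starts(const char *source, const char *query)
{
  int i = 0;
  while (query[i] != '\0')
  {
    comparisons = comparisons + 1;
    if (source[i] != query[i])
      return 0;
    i = i + 1;
  }
  return 1;
}

static int find(const char *source, const char *query)
{
  if (query[0] == '\0')   /* an empty query matches at index 0 */
    return 0;
  int i = 0;
  while (source[i] != '\0')
  {
    if (starts(source + i, query))
      return i;
    i = i + 1;
  }
  return -1;
}

int main(void)
{
  /* Twenty 'a's vs "aaaab": no match, but nearly every attempt does
     the maximum amount of work before failing. */
  printf("%d\n", find("aaaaaaaaaaaaaaaaaaaa", "aaaab"));  /* -1 */
  printf("%ld comparisons\n", comparisons);  /* roughly 20 * 5 comparisons */
  return 0;
}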

There are a lot of obvious ways to make minor improvements to the extremely naive version above, and in fact the implementation in VB was slightly better; it was basically this:

char* skipto(char *source, char c)
{
  char *result = source;
  while (*result != '\0' && *result != c)
    result = result + 1;
  return result;
}
int find(char *source, char *query)
{
  char *current = skipto(source, query[0]);
  while (*current != '\0')
  {
    if (starts(current, query))
      return current - source;
    current = skipto(current + 1, query[0]);
  }
  return -1;
}

(WOW, EVEN MORE BUGS! Can you spot them? It’s maybe easier this time.)

This is more complicated but not actually better algorithmically; all we’ve done is moved the initial check in starts that checks for equality of the first letters into its own helper method. In fact, what the heck, this code looks worse. It does more work and is more complicated. What’s going on here? We’ll come back to this.

As I said, I was a second year CS student and (no surprise) a bit of a keener; I had read ahead and knew that there were string finding algorithms that are considerably better than O(n*m). The basic strategy of these better algorithms is to do some preprocessing of the strings to look for interesting features that allow you to “skip over” regions of the source string that you know cannot possibly contain the query string.
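
Just to give the flavour of that strategy, here is a minimal sketch, mine and purely for illustration, of the “bad character” skip used by the Boyer-Moore-Horspool algorithm: the preprocessing pass records how far the query can safely be shifted when a given character sits at the end of the current window, and the scan uses those distances to hop over chunks of the source.

#include <limits.h>
#include <string.h>

/* Illustrative sketch of Boyer-Moore-Horspool; not production code. */
int find_horspool(const char *source, const char *query)
{
  size_t n = strlen(source);
  size_t m = strlen(query);
  if (m == 0)
    return 0;
  if (m > n)
    return -1;

  /* Preprocessing: for each character, how far can we shift the query
     when that character is the last character of the current window? */
  size_t skip[UCHAR_MAX + 1];
  for (size_t c = 0; c <= UCHAR_MAX; c++)
    skip[c] = m;
  for (size_t i = 0; i + 1 < m; i++)
    skip[(unsigned char)query[i]] = m - 1 - i;

  /* Scan: compare each window right-to-left; on a mismatch, shift by
     the skip distance of the window's last character. */
  size_t pos = 0;
  while (pos + m <= n)
  {
    size_t i = m;
    while (i > 0 && source[pos + i - 1] == query[i - 1])
      i--;
    if (i == 0)
      return (int)pos;
    pos += skip[(unsigned char)source[pos + m - 1]];
  }
  return -1;
}

This particular variant still has an O(n*m) worst case; the point is only to show the shape of the preprocess-then-skip approach that the better algorithms share.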

This is a heavily studied problem because, first, it is obviously a “foundational” problem (finding substrings is useful in many other algorithms), and second, because we genuinely do have extremely difficult problems to solve in this space. “Find this DNA fragment inside this genome”, for example, involves strings that may be billions of characters long with lots of partial matches.

I’m not going to go into the various different algorithms that are available to solve this problem and their many pros and cons; you can read about them on Wikipedia if you’re interested.

Anyways, where was I, oh yes, CS student summer intern vs Senior Developer.

I read this code and was outraged that it was not the most asymptotically efficient possible code, so I got a meeting with Tim Paterson, who had written much of the string library and had the office next to me.

Let me repeat that for those youngsters in the audience here, TIM FREAKIN’ PATERSON. Tim “QDOS” Paterson, who one fine day wrote an operating system, sold it to BillG, and that became MS-DOS, the most popular operating system in the world. As I’ve mentioned before, Tim was very intimidating to young me and did not always suffer foolish questions gladly, but it turned out that in this case he was very patient with all-caps THIS IS INEFFICIENT Eric. More patient than I likely deserved.

As Tim explained to me, first off, the reason why VB does this seemingly bizarre “find the first character match, then check if query is a prefix of source” logic is because the skipto method is not written in the naive fashion that I showed here. The skipto method is a single x86 machine instruction. (REPNE SCASB, maybe? My x86 machine code knowledge was never very good. It was something in the REP family at least.) It is blazingly fast. It harnesses the power of purpose-built hardware to solve the problem of “where’s that first character at?”
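
For modern readers: you can get much the same effect without hand-rolling the loop by leaning on the C standard library; strchr and memchr are typically implemented with exactly this sort of REP-family or vectorized scan. Here is a sketch of skipto written that way; this is my illustration, not the VB source:

#include <string.h>

/* A sketch of skipto on top of the standard library.  strchr does the
   scan; library implementations typically use REP-family or vectorized
   instructions, so "where's that first character at?" stays very fast. */
const char *skipto(const char *source, char c)
{
  const char *result = strchr(source, c);
  /* Preserve the original contract: if c does not occur, return a
     pointer to the terminating '\0' rather than NULL. */
  return result != NULL ? result : source + strlen(source);
}

That keeps the original contract of skipto (a pointer to the match, or to the terminating zero when there is none), so find can use it unchanged.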

That explains that; it genuinely is a big perf win to let the hardware do the heavy lifting here. But what about the asymptotic problem? Well, as Tim patiently explained to me, guess what? Most VB developers are NOT asking whether “aaaab” can be found in “aaaaaaa…”. The vast majority of VB developers are asking whether “London” appears anywhere in this address, or similar questions where the strings are normal human-language strings without a lot of repetition, and both the source and query strings are short. Like, very short. Less than 100 characters short. Fits into a cache line short.

Think about it this way: most source strings that VB developers are searching contain any given character maybe 2% of the time, so whatever the first character of the query string is, the skipto step is going to find those 2% of candidate positions very quickly. And then, the vast majority of the time, the starts step is going to very quickly reject the false matches. In practice the naive brute force algorithm is almost always O(n + m).

Moreover, Tim explained to me, any solution that involves allocating a table, preprocessing strings, and so on, is going to take longer to do all that stuff than the blazingly-fast-99.9999%-of-the-time brute force algorithm takes to just give you the answer. The additional complexity is simply not worth it in scenarios that are relevant to VB developers. VB developers are developing line-of-business solutions, and their line of business is not typically genomics; if it is, they have special-purpose libraries for those problems; they’re not using InStr.

And we’re back in 2020. I hope you enjoyed that trip down memory lane.

It turns out that yes, fresh grads and keener interns do complain to senior developers about asymptotic efficiency, and senior developers do say “but nested for loops go brrrrrrr” — yes, they go brrrrrr extremely quickly much of the time, and senior developers know that!

And now I am the senior developer, and I try to be patient with the fresh grads as my mentors were patient with me.


UPDATE: Welcome, Hacker News readers. I always know when I’m linked from Hacker News because of the huge but short-lived spike in traffic. The original author of the meme that inspired this post has weighed in. Thanks for inspiring this trip back to a simpler time!

Bandits, victims and idiots

I don’t enjoy politics, I don’t know enough about it, and my privilege greatly insulates me from its negative effects, and so I don’t talk about it much on this blog. My intention in creating the blog lo these decades ago was to make a friendly, human, competent public face for my team at Microsoft, and to get information out about languages that was not in the official documentation; it was not ever intended to be a soapbox. Only a couple of times have I commented on political situations, and today, apparently, will be the third.

I have been thinking many times these last four years, and much more these last few days about the late Italian economic historian Carlo Cipolla. Not because of his economic theories, of which I know very little, but rather because of his theory of stupidity. You can read the principles in brief for yourself at the link above, or the original paper here, but I can summarize thus:

  • Powerful smart people take actions that benefit both themselves and others.
  • Victims lack the power to protect themselves. They are unable to find actions that benefit themselves, and are victimized to the benefit of others.
  • Bandits take actions that benefit themselves at the expense of victims.
  • Idiots take actions that benefit neither themselves nor others.

These are value-laden terms, so let’s be clear here that neither I nor Cipolla is suggesting that victims, bandits or idiots lack intelligence:

  • No matter how intelligent you are and how many precautions you take, you can be victimized by a bandit or an idiot. Victims are not to blame for their victimization. We’ll come back to this in a moment.
  • Bandits are often very intelligent; they just use their skills to victimize others. Whether that’s because they are genuinely not intelligent enough to make a living helping others, or because they are that intelligent but psychologically enjoy being a bandit, or are bandits for other reasons, it doesn’t matter for our purposes. Assume that bandits are extremely intelligent and devious, but motivated by gain.
  • Idiots, ironically, are often very intelligent; a great many idiots have fancy degrees from excellent colleges. As Cipolla points out in his paper, there is no characteristic that identifies idiots other than their inability to act in a way that benefits anyone including themselves. That includes intelligence or lack thereof.

Some key consequences of this model have been on my mind these last few days:

  • Bandits, even the psychopaths, are motivated by self-interest and recognize actions that benefit themselves. You can reason with a bandit, but more importantly, you can reason about a bandit, and therefore you can make use of a bandit. You can make an offer to a powerful bandit and count on them to take it up if it maximizes their gain.
  • You cannot reason with an idiot. You can’t negotiate with them to anyone’s advantage because they will take positions that harm themselves at the same time as they harm others. There are no “useful idiots”; any attempt to use an idiot to benefit yourself will backfire horribly as they manage to find a way for everyone to lose.
  • When the idiots are in power, there is no bright line separating the smart from the victims; rather, there is just a spectrum of more or less power and privilege. Victims by definition lack the power to defend themselves, and the more privileged have no lever to pull to change the course of the idiot, who will act with such brazen disregard for the well-being of everyone including themself that it is hard to devise a strategy.

All this is by way of introduction to say: the position that I am seeing on Twitter and in the media that “soon” is a good time to “re-start the economy” is without question the stupidest, most idiotic position I have ever heard of in my life and that includes “let’s invade Afghanistan for no strategic purpose with no plan on how to ever leave”. There is no way that ends well for anyone, and that includes the billionaires who are temporarily inconvenienced by a slight dip in the flow of cash into their coffers.

I’ll leave you with how Cipolla finishes his essay, because it sums up exactly how I feel at this moment in history.

In a country which is moving downhill […] one notices among those in power an alarming proliferation of the bandits with overtones of stupidity and among those not in power an equally alarming growth in the number of helpless individuals. Such change in the composition of the non-stupid population inevitably strengthens the destructive power of the stupid fraction and makes decline a certainty. And the country goes to Hell.

Working from home

Good Friday afternoon all and welcome to this working-from-home-and-obsessively-washing-hands edition of FAIC.

I am posting today from my recently-transformed spare room which is now apparently my office. Scott Hanselman started a great twitter thread of techies showing off their home workspaces; here’s my humble contribution.

(photo: my home office)

We have my work Mac hooked up to two medium-sized HP monitors, one of which cost me all of $20 at a tech thrift store. The Windows game machine is under the desk. You’ll note that I finally found a use for my VSTO 2007 book. The keyboard is the new edition of the Microsoft Natural; my original edition Natural is still on my desk at work and is not currently retrievable.

I am particularly pleased with how the desk came out. I made it myself out of 110-year-old cedar fence boards; when I bought my house in 1997 the original fence was still in the back yard and falling down, so I disassembled it, removed the nails, let the boards dry out, planed them down, and figured I’d eventually do something with them. I’ve been building stuff out of that stock ever since, and this project finished off the last of it.

Here’s a better shot of the desk.

(photo: the desk)

The design is my own but obviously it is just a simple mission-style desk. All the joints are dowel and glue; the only metal is the two screws that hold the two drawer knobs on. The finish is just Danish oil with a little extra linseed oil added.

To the right I have a small writing desk:

(photo: the writing desk)

Which as you may have guessed doubles as my 1954 Kenmore Zigzag Automatic Sewing Machine:

(photo: the sewing machine)

I have not used it in a while; I used to make kites. I might start again.

The manual for this machine is unintentionally hilarious, but that’s a good topic for another day.

Finally, not shown, I’ve got a futon couch and a few plants to make it cosy.

Stay safe everyone, and hunker down.

Hundred year mistakes

My manager and I got off on a tangent in our most recent one-on-one on the subject of the durability of design mistakes in programming languages. A particular favourite of mine is the worst of the operator precedence problems of C; the story of how it came about is an object lesson in how sometimes gradual evolution produces weird results. Since he was not conversant with all the details, I thought I might write it up and share the story today.

First off, what is the precedence of operators in C? For our purposes today we’ll consider just three operators: &&, & and ==, which I have listed in order of increasing precedence.

What is the problem? Consider:

int x = 0, y = 1, z = 0;
int r = (x & y) == z; // 1
int s = x & (y == z); // 0
int t = x & y == z;   // ?

Remember that before 1999, C had no Boolean type and that the result of a comparison is either zero for false, or one for true.

Is t supposed to equal r or s?

Many people are surprised to find out that t is equal to s! Because == is higher precedence than &, the comparison result is an input to the &, rather than the & result being an input to the comparison.

Put another way: reasonable people think that

x & y == z

should be parsed the same as

x + y == z

but it is not.

What is the origin of this egregious error that has tripped up countless C programmers? Let’s go way back in time to the very early days of C. In those days there was no && operator. Rather, if you wrote

if (x() == y & a() == b)
  consequence;

the compiler would generate code as though you had used the && operator; that is, this had the same semantics as

if (x() == y)
  if (a() == b)
    consequence;

so that a() is not called if the left hand side of the & is false. However, if you wrote

int z = q() & r();

then both sides of the & would be evaluated, and the results would be binary-anded together.

That is, the meaning of & was context-sensitive; in the condition of an if or while it meant what we now call &&, the “lazy” form, and everywhere else it meant the bitwise and, the “eager” form.

However, in either context the & operator was lower precedence than the == operator. We want

if(x() == y & a() == b)

to be

if((x() == y) & (a() == b))

and certainly not

if((x() == (y & a())) == b)

This context-sensitive design was quite rightly criticized as confusing, and so Dennis Ritchie, the designer of C, added the && operator, so that there were now separate operators for bitwise-and and short-circuit-and.

The correct thing to do at this point from a pure language design perspective would have been to make the operator precedence ordering &&, ==, &. This would mean that both

if(x() == y && a() == b)

and

if(x() & a() == y)

would mean exactly what users expected.

However, Ritchie pointed out that doing so would cause a potential breaking change. Any existing program that had the fragment if(a == b & c == d) would remain correct if the precedence order was &&, &, ==, but would become an incorrect program if the operator precedence was changed without also updating it to use &&.
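
To see concretely what would have broken, compare how that fragment parses under the two candidate orderings (listed, as before, from lowest to highest precedence):

/* Precedence &&, &, == (what C actually does):                   */
/*   if (a == b & c == d)   parses as   if ((a == b) & (c == d))  */
/*   which is the meaning the existing programs relied on.        */

/* Precedence &&, ==, & (the "correct" design):                   */
/*   if (a == b & c == d)   parses as   if ((a == (b & c)) == d)  */
/*   which silently changes the meaning of every such program.    */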

There were several hundred kilobytes of existing C source code in the world at the time. SEVERAL HUNDRED KB. What if you made this change to the compiler and failed to update one of the & to &&, and made an existing program wrong via a precedence error? That’s a potentially disastrous breaking change.

You might say “just search all the source code for that pattern”, but this was two years before grep was invented! Tooling back then was as primitive as could be.

So Ritchie maintained backwards compatibility forever, making the precedence order &&, &, == and effectively adding a little bomb to C that goes off every time someone treats & as though it parses like +, all to stay compatible with a version of C that only a handful of people ever used.

But wait, it gets worse.

C++, Java, JavaScript, C#, PHP and who knows how many other languages largely copied the operator precedence rules of C, so they all have this bomb in them too. (Swift, Go, Ruby and Python get it right.) Fortunately it is mitigated somewhat in languages that impose type system constraints; in C# it is an error to treat an int as a bool, but still it is vexing to require parentheses where they ought not to be necessary were there justice in the world. (And the problem is also mitigated in more modern languages by providing richer abstractions that obviate the need for frequent bit-twiddling.)

The moral of the story is: The best time to make a breaking change that involves updating existing code is now, because the bad designs that result from maintaining backwards compat unnecessarily can have repercussions for decades, and the amount of code to update is only going to get larger. It was a mistake to not take the breaking change when there were only a few tens of thousands of lines of C code in the world to update. It’s fifty years since this mistake was made, and since it has become embedded in popular successor languages we’ll be dealing with its repercussions for fifty more at least, I’d wager.


UPDATE: The most common feedback I’ve gotten from this article is “you should always use parentheses when it is unclear”. Well, obviously, yes. But that rather misses the point, which is that there is no reason for the novice developer to suppose that the expression x & y == z is under-parenthesized when x + y == z works as expected. The design of a language should lead us to naturally write correct code without having to think “will I be punished for my arrogance in believing that code actually does what it looks like it ought to?” 

Building a fake company

Well this is a first.

Twitter user Plazmaz brought a scam GitHub repository and web site to my attention; see his thread on Twitter for details. It’s a pretty obviously fake site, and there is some evidence in the metadata Plazmaz uncovered that indicates it is a university cybersecurity student project — or, that the scammers want investigators to think that it is.

The reason it was brought to my attention is because the authors of the site used a photo from this blog as part of their scheme! The scammer blog post is here and my original is here.

If this is a university project: please do not teach your students that it is acceptable to use other people’s work in your coursework without attribution or permission. You would not tolerate students passing off someone else’s work as their own in other academic pursuits.

If this is a scam then the fact that they’re using a stolen photo — and one that is easily seen to be stolen! — as part of their scheme might seem like a flaw, but in fact it is a feature of the scam. The scammers are looking for unsophisticated and gullible people who will be easily fooled; making the deception easy to uncover is therefore a filter that excludes people of normal gullibility from the pool of possible victims. This great paper from Microsoft Research goes into the math.

A Picard Easter egg

While watching the first episode of the new Star Trek series just now I noticed a nice little Easter egg:

Admiral Picard (retired) apparently has the same 1982 science fiction book club edition of The Complete Robot handy on his desk as I have on mine:

though frankly, his copy seems to be in better shape than mine.

Anyone know what the book below it is?


UPDATE: My friend Brian R has identified a likely candidate for the second book. It appears to be the Easton Press edition of The Three Musketeers:

(photo: the Easton Press edition of The Three Musketeers)


UPDATE: Later episodes of the series confirm these hypotheses; apparently these were not so much Easter eggs as subtle foreshadowing.

Work and success

One last post for this decade.

There has been some discussion on tech twitter lately on the subject of whether it is possible to be “successful” in the programming business without working long hours. I won’t dignify the posts which started this conversation off — firmly in the “not possible” camp — with a link; you can find them easily enough I suspect.

My first thought upon seeing this discussion was “well that’s just dumb”. The whole thing struck me as basically illogical for two reasons. First, because it was vague; “success” is relative to goals, and everyone has different goals. Second, because any universal statement like “the only way to achieve success in programming is by working long hours” can be refuted by a single counterexample, and I am one! My career has been a success so far; I’ve worked on interesting technology, mentored students, made friends along the way, and been well compensated. But I have worked long hours only very rarely; just a handful of times in 23 years.

Someone said something dumb on the internet, the false universal statement was directly refuted by me in a devastatingly logical manner just now, and we can all move on, right?

Well, no.

My refutation — my personal, anecdotal refutation — answers in the affirmative the question “Is it possible for any one computer programmer, anywhere in the world right now, to be successful without working long hours?” but that is not an interesting or relevant question. My first thought was also pretty dumb.

Can we come up with some better questions? Let’s give it a shot. I’ll start with the personal and move to the general.


We’ve seen that long hours were not a necessary precondition to my success. What were the sufficient preconditions?

I was born into a middle-class, educated family in Canada. I had an excellent public education with teachers who were experts in their fields and genuinely cared about their students. I used family connections to get good high school jobs with strong mentors. Scholarships, internships, a supportive family and some talent for math allowed me to graduate from university with in-demand skills and no debt, with a career waiting for me, not just a job. I’ve been in good health my whole life. When I had problems I had access to professionals who helped me, and who were largely paid by insurance.

Did I work throughout all of that? Sure! Was it always easy? No! But my privileged background enabled me to transform working reasonable hours at a desk into success.

Now it is perhaps more clear why my “refutation” was so dumb, and that brings us to our next better question:


If we subtract some of those privileges, does it become more and more likely that working long hours becomes a necessary precondition for success in our business?

If you’re starting on a harder difficulty level — starting from poverty, without industry or academic connections, if you’re self-taught, if you’re facing the headwinds of discrimination, prejudice or harassment, if you have legal or medical or financial or family problems to solve on top of work problems — there are not that many knobs you can turn that increase your chance of success. It seems reasonable that “work more hours” is one of those knobs you can turn much more easily than “get more industry contacts”.

The original statement is maybe a little too strong, but what if we weaken it a bit? Maybe to something like “working long hours is a good idea in this business because it greatly increases your chances of success, particularly if you’re facing a headwind.” What if we charitably read the original statement more like that?

This is a statement that might be true or it might be false. We could do research to find out — and indeed, there is some research suggesting that there is no clear causal link between working more hours and being more successful. But the point here is that the weakened statement is at least not immediately refutable.

This then leads us from a question about how the world is to how it ought to be, but I’m going to come back to that one. Before that I want to dig in a bit more to the original statement, not from the point of view of correctness, or even plausibility, but from the point of view of who benefits by making the statement.


Suppose we all take to heart the advice that we should be working longer to achieve success. Who benefits?

I don’t know the people involved, and I don’t like to impute motives to people I don’t know. I encourage people to read charitably. But I am having a hard time believing the apologia I outlined in the preceding section was intended. The intended call to action here was not “let’s all think about how structural issues in our economy and society incent workers from less privileged backgrounds to work longer hours for the same pay.” Should we think about that? Yes. But that was not the point. The point being made was a lot simpler.

The undeniable subtext to “you need to work crazy hours to succeed” is “anyone not achieving success has their laziness to blame; they should have worked harder, and you don’t want to be like them, do you?”

That is propaganda. When you say the quiet part out loud, it sounds more like “the income of the idle rich depends on capturing the value produced by the labours of everyone else, so make sure you are always producing value that they can capture. Maybe they will let you see some of that value, someday.” 

Why would anyone choose to produce value to be confiscated by billionaires? Incentives matter and the powerful control the incentives. Success is the carrot; poverty and/or crippling debt is the stick.

Those afforded less privilege get more and more of the stick. If hard work and long hours could be consistently transformed into “success”, then my friends and family who are teachers, nurses, social workers and factory workers would be far more successful than I am. They definitely work both longer and harder than I do, but they have far less ability to transform that work into success.

That to me is the real reason to push back on the notion that long hours and hard work are a necessary precondition of success: not because it is false but because it is propaganda in service of weakening further the less privileged. “It is proper and desirable to weaken the already-weak in order to further strengthen the already-strong” is as good a working definition of “evil” as you’re likely to find.

The original statement isn’t helpful advice. It isn’t a rueful commentary on disparity in the economy. It’s a call to produce more profit now in return for little more than a vague promise of possible future success. 


Should long hours be a precondition for success for anyone irrespective of their privileges?

First off, I would like to see a world where everyone started with a stable home, food on the table, a high quality education, and so on, and I believe we should be working towards that end as a society, and as a profession.

We’re not there, and I don’t know how to get there. Worse, there are powerful forces that prefer increasing disparities rather than reducing them.

Software is in many ways unique. It’s the reification of algorithmic thought. It has effectively zero marginal costs. The industry is broad and affords contributions from people at many skill levels and often irrespective of location. The tools that we build amplify others’ abilities. And we build better tools for the world when the builders reflect the diversity of that world.

I would much rather see a world in which anyone with the interest in this work could be as successful as I have been, than this world where the majority have to sacrifice extra time and energy in the service of profits they don’t share in.

Achieving that will be hard, and like I said, I don’t know how to effect a structural change of this magnitude. But we can at least start by recognizing propaganda when we see it, and calling it out.


I hate to end the decade on my blog on such a down note, but 2020 is going to be hard for a lot of people, and we are all going to hear a lot of propaganda. Watch out for it, and don’t be fooled by it.

If you’re successful, that’s great; I am very much in favour of success. See if you can use some of your success in 2020 to increase the chances for people who were not afforded all the privileges that turned your work into that success.

Happy New Year all; I hope we all do good work that leads to success in the coming year. We’ll pick up with some more fabulous adventures in coding in 2020.


Thanks to my friend @editorlisaquinn for her invaluable assistance in helping me clarify my thoughts for this post.