I’ve recently been looking into a fascinating corner of mathematics that at first glance appears a little bit silly, but actually has far-reaching applications, from physics to numerical methods to machine learning. I thought I’d share what I’ve learned over the next few episodes.
I assume you recall what a complex number is, but perhaps not all of the details. A complex number is usually introduced as a pair of real numbers (a, b), where a is called the “real part” and b is called the “imaginary part”.
A brief aside: it has always bugged me that these labels are unnecessarily value-laden. There is no particular “reality” that is associated with the real part; it is every bit as “imaginary” as the imaginary part. They might as well be called the “rezrov part” and the “gnusto part”, but we’re stuck with “real” and “imaginary”. Moving on.
It would be perfectly sensible to notate these as a pair (a, b) — and indeed, when we characterize complex numbers as points on the complex plane, we often do. But it is also convenient to notate complex numbers in an algebraic form a+bi, where the i indicates the imaginary part.
Finally, we have well-known rules for adding and multiplying complex numbers. The fundamental rule is that (0+1i)² is equal to -1+0i, and the rest of the rules follow from there. For complex numbers:

(a+bi) + (c+di) = (a+c) + (b+d)i

(a+bi)(c+di) = (ac-bd) + (ad+bc)i
Division is a little bit tricky and I won’t write it all out here; look it up if you do not remember. We can similarly go on to derive rules for complex exponentiation, trigonometry, and so on.
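The addition and multiplication rules above are easy to check mechanically; here is a quick sanity check using Python’s built-in complex type, where 1j plays the role of i. (This is just an illustration; the variable names are my own.)

```python
# Verify the complex arithmetic rules with Python's built-in complex type.
a, b, c, d = 2.0, 3.0, 5.0, 7.0
x = complex(a, b)   # a + bi
y = complex(c, d)   # c + di

# Addition rule: (a+bi) + (c+di) = (a+c) + (b+d)i
assert x + y == complex(a + c, b + d)

# Multiplication rule: (a+bi)(c+di) = (ac-bd) + (ad+bc)i
assert x * y == complex(a * c - b * d, a * d + b * c)

# The fundamental rule: i squared is -1.
assert 1j * 1j == complex(-1, 0)
```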
Let’s switch gears slightly. It is very common in mathematics, and especially calculus, to reason about a quantity “plus a little bit”. That is, any old quantity, plus some positive amount that is not zero, but is much smaller than one, and really, really small compared to the original quantity. What “really, really small” means is of course dependent on context; maybe one-millionth of the original quantity is small. Maybe one-millionth is still pretty big, but one-trillionth, or one part in a googolplex, or whatever, is “really small” in our context. I’m being deliberately vague and hand-wavy here; just run with it.
Let’s call our really small positive quantity ε, which is the lower-case Greek letter epsilon; it is traditionally used in calculus to represent a very small quantity. Let’s suppose our value is a, and we have a very small amount added to it, so that would be a+ε. Hmm. What if we doubled that quantity? Plainly that would be 2a+2ε. So scalar multiplication works as you’d expect. It seems reasonable that if we can have 2ε then we can have 3ε or for that matter bε for any real value of b. So once again we have an (a, b) pair of numbers representing a quantity, which we notate as a+bε. What are the algebraic rules of this thing? It certainly makes sense that addition should work as we expect, the same as complex addition:

(a+bε) + (c+dε) = (a+c) + (b+d)ε
This should match our intuitions. The sum of “a plus a bit” and “c plus a bit” should be “a + c plus a bit” if there is justice in the world. What about multiplication? Let’s just multiply it out using ordinary algebra:
(a+bε)(c+dε) = ac + (ad+bc)ε + bdε²

We seem to have introduced an ε² term there. What are we going to do with that? Remember, ε is not zero but is very, very small indeed, so that thing squared must be super tiny. So tiny that we can just ignore it entirely! (Remember, I am hand-waving egregiously here.) Let’s just say that ε² is by definition zero, just as we said that i² is by definition -1, and see what happens. So we have:

(a+bε)(c+dε) = ac + (ad+bc)ε
Again, this should match our intuitions. The product of something a bit bigger than a with something a bit bigger than c should be a bit bigger than the product of a and c.
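The addition and multiplication rules can be sketched as a tiny Python class. The name Dual and its field names are my own invention for illustration; the C# version comes later.

```python
# A minimal sketch of dual numbers a + b*epsilon as a Python class.
class Dual:
    def __init__(self, re, ep):
        self.re = re  # the "a" part
        self.ep = ep  # the coefficient of epsilon, the "b" part

    def __add__(self, other):
        # (a+bε) + (c+dε) = (a+c) + (b+d)ε
        return Dual(self.re + other.re, self.ep + other.ep)

    def __mul__(self, other):
        # (a+bε)(c+dε) = ac + (ad+bc)ε, because ε² is zero by definition.
        return Dual(self.re * other.re,
                    self.re * other.ep + self.ep * other.re)

    def __repr__(self):
        return f"{self.re} + {self.ep}ε"
```

For example, Dual(2, 3) * Dual(5, 7) gives 10 + 29ε, since ac = 10 and ad+bc = 14+15 = 29.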
What about division? Division by zero is illegal, and division by zero plus a tiny amount I think should also be illegal. Seems reasonable. What then is (a+bε)/(c+dε) when c is not zero? Well, if it is not zero then we can multiply top and bottom by c-dε:

(a+bε)/(c+dε) = ((c-dε)(a+bε))/((c-dε)(c+dε)) = (ac + (bc-da)ε)/c² = a/c + ((bc-da)/c²)ε
Once again our intuition is satisfied: the quotient of “something close to a” divided by “something close to c” is close to the quotient of a and c.
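The quotient rule we just derived can be written as a standalone function on (a, b) pairs; divide_duals is a name I made up for this sketch.

```python
# Compute (a+bε)/(c+dε) as a pair (real part, epsilon coefficient),
# using the quotient rule a/c + ((bc - ad)/c²)ε.
def divide_duals(a, b, c, d):
    if c == 0:
        raise ZeroDivisionError("cannot divide by a dual with zero real part")
    return (a / c, (b * c - a * d) / (c * c))
```

As a consistency check, (2+3ε)(5+7ε) = 10+29ε, and divide_duals(10, 29, 5, 7) returns (2.0, 3.0), recovering the other factor.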
This is a pretty neat number system; it is called the dual number system and it was introduced to the world in 1873 by the English mathematician William Clifford. So far it might look like a mere curiosity, but it turns out to be extremely useful.
Next time on FAIC: we will implement the dual number system in C#, and discover a surprising property.
Your multiplication rule for complex numbers is missing a ‘c’ I think.
(a+bi)(c+di) = (ac-bd) + (ad+bC)i
“They might as well be called the “rezrov part” and the “gnusto part”.”
Are you? Are you really?
“There is no particular “reality” that is associated with the real part”
Actually there is, when you think that R, the ‘set of real numbers’ is just the subset of complex numbers (a+bi) that happen to have b=0, i.e. complex numbers with only ‘real part’.
Of course, you could always argue why R are called ‘real numbers’ and not ‘rezrov numbers’, but I guess that would amount to asking why you are called Eric, and not Mahatma 😉
(OT rant) In fact, I’ve always found the ‘real’ part of complex numbers the most ‘real’ part of the whole concept. I mean, I’ve always had a gut-feeling that complex numbers were an artificial mathematical construct; very useful, indeed, but artificial nonetheless. I mean, you are first taught that square root of negative is ‘undefined by definition’; then someone says, ‘ok, let’s call i = sqrt(-1)’. In my (arguably short) mind this is like Einstein defining ‘p’ as ‘an imaginary particle moving faster-than-light’; and then deriving from there some wonderful physical theory that explains everything. I cannot deny its usefulness; but I can’t avoid the feeling that something is not right.
And I can’t pass on the occasion to welcome you back to blogging. I’ve really missed your posts. Keep them flowing please!!!
In the past, I’d have agreed with you. It is confounding when we cannot obviously see where some concept of a number comes from. How can we just make it up, after all? Why would we imagine such a number? Great that it’s useful and all, but why not choose a different tool?
But there’s a problem. ‘Imaginary’ numbers are real. They make the analysis of electromagnetism possible in ways we didn’t understand before their invention. They are necessary to even write down the math that governs quantum mechanics — it’s not possible to derive the Schrödinger equation without the ‘imaginary’ constant.
I think that’s the point Eric is getting at. The name ‘imaginary’ has stuck, and it’s a horrible misnomer. We would be better off if we didn’t call it that. We often gloss over the square-root problem because the mathematics are far too complicated to explain in grade school when we introduce the square-root operation. It’s something of a failing that we call it ‘impossible’ instead of ‘something that we can’t use right now’. But this is true of a lot of things. We don’t teach calculus that early, when it indeed solves many (even simple) problems more elegantly. I just wish we would stop calling things impossible when they so rarely are.
Eric: Looking forward to the series! I hadn’t heard of dual numbers before this post. I’m excited to see where it takes us.
One thing to keep in mind about complex numbers is that they weren’t invented out of thin air. Rather, they were forced upon mathematicians when they finally figured out the cubic formula — the analog to the quadratic formula used to find zeros of the polynomial.
The problem with the cubic formula is the existence of certain pesky cubics that have the gall of having three real solutions — but, in order to use the formula to find them, you have to pretend to take the square root of a negative number (hence “imagining” that it’s possible) — but if you treat that with all the rules of arithmetic you’d expect, and carry it through, these “imaginary” quantities eventually cancel out, and you’re left with the “real” answers.
This was first done in the 1500s or 1600s (I can’t remember exactly when), and it took mathematicians a couple hundred more years before they started treating complex numbers as numbers in their own right — and in the process, they discovered that they somehow “complete” the real numbers (and thus decided to call these the “algebraic closure” of the reals) and found they simplify a significant number of problems.
I am reminded of the idea of infinitesimals. https://en.wikipedia.org/wiki/Infinitesimal