Order of operations considered harmful

[The title is a half-joking reference to Edsger Dijkstra’s classic paper, Go To Statement Considered Harmful; see here for more context.]

Everyone is probably familiar with the so-called “order of operations”, which is

a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression.

(the above quote is from the Wikipedia page on Order of Operations). If you grew up in the US, like me, you might have memorized the acronym PEMDAS; I have recently learned that people in other parts of the world use other acronyms, like BEDMAS. In any case, these mnemonics help you remember the order in which you should conventionally perform various arithmetic operations, right?

For example:

\begin{array}{rl} & 1 + 3 \times (5 - 3)^4/2 \\ = & 1 + 3 \times 2^4 / 2 \\ = & 1 + 3 \times 16 / 2 \\ = & 1 + 48 / 2 \\ = & 1 + 24 \\ = & 25.\end{array}

Makes sense, right? We did the stuff in parentheses first, then the exponent, then the multiplication and division (from left to right), then the addition.

OK, pop quiz: what is the value of

3 + 0 \times 37^{984}?

Of course it is 3, you say. Aha, but did you follow the order of operations? I thought not. Supposedly the order of operations tells you that you have to perform the exponentiation before the multiplication, but I am willing to bet that you skipped straight over the exponentiation and did the multiplication first! Really, you should be ashamed of yourself.

Another pop quiz: what is the value of

2 \times 7 + 3 \times 5 + 4 \times 6?

Well, let’s see:

\begin{array}{rl}& 2 \times 7 + 3 \times 5 + 4 \times 6 \\[0.5em] =& 14 + 3 \times 5 + 4 \times 6 \\[0.5em] =& 14 + 15 + 4 \times 6 \\[0.5em] =& 29 + 4 \times 6 \\[0.5em] =& 29 + 24 \\[0.5em] =& 53\end{array}

Easy peasy. But wait, did you notice what I did there? I did one of the additions before I did the last multiplication! According to the “order of operations” this is not allowed; you are supposed to perform multiplication before addition, right??

One more quiz: solve for y in the equation

20 = 2 + 3y.

Of course we can proceed by subtracting 2 from both sides, resulting in 18 = 3y, and then dividing both sides by 3, finding that y = 6 is the unique solution.

How did we know not to add the 2 and the 3, resulting in the bogus equation 20 = 5y? Because of the order of operations, of course… but wait, what does the order of operations even mean here? Did you notice that we never actually performed the multiplication or addition at all?

My point is this: casting the “order of operations” in performative terms—as in, the order we should perform the operations—is grossly misleading.

You might think all my examples can be explained away easily enough—and I agree—but if you take literally the idea of the order of operations telling us what order to perform the operations (as many students will, and do), these examples don’t make sense. In fact, I would argue that saying the order of operations is about the order of performing the operations is worse than misleading: it is just plain wrong.

So what is it, really?

I made the title of this post provocative on purpose, but of course I am not actually arguing against the order of operations in and of itself. We certainly do need to agree on whether 1 + 2 \times 3 should be 7 or 9. But we need a better way of explaining it than saying it is the “order in which we perform the operations”.

Any mathematical expression is fundamentally a tree, where each operation is a node in the tree with the things it operates on as subtrees. For example, consider the example expression 1 + 3 \times (5 - 3)^4/2 from the beginning of the post. As a tree, it looks like this:
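```
        +
       / \
      1   ÷
         / \
        ×   2
       / \
      3   ^
         / \
        −   4
       / \
      5   3
```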

This tells us that at the very top level, the expression consists of an addition, specifically, the addition of the number 1 and some other expression; that other expression is a division (quick check: do you see why the division should go here, and not the multiplication?), and so on.
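To make this concrete, here is a minimal sketch in Python (the tuple representation is just something I am choosing for illustration): each node of the tree holds an operator and its two subtrees, and evaluating the tree is a simple bottom-up recursion.

```python
import operator

# Each internal node is (op, left subtree, right subtree); leaves are numbers.
# This is the tree for 1 + 3 * (5 - 3)^4 / 2:
tree = ('+', 1,
        ('/', ('*', 3,
                    ('^', ('-', 5, 3), 4)),
              2))

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul,
       '/': operator.truediv, '^': operator.pow}

def evaluate(node):
    """Evaluate an expression tree bottom-up."""
    if isinstance(node, (int, float)):   # a leaf is just a number
        return node
    op, left, right = node
    return OPS[op](evaluate(left), evaluate(right))

print(evaluate(tree))   # 25.0
```

Notice that once you have the tree, there is no question about what order to do things in: the structure itself dictates what applies to what.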

However, pretty much all the writing systems we humans have developed are linear, that is, they consist of a sequence of symbols one after the other. But when you go to write down a tree as a linear sequence of symbols you run into problems. For example, which tree does 3 \mathbin{\triangle} 5 \mathbin{\square} 2 represent?
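It could be either of these:

```
      △                  □
     / \                / \
    3   □              △   2
       / \            / \
      5   2          3   5
```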

Without further information there’s no way to tell; the expression 3 \mathbin{\triangle} 5 \mathbin{\square} 2 is ambiguous.

There are two ways to resolve such ambiguity. One is just to add parentheses around every operation. For example, when fully parenthesized, the example expression from before looks like this:

(1 + ((3 \times ((5 - 3)^4))/2))

With one set of parentheses for every tree node, this is an unambiguous way to represent a tree as a linear sequence of symbols. For example, in the case of 3\mathbin{\triangle} 5 \mathbin{\square} 2, we would be forced to write either (3\mathbin{\triangle} (5 \mathbin{\square} 2)) or ((3\mathbin{\triangle} 5) \mathbin{\square} 2), fully specifying which tree we mean.

But, of course, fully parenthesized expressions are quite tedious to read and write. This leads to the second method for resolving ambiguity: come up with some conventions that specify how to resolve ambiguity when it arises. For example, if we had a convention that says \square has a higher precedence than (i.e. “comes before”, i.e. “binds tighter than”) \triangle, then 3 \mathbin{\triangle} 5 \mathbin{\square} 2 is no longer ambiguous: it must mean the left-hand tree (with the \triangle at the top), and if we wanted the other tree we would have to use explicit parentheses, as in (3 \mathbin{\triangle} 5) \mathbin{\square} 2.

Of course, this is exactly what the “order of operations” is: a set of conventions that tells us how to interpret otherwise ambiguous linear expressions as unambiguous trees. In particular, the operations that we usually talk of being “performed first” really just have higher precedence than the other operations. Think of the operations as “magnets” trying to attract things to them; higher precedence means stronger magnets.

I wouldn’t necessarily phrase it this way to a student, though. I have never taught elementary or middle school math, or whichever level it is where this is introduced, but I think if I did I would just tell them:

The order of operations tells us where to put parentheses.

The nice thing is that if you think about adding parentheses instead of performing operations, then the order of operations really does tell you what order to do things: first, add parentheses around any exponentiations; then add parentheses around any multiplications and divisions from left to right; finally, add parentheses around any additions and subtractions from left to right. Of course, if the expression already has any parentheses to begin with, you should leave them alone.

This also explains what is “different” about the parentheses/brackets at the beginning of PEMDAS/BEDMAS: you can’t “perform” parentheses like you can “perform” the other operations. But from this new point of view, you don’t even need to include them in the mnemonic: they are different because the whole point is to add more of them. Of course, as students become familiar with this process, at some point they no longer need to actually write out all the parentheses; they can just do it in their heads.
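In fact, “the order of operations tells us where to put parentheses” is concrete enough to turn directly into a little program. Here is a hedged sketch in Python: it borrows Python’s own parser (whose precedence rules for +, -, *, / and exponentiation match the usual conventions) to build the tree, and then prints the tree back out with one pair of parentheses around every operation.

```python
import ast

SYMBOLS = {ast.Add: '+', ast.Sub: '-', ast.Mult: '*',
           ast.Div: '/', ast.Pow: '^'}

def parenthesize(node):
    """Print an expression tree with explicit parentheses at every node."""
    if isinstance(node, ast.Expression):
        return parenthesize(node.body)
    if isinstance(node, ast.BinOp):
        op = SYMBOLS[type(node.op)]
        return f"({parenthesize(node.left)} {op} {parenthesize(node.right)})"
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise ValueError(f"unsupported node: {ast.dump(node)}")

print(parenthesize(ast.parse("1 + 3 * (5 - 3)**4 / 2", mode="eval")))
# (1 + ((3 * ((5 - 3) ^ 4)) / 2))
```

The output is exactly the fully parenthesized expression from above.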

What next?

To be honest I am not sure exactly what the takeaway should be here. I do think we are doing students a disservice by teaching them the misleading idea that the “order of operations” is about the order in which to perform the operations, and sadly it seems this idea is quite widespread. But I am not exactly sure what should be done about it. If you are a mathematics educator—either one who teaches students about the order of operations, or one who has witnessed the effects of their understanding on later subjects—I’d love to hear your responses and ideas!


A combinatorial proof: counting bad functions

In a previous post we derived the following expression:

\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{i} (k+(n-i))^n.

We are trying to show that S = n!, in order to show that starting with a sequence of n+1 consecutive nth powers and repeatedly taking successive differences will always result in n!. Last time we also made a picture of a typical function from a set of size n to a set of size k+n, which looks like this:

Here’s the plan of action from this point:

  • Start with all the functions from a set of size n to a set of size k+n (i.e. all possible pictures like the above).
  • Subtract off the “bad” functions which don’t correspond to matchings between the blue dots.
  • We will be left with only matchings, of which we know there are n!.
  • Find that the right expression to count the functions via this procedure is exactly S.
  • Rejoice!
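Before diving in, here is a quick numerical sanity check of the whole plan (a brute-force sketch in Python, where I take the right-hand dots numbered 1 through n to be the blue ones): for small n and k, the number of matchings really is n!, and the sum S gives the same answer.

```python
from itertools import product
from math import comb, factorial

def check(n, k):
    blue = set(range(1, n + 1))                       # blue dots on the right
    funcs = product(range(1, k + n + 1), repeat=n)    # all (k+n)^n functions
    # a function is a matching iff its arrows hit exactly the blue dots:
    matchings = sum(1 for f in funcs if set(f) == blue)
    S = sum((-1)**i * comb(n, i) * (k + (n - i))**n for i in range(n + 1))
    return matchings, S, factorial(n)

print(check(4, 2))   # (24, 24, 24)
```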

Bad functions

What makes a function “bad”? Based on the discussion in my previous post, it seems that there are two things that can make a function “bad”, that is, prevent it from being a matching between the blue sets:

  • An arrow can point to one of the orange dots. This means there won’t be enough arrows left to cover all of the blue dots.
  • Two arrows can “collide”, that is, point to the same blue dot on the right-hand side (if two arrows point to the same orange dot then it’s already bad because of the first reason). This means it can’t be a valid matching, because a matching has to match each blue dot on the left with exactly one blue dot on the right and vice versa.

The example function above has both these problems:

From another point of view, however, there is really only one thing that makes a function bad, which encompasses both points above: a function is bad if and only if there is at least one “missing” blue dot, that is, a blue dot on the right-hand side which is not pointed to by an arrow. On the one hand, if there is a missing blue dot, then the function obviously can’t be a matching, since the missing blue dot is unmatched. On the other hand, if there are no missing dots, then each dot must be matched with a unique dot on the left-hand side, since there are the same number of blue dots on both sides. (In fact, if a blue dot is missing, it will always be caused by one of the two problems listed above—there will either be an arrow pointing to an orange dot, or two arrows that collide.) Here is the same function again, but with the missing elements marked as “bad” by fading them out a bit:
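In code, the “bad” test is short (a hedged sketch, numbering the right-hand blue dots 1 through n and the orange dots n+1 through n+k):

```python
def missing_blue_dots(f, n):
    """The blue dots on the right that no arrow points to.
    f is a dict mapping each left-hand dot to a right-hand dot."""
    return set(range(1, n + 1)) - set(f.values())

def is_bad(f, n):
    # bad if and only if at least one blue dot is missing
    return len(missing_blue_dots(f, n)) > 0
```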

Our goal is now to “subtract off” these bad functions, that is, the functions with at least one missing blue dot.

Counting bad functions

Let’s number the blue dots from 1 to n (with 1 at the top), and define M_i as the set of all functions from n to k+n where blue dot i is missing (M is for “Missing”).

So, for example, the function from before is an element of M_3, since element 3 is missing:

How big is M_i? Well, if we just leave out the missing blue dot i (and renumber the blue dots with numbers greater than i), we are left with a function from n to k+(n-1); conversely, every function from n to k+(n-1) can be made into a function from n to k+n with blue dot i missing, just by inserting a new dot numbered i (and again renumbering the dots with numbers \geq i).

So, since we can put the functions in M_i in one-to-one correspondence with all the functions from a set of size n to a set of size k+(n-1), the size of M_i is just the number of such functions, that is, (k+(n-1))^n.
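We can spot-check this count by brute force (a small Python sketch with the same numbering conventions as before):

```python
from itertools import product

def M_size(n, k, i):
    """Count functions from [n] to [k+n] in which blue dot i is missing."""
    return sum(1 for f in product(range(1, k + n + 1), repeat=n) if i not in f)

# every M_i has the same size, (k + (n-1))^n:
print([M_size(4, 2, i) for i in range(1, 5)])   # [625, 625, 625, 625], i.e. 5**4
```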

At this point, however, we run into a snag: as you can see, the example function we’ve been using is also an element of M_1, since 1 is missing too. And this is why we can’t simply subtract each of the M_i and call it a day: the sets overlap, so if we subtract all of them we would be subtracting functions multiple times.

Hmm, subsets that overlap, and we want to count the things that are in none of the subsets… this is starting to sound familiar! We need some PIE (the Principle of Inclusion-Exclusion), of course. In my next post we’ll start putting some of the pieces together!


A combinatorial proof: functions and matchings

We’re trying to prove the following equality (see my previous post for a recap of the story so far):

\displaystyle \sum_{i=0}^n (-1)^{n-i} \binom{n}{i} (k+i)^n = n!

In particular we’re trying to show that the two sides of this equation correspond to two different ways to count the same set of mathematical objects. We already know that the right-hand side counts the number of permutations of a set of size n, or put another way, the number of one-to-one matchings between two different sets of size n. Somehow, then, we need to show that the left side of the equation is just a complicated way to count the same thing.

Cosmetic surgery

Let’s call the left-hand side of the equation S, that is,

\displaystyle S = \sum_{i=0}^n (-1)^{n-i} \binom{n}{i} (k+i)^n.

The first thing I want to do is switch the direction of the summation. That is, everywhere we currently have i I want to replace it with n-i, and vice versa. So 0 will switch places with n, and 1 will switch with n-1, and so on. Ultimately we will still have the same values of i, just in the reverse order. Of course, the order in which we add up a bunch of things doesn’t matter (since addition is commutative), so this won’t change the value of S at all.

\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{n-i} (k+(n-i))^n

Finally, note that \binom{n}{n-i} = \binom{n}{i} (there are the same number of ways to choose n-i things out of n as there are to choose i things to exclude), so we can write

\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{i} (k+(n-i))^n.

Functions

When i = 0, S involves the term (k+n)^n. Let’s think about what that counts. I explained in a previous post that a^b is the number of functions from a set of size b to a set of size a (as a reminder, this is because the function gets to independently choose where to map each input value—so there are a choices for each of the b inputs, for a total of a \cdot a \cdot \dots \cdot a = a^b choices). So let’s draw a typical function from a set of size n to a set of size k+n:

For this example I chose n = 5 and k = 3. The blue dots on the left represent the set of size n. The dots on the right represent the set of size k+n: I made n of them blue to emphasize that they “correspond” to the blue dots on the left, and the remaining k “extra” dots are orange.

The arrows indicate what the function does. To be a valid function, it has to be defined for each element of the input set; that is, in the picture, there has to be exactly one arrow coming out of each dot on the left. However, the arrows can go anywhere. Note that in the example above, one arrow points to an orange dot, and two other arrows point to the same blue dot (we could say they “collide”).

Some of these functions are actually matchings between the blue dots on the left and right, that is, every arrow points to a blue dot, and none of the arrows collide. For example:

We already know that the right-hand side of the equation, n!, is the number of such matchings. And we’re going to show that the left-hand side of the equation is what you get if you start with all the possible functions from n to n+k, and then subtract the ones that aren’t matchings. Of course, these should give you the same result! If you like, between now and my next post, you can try to work out the details for yourself.


A new tricubic sum for three!

3 = 569936821221962380720^3 + (-569936821113563493509)^3 + (-472715493453327032)^3
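Like the other recently discovered tricubic sums, this was hard to find but is trivial to check. For example, in Python, whose integers have arbitrary precision:

```python
x = 569936821221962380720
y = -569936821113563493509
z = -472715493453327032
print(x**3 + y**3 + z**3)   # prints 3
```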

Here’s a nice Numberphile interview with Andrew Booker about the new discovery. They also talk about Hilbert’s tenth problem, undecidability, the reasons for doing computer searches like this, the role of science communication (such as Numberphile) in spurring discovery, and other things.

See my previous blog post for why this is interesting!


Human Randomness

Randomness is hard for humans

It is notoriously difficult for humans to come up with truly random numbers. I don’t really have any data I can point to in particular, but there are quite a few well-known phenomena that show how bad humans are at picking random numbers:

  • When asked to pick a random number between 1 and 10, lots of people pick 7, presumably because it “seems the most random” (perhaps because it is prime). But of course if you always pick 7 when asked to pick a random number, it is not random at all!

  • If asked to generate a sequence of outcomes from tosses of a fair coin (like Heads, Tails, Tails, Heads, …), most people will generate sequences which do not have enough runs of repeated outcomes, such as three heads in a row or four tails in a row. Such runs don’t “seem random”, but of course a truly random sequence will have them sometimes. In fact, the chance that any particular four consecutive tosses all come up heads is 1/16, so in a sequence of even a few dozen tosses, runs like these should actually occur pretty often.

  • In fact, there are computer programs that can repeatedly guess whether you are going to pick heads or tails next, and end up being correct more than 50% of the time! It seems like the computer is somehow clairvoyant or reading your mind, but really it is just exploiting the fact that your sequence of picks is not truly random. Here’s one you can try—can you win?
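I don’t know the internals of any particular one of these programs, but here is a minimal sketch (in Python) of one way such a predictor can work: remember which pick tended to follow each short run of previous picks, and guess the majority. Even this crude strategy beats 50% against a simulated player who switches sides a bit too eagerly.

```python
import random
from collections import Counter, defaultdict

def guess_rate(picks, order=3):
    """Guess each 'H'/'T' pick from the most common follow-up to the
    previous `order` picks seen so far; return the fraction guessed right."""
    counts = defaultdict(Counter)
    history, wins = "", 0
    for pick in picks:
        ctx = history[-order:]
        guess = counts[ctx].most_common(1)[0][0] if counts[ctx] else "H"
        wins += guess == pick
        counts[ctx][pick] += 1
        history += pick
    return wins / len(picks)

# a simulated "human" who switches sides more often than a fair coin would:
picks, last = [], "H"
for _ in range(2000):
    last = random.choice("HT") if random.random() < 0.6 else ("T" if last == "H" else "H")
    picks.append(last)
print(guess_rate(picks))   # typically well above 0.5
```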

A human way to generate randomness?

I keep wondering whether there is some way to access a source of randomness as a human. As in, the next time someone asks you to pick a random number, you could use your method to pick a truly random number. Of course, you could use a computer—in fact, there are some nice apps, like this one from random.org, for generating random numbers. But I am interested in a method that can be done without aids.

You could carry around a coin with you everywhere, or some dice. But what if you have forgotten yours, or don’t have access to them? Again, I am hoping for something that can be done without any aids.

Methods for generating “truly” random numbers (as opposed to pseudorandom numbers) often sample from some noisy source and throw away all but the lowest-order information. So, for example, you could count the number of times your heart beats in one minute and decide it represents “heads” if the number is even and “tails” if the number is odd. However, one bit per minute is super slow (if you needed to pick a number from 1-10 it would take you at least four minutes). It also depends on having something to count sixty seconds for you.
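(For the record, here is why it takes at least four bits: the standard trick, sketched below in Python, reads four coin flips as a binary number from 0 to 15 and starts over whenever the result is 10 or more—so sometimes it takes more than four.)

```python
import random

def pick_1_to_10(flip):
    """A uniform pick from 1..10, given a fair-coin function flip() -> 0 or 1.
    Rejection sampling: read four flips as a number 0..15; retry on 10..15."""
    while True:
        x = 8*flip() + 4*flip() + 2*flip() + flip()
        if x < 10:
            return x + 1

print(pick_1_to_10(lambda: random.randint(0, 1)))
```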

Can anyone think of any better ideas? Crazy ideas are OK too. =)


Sums of cubes: multiple representations

I’m continuing a short series of posts on representing numbers as a sum of three cubes; previous posts are 33 is the sum of three cubes and More sums of three cubes.

We now know that every number less than 100 which is not equivalent to (\pm 4) \pmod 9 can be written as a sum of three cubes. But for some of them we only know of one such representation, which can involve very large numbers. Is it possible that there are yet more ways, using even larger numbers?

Tricubic sums for 0

0 can of course be written as 0^3 + 0^3 + 0^3, but in fact it can be written as a sum of three cubes in infinitely many ways: pick any number c, then 0 = 0^3 + c^3 + (-c)^3. In fact, we know these must be the only tricubic sums for 0. For suppose there were some nonzero x, y, and z such that 0 = x^3 + y^3 + z^3. In order to get a sum of zero, one of them has to be negative and two positive, or vice versa. If two are negative, then just negate the entire equation so that two are positive and one negative. Then move the negative one (say, z) to the other side of the equation, giving us x^3 + y^3 = z^3—but this would contradict Fermat’s Last Theorem!¹

Tricubic sums for 1

1 can of course be written in infinitely many ways as 1^3 + c^3 + (-c)^3. However, there are also other ways. For example, as you can easily check for yourself,

1 = 9^3 + (-6)^3 + (-8)^3.

In fact, for any integer value of b, it turns out that

1 = (9b^4)^3+(3b-9b^4)^3+(1-9b^3)^3.

Let’s prove it! Remember that by the Binomial Theorem, (x+y)^3 = x^3 + 3x^2y + 3xy^2 + y^3. There are a lot of exponents and such to keep track of, but ultimately the algebra is not hard, just somewhat tedious:

\begin{array}{cl} & (9b^4)^3+(3b-9b^4)^3+(1-9b^3)^3 \\[1em] =& (9b^4)^3 + \big[(3b)^3 + 3 (3b)^2 (-9b^4) + 3 (3b) (-9b^4)^2 + (-9b^4)^3\big] + \big[1^3 + 3(1^2)(-9b^3) + 3(1)(-9b^3)^2 + (-9b^3)^3\big] \\[1em] =& 3^6 b^{12} + 3^3 b^3 - 3^5 b^6 + 3^6 b^9 - 3^6 b^{12} + 1 - 3^3 b^3 + 3^5 b^6 - 3^6 b^9 \\[1em] =& 1\end{array}

If you look carefully, you’ll see that everything cancels in the last step except for the single 1!

I got the example 1 = 9^3 + (-6)^3 + (-8)^3 by choosing b = 1. As another example, if we pick b = 47, then we get

1 = 43917129^3 + (-43916988)^3 + (-934406)^3

which you can verify for yourself!
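If expanding binomials by hand is not your idea of fun, a quick Python loop is at least reassuring (though of course it is no substitute for the algebra above):

```python
for b in range(-100, 101):
    x, y, z = 9*b**4, 3*b - 9*b**4, 1 - 9*b**3
    assert x**3 + y**3 + z**3 == 1
print("identity verified for b in -100..100")
```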

Tricubic sums for 2

There are also infinitely many tricubic sums for 2: for any value of c,

2 = (1 + 6c^3)^3 + (1 - 6c^3)^3 + (-6c^2)^3

I’ll let you prove this one yourself. Just expand using the Binomial Theorem and watch everything magically cancel!
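(Here too, a numeric spot-check in Python is reassuring while you work through the algebra:)

```python
for c in range(-100, 101):
    x, y, z = 1 + 6*c**3, 1 - 6*c**3, -6*c**2
    assert x**3 + y**3 + z**3 == 2
print("identity verified for c in -100..100")
```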

Tricubic sums for 3 and beyond

It turns out that as of this writing (September 2019), we only know of two tricubic sums for 3, both rather small:

3 = 1^3 + 1^3 + 1^3 = 4^3 + 4^3 + (-5)^3.

Are there any more? No one knows. However, in 1992, Roger Heath-Brown published a paper in which he conjectured that every number not of the form \pm 4 \pmod 9 has not just one but infinitely many tricubic sums! If true, this means there are infinitely many more tricubic sums for 3 in particular—but if so they involve rather large numbers. So rather than tackling 114 (the current smallest number for which we don’t know a tricubic sum), the next big prize might be to find another tricubic sum for the humble 3. Indeed, Andrew Booker says as much in his Numberphile interview.


  1. We don’t actually need to invoke Fermat’s Last Theorem in full generality—the n=3 case of Fermat’s Last Theorem was first proved by Euler in 1770.


More sums of three cubes

About six months ago I wrote about the recent discovery that 33 can be written as the sum of three cubes. At that time, the only remaining number less than 100 whose status was still unknown was 42. And just recently that, too, has fallen: thanks to Andrew Booker and Andrew Sutherland, we now know that

\displaystyle 42=(-80538738812075974)^3+80435758145817515^3+12602123297335631^3.

Again, this was very difficult to find (it required 1.3 million hours of computer time!) but very easy to verify: you can plug the above numbers into a computer yourself and see that you get 42. In fact, I’ve set up some links here so you can try it yourself using either Wolfram Alpha or Python. (Sadly, most graphing calculators don’t have enough precision to represent these numbers exactly. But seriously, click one of the links above and try it out!)
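If you would rather not click, here is the whole verification in Python, whose integers have arbitrary precision:

```python
x = -80538738812075974
y = 80435758145817515
z = 12602123297335631
print(x**3 + y**3 + z**3)   # prints 42
```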

I still don’t understand the math behind the algorithm Booker and Sutherland used (I tried reading the paper but quickly got lost, and my sense is that I would probably have to spend at least several months doing background reading to be able to understand it). But this time around I’ve been learning more about the sums of three cubes problem itself. Today I’ll write a bit about which numbers actually have a hope of being written as a sum of three cubes, and in a future post I plan to write about numbers that have multiple representations as sums of cubes.

For the rest of this post I’m going to use the word tricubic to refer to numbers which can be written as a sum of three cubes. So, for example, we now know that 42 is tricubic. I’ll also say tricubic sum to refer to three cubes that add up to a given number, for example, “1^3 + 2^3 + 3^3 is a tricubic sum for 36”.

Which numbers are tricubic?

Some numbers definitely aren’t tricubic: for example, 13. How did I know that? The key is to look at the remainders of cubes \pmod 9:

\begin{array}{rcl}0^3 &=& 0 \\[0.5em] 1^3 &=& 1 \\[0.5em] 2^3 &=& 8 \equiv (-1) \pmod 9 \\[0.5em] 3^3 &=& 27 \equiv 0 \pmod 9 \\[0.5em] 4^3 &=& 64 \equiv 1 \pmod 9 \\[0.5em] 5^3 &=& 125 \equiv (-1) \pmod 9 \\[0.5em] 6^3 &=& 216 \equiv 0 \pmod 9 \\[0.5em] 7^3 &=& 343 \equiv 1 \pmod 9 \\[0.5em] 8^3 &=& 512 \equiv (-1) \pmod 9\end{array}

Notice that in the above table we only ever get remainders of -1, 0, or 1. And since every integer is equivalent, \pmod 9, to one of the numbers from 0 through 8, this table actually tells us everything we need to know: every integer cube will be either a multiple of 9, one more than a multiple of 9, or one less. By adding up three remainders which are all -1, 0, or 1, you can get any remainder between -3 and 3, inclusive. But that leaves out exactly two remainders you can’t get: 4 and 5. That’s how I knew 13 is not tricubic: 13 = 9 + 4 is four more than a multiple of 9.
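You can also let a computer do this case analysis (a quick Python check):

```python
cube_residues = {a**3 % 9 for a in range(9)}   # {0, 1, 8}; note 8 is -1 mod 9
sums = {(a + b + c) % 9
        for a in cube_residues for b in cube_residues for c in cube_residues}
print(sorted(sums))   # [0, 1, 2, 3, 6, 7, 8] -- every residue except 4 and 5
```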

OK, so now we know that any number either 4 or 5 more than a multiple of 9 is not tricubic. What about other numbers? Maybe if we look at remainders modulo some other divisor, say, 347, we can rule out other numbers in a similar way? Actually, no! So far it seems like this simple restriction is the only restriction on tricubic numbers: the conjecture is that every number not of the form 9k + \{4,5\} is tricubic—but right now this is only a conjecture; no one has yet managed to prove it!

So, in general, numbers fall into three groups:

  • Numbers of the form 9k + 4 or 9k + 5 which we know are not tricubic: \{4, 5, 13, 14, 22, 23, 31, 32, \dots\}.
  • Numbers which we know are tricubic, because we know of at least one tricubic sum for them: \{0, 1, 2, 3, 6, 7, 8, \dots\}.
  • Numbers which we think are probably tricubic, but we currently don’t actually know: \{114, 165, 390, 579, 627, \dots\}.

Where do we go from here?

Progress on this problem could theoretically come in a lot of different ways.

First, someone could prove that there are additional numbers which are not tricubic. But as far as I can tell, everyone thinks this is pretty unlikely.

Someone could find a way to write some additional numbers as sums of three cubes. For example, the smallest number whose tricubic status we don’t currently know is 114; someone might find a way to write 114 as a sum of three cubes. However, the amount of computer time necessary to search for these may get prohibitively large: as I mentioned earlier, finding a tricubic sum for 42 required 1.3 million hours of computer time. Note, however, that finding 33 and 42 was possible specifically because Andrew Booker came up with a better algorithm than what had been used before. So perhaps someone will come up with an even better algorithm that makes it feasible to search for larger tricubic sums. At some point, however, people will lose interest: finding bigger and bigger tricubic sums just gets boring after a while.

Someone could prove that every number not of the form 9k + \{4,5\} is tricubic by finding an algorithm which takes a number n as input, and spits out a tricubic sum for n. But wait, you ask, don’t we already have that? Weren’t 33 and 42 found using an algorithm? Well, yes, but the search algorithm used to find tricubic sums for 33 and 42 was not guaranteed to find anything. You give it an upper limit for the size of the cubes involved, and it will search everything below that, but there is no guarantee that it will find what it is looking for. You could easily modify the program to run as follows: given a number n for which we wish to find a tricubic sum, start by searching for tricubic sums under a certain upper bound; if nothing is found, increase the upper bound and try again. If nothing is found the second time, increase the upper bound yet again, and so on. This is called a semi-decision procedure: if a tricubic sum exists for n, this program will eventually find it, but if no tricubic sum exists then it will simply run forever (in theory, that is; of course any real computer would run out of memory long before that). So this doesn’t count as a proof because it doesn’t guarantee to find a tricubic sum for every number. An algorithmic proof would consist of an algorithm to find a tricubic sum for any suitable input number, along with a proof that the program will always return a tricubic sum in a finite amount of time for any input.
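Here is what such a semi-decision procedure might look like (a naive Python sketch; to be clear, this bears no resemblance to Booker’s actual algorithm, and it is hopeless for hard cases like 114, but it shows the logical structure):

```python
def find_cube(m):
    """Return z with z**3 == m, or None.  The float cube root is only
    accurate enough for modest m; a real search would use exact integer roots."""
    z = round(abs(m) ** (1/3)) * (1 if m >= 0 else -1)
    for cand in (z - 1, z, z + 1):
        if cand**3 == m:
            return cand
    return None

def search(n, bound):
    """Exhaustively look for x^3 + y^3 + z^3 = n with |x|, |y| <= bound."""
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):          # by symmetry, assume x <= y
            z = find_cube(n - x**3 - y**3)
            if z is not None:
                return (x, y, z)
    return None

def semi_decide(n):
    """Try ever-larger bounds; halts if and only if a tricubic sum exists."""
    bound = 10
    while True:
        result = search(n, bound)
        if result:
            return result
        bound *= 2

print(semi_decide(29))   # prints some (x, y, z) with x^3 + y^3 + z^3 = 29
```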

I don’t actually find the above scenario very likely (though a dramatic example of this kind is the recent proof that every positive integer is a sum of three palindromes). I find it more likely that someone will prove that every number not of the form 9k + \{4,5\} is tricubic, but in a way that is non-effective. That is, we would know that every suitable number must be tricubic, but the proof would not actually give us a way to find a tricubic sum for any given n. (There is a very interesting distinction here between classical and constructive mathematics; maybe I’ll write more on this in a future post!)
