## A combinatorial proof: counting bad functions

In a previous post we derived the following expression:

$\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{i} (k+(n-i))^n$.

We are trying to show that $S = n!$, in order to show that starting with a sequence of $n+1$ consecutive $n$th powers and repeatedly taking successive differences will always result in $n!$. Last time we also made a picture of a typical function from a set of size $n$ to a set of size $k+n$, which looks like this:

Here’s the plan of action from this point:

• Start with all the functions from a set of size $n$ to a set of size $k+n$ (i.e. all possible pictures like the above).
• Subtract off the “bad” functions which don’t correspond to matchings between the blue dots.
• We will be left with only matchings, of which we know there are $n!$.
• Find that the right expression to count the functions via this procedure is exactly $S$.
• Rejoice!

What makes a function “bad”? Based on the discussion in my previous post, it seems that there are two things that can make a function “bad”, that is, prevent it from being a matching between the blue sets:

• An arrow can point to one of the orange dots. This means there won’t be enough arrows left to cover all of the blue dots.
• Two arrows can “collide”, that is, point to the same blue dot on the right-hand side (if two arrows point to the same orange dot then it’s already bad because of the first reason). This means it can’t be a valid matching, because a matching has to match each blue dot on the left with exactly one blue dot on the right and vice versa.

The example function above has both these problems:

From another point of view, however, there is really only one thing that makes a function bad, which encompasses both points above: a function is bad if and only if there is at least one “missing” blue dot, that is, a blue dot on the right-hand side which is not pointed to by an arrow. On the one hand, if there is a missing blue dot, then the function obviously can’t be a matching, since the missing blue dot is unmatched. On the other hand, if there are no missing dots, then each dot must be matched with a unique dot on the left-hand side, since there are the same number of blue dots on both sides. (In fact, if a blue dot is missing, it will always be caused by one of the two problems listed above—there will either be an arrow pointing to an orange dot, or two arrows that collide.) Here is the same function again, but with the missing elements marked as “bad” by fading them out a bit:

Our goal is now to “subtract off” these bad functions, that is, the functions with at least one missing blue dot.

Let’s number the blue dots from $1$ to $n$ (with $1$ at the top), and define $M_i$ as the set of all functions from $n$ to $k+n$ where blue dot $i$ is missing ($M$ is for “Missing”).

So, for example, the function from before is an element of $M_3$, since element $3$ is missing:

How big is $M_i$? Well, if we just leave out the missing blue dot $i$ (and renumber the blue dots with numbers greater than $i$), we are left with a function from $n$ to $k+(n-1)$; conversely, every function from $n$ to $k+(n-1)$ can be made into a function from $n$ to $k+n$ with blue dot $i$ missing, just by inserting a new dot numbered $i$ (and again renumbering the dots with numbers $\geq i$).

So, since we can put the functions in $M_i$ in one-to-one correspondence with all the functions from a set of size $n$ to a set of size $k+(n-1)$, the size of $M_i$ is just the number of such functions, that is, $(k+(n-1))^n$.
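Here's a quick brute-force check of this correspondence for small $n$ and $k$ (a Python sketch; I'm representing a function simply as the tuple of its outputs):

```python
from itertools import product

def count_missing(n, k, i):
    """Count functions f : {1..n} -> {1..k+n} for which blue dot i
    is 'missing', i.e. no arrow points at it."""
    codomain = range(1, k + n + 1)
    return sum(1 for f in product(codomain, repeat=n) if i not in f)

# With n = 3 and k = 2, each M_i should have (k + (n-1))^n = 4^3 = 64 elements.
n, k = 3, 2
for i in range(1, n + 1):
    assert count_missing(n, k, i) == (k + (n - 1)) ** n
```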

At this point, however, we run into a snag: as you can see, the example function we’ve been using is also an element of $M_1$, since $1$ is missing too. And this is why we can’t simply subtract each of the $M_i$ and call it a day: the sets overlap, so if we subtract all of them we would be subtracting functions multiple times.

Hmm, subsets that overlap, and we want to count the things that are in none of the subsets… this is starting to sound familiar! We need some PIE, of course. In my next post we’ll start putting some of the pieces together!
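In fact, if you want to peek ahead, you can already check numerically that subtracting off the bad functions this way leaves exactly $n!$. Here's a small Python sketch of the count we'll be deriving:

```python
from math import comb, factorial

def count_matchings_by_pie(n, k):
    """Inclusion-exclusion over the sets M_i: the term for j chosen
    'missing' blue dots counts functions into the remaining k+n-j targets."""
    return sum((-1) ** j * comb(n, j) * (k + n - j) ** n for j in range(n + 1))

for n in range(1, 7):
    for k in range(0, 5):
        assert count_matchings_by_pie(n, k) == factorial(n)
```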

Posted in arithmetic, combinatorics, proof | 1 Comment

## A combinatorial proof: functions and matchings

We’re trying to prove the following equality (see my previous post for a recap of the story so far):

$\displaystyle \sum_{i=0}^n (-1)^{n-i} \binom{n}{i} (k+i)^n = n!$

In particular we’re trying to show that the two sides of this equation correspond to two different ways to count the same set of mathematical objects. We already know that the right-hand side counts the number of permutations of a set of size $n$, or put another way, the number of one-to-one matchings between two different sets of size $n$. Somehow, then, we need to show that the left side of the equation is just a complicated way to count the same thing.

## Cosmetic surgery

Let’s call the left-hand side of the equation $S$, that is,

$\displaystyle S = \sum_{i=0}^n (-1)^{n-i} \binom{n}{i} (k+i)^n$.

The first thing I want to do is switch the direction of the summation. That is, everywhere we currently have $i$ I want to replace it with $n-i$, and vice versa. So $0$ will switch places with $n$, and $1$ will switch with $n-1$, and so on. Ultimately we will still have the same values of $i$, just in the reverse order. Of course, the order in which we add up a bunch of things doesn’t matter (since addition is commutative), so this won’t change the value of $S$ at all.

$\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{n-i} (k+(n-i))^n$

Finally, note that $\binom{n}{n-i} = \binom{n}{i}$ (there are the same number of ways to choose $n-i$ things out of $n$ as there are to choose $i$ things to exclude), so we can write

$\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{i} (k+(n-i))^n$.
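Just to be safe, we can confirm numerically that the rewriting really didn't change anything (a small Python sanity check):

```python
from math import comb

def S_original(n, k):
    # The sum as first written: sum_i (-1)^(n-i) C(n,i) (k+i)^n
    return sum((-1) ** (n - i) * comb(n, i) * (k + i) ** n for i in range(n + 1))

def S_reversed(n, k):
    # After replacing i by n-i: sum_i (-1)^i C(n,i) (k+(n-i))^n
    return sum((-1) ** i * comb(n, i) * (k + (n - i)) ** n for i in range(n + 1))

for n in range(1, 7):
    for k in range(0, 5):
        assert S_original(n, k) == S_reversed(n, k)
```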

## Functions

When $i = 0$, $S$ involves the term $(k+n)^n$. Let’s think about what that counts. I explained in a previous post that $a^b$ is the number of functions from a set of size $b$ to a set of size $a$ (as a reminder, this is because the function gets to independently choose where to map each input value—so there are $a$ choices for each of the $b$ inputs, for a total of $a \cdot a \cdot \dots \cdot a = a^b$ choices). So let’s draw a typical function from a set of size $n$ to a set of size $k+n$:

For this example I chose $n = 5$ and $k = 3$. The blue dots on the left represent the set of size $n$. The dots on the right represent the set of size $k+n$: I made $n$ of them blue to emphasize that they “correspond” to the blue dots on the left, and the remaining $k$ “extra” dots are orange.

The arrows indicate what the function does. To be a valid function, it has to be defined for each element of the input set; that is, in the picture, there has to be exactly one arrow coming out of each dot on the left. However, the arrows can go anywhere. Note that in the example above, one arrow points to an orange dot, and two other arrows point to the same blue dot (we could say they “collide”).

Some of these functions are actually matchings between the blue dots on the left and right, that is, every arrow points to a blue dot, and none of the arrows collide. For example:

We already know that the right-hand side of the equation, $n!$, is the number of such matchings. And we’re going to show that the left-hand side of the equation is what you get if you start with all the possible functions from $n$ to $n+k$, and then subtract the ones that aren’t matchings. Of course, these should give you the same result! If you like, between now and my next post, you can try to work out the details for yourself.
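For small $n$ and $k$ it's easy to confirm by brute force that exactly $n!$ of the $(k+n)^n$ functions are matchings. A quick Python sketch (a function is again represented as the tuple of its outputs, with the blue dots on the right numbered $1$ through $n$):

```python
from itertools import product
from math import factorial

def count_matchings(n, k):
    """Among all (k+n)^n functions {1..n} -> {1..k+n}, count those whose
    arrows hit each of the n blue targets 1..n exactly once."""
    codomain = range(1, k + n + 1)
    return sum(1 for f in product(codomain, repeat=n)
               if sorted(f) == list(range(1, n + 1)))

assert count_matchings(3, 2) == factorial(3)  # 6 matchings among 5^3 = 125 functions
assert count_matchings(4, 1) == factorial(4)  # 24 matchings among 5^4 = 625 functions
```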

Posted in arithmetic, combinatorics, proof | 5 Comments

## A new tricubic sum for three!

$3 = 569936821221962380720^3 + (-569936821113563493509)^3 + (-472715493453327032)^3$

Here’s a nice Numberphile interview with Andrew Booker about the new discovery. They also talk about Hilbert’s tenth problem, undecidability, the reasons for doing computer searches like this, the role of science communication (such as Numberphile) in spurring discovery, and other things.

See my previous blog post for why this is interesting!
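Like all of these discoveries, it was hard to find but is easy to verify, for example in Python, which handles integers of this size exactly:

```python
x = 569936821221962380720
y = -569936821113563493509
z = -472715493453327032

# Python integers have arbitrary precision, so no rounding can sneak in.
print(x**3 + y**3 + z**3)  # prints 3
```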

## Randomness is hard for humans

It is notoriously difficult for humans to come up with truly random numbers. I don’t really have any data I can point to in particular, but there are quite a few well-known phenomena that show how bad humans are at picking random numbers:

• When asked to pick a random number between 1 and 10, lots of people pick 7, presumably because it “seems the most random” (perhaps because it is prime). But of course if you always pick 7 when asked to pick a random number, it is not random at all!

• If asked to generate a sequence of outcomes from tosses of a fair coin (like Heads, Tails, Tails, Heads, …), most people will generate sequences which do not have enough runs of repeated outcomes, such as three heads in a row or four tails in a row. Such runs don’t “seem random”, but of course a truly random sequence will have them sometimes. In fact, the chance that any given four consecutive tosses all come up heads is $1/16$, so in a sequence of any reasonable length such runs should actually occur pretty often.

• In fact, there are computer programs that can repeatedly guess whether you are going to pick heads or tails next, and end up being correct more than 50% of the time! It seems like the computer is somehow clairvoyant or reading your mind, but really it is just exploiting the fact that your sequence of picks is not truly random. Here’s one you can try—can you win?
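To back up the second point, here is a quick simulation (my own sketch, not data from any particular study) estimating how often a truly random sequence of 20 coin tosses contains a run of four or more identical outcomes:

```python
import random

def has_run(seq, length=4):
    """Return True if seq contains `length` identical consecutive outcomes."""
    run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        if run >= length:
            return True
    return False

random.seed(0)
trials = 10_000
hits = sum(has_run([random.choice("HT") for _ in range(20)])
           for _ in range(trials))
print(f"{hits / trials:.0%} of random 20-toss sequences contain a run of 4+")
```

In my runs this comes out to around three quarters of the time, far more often than hand-made "random" sequences tend to allow.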

## A human way to generate randomness?

I keep wondering whether there is some way to access a source of randomness as a human. As in, the next time someone asks you to pick a random number, you could use your method to pick a truly random number. Of course, you could use a computer—in fact, there are some nice apps, like this one from random.org, for generating random numbers. But I am interested in a method that can be done without aids.

You could carry around a coin with you everywhere, or some dice. But what if you have forgotten yours, or don’t have access to them? Again, I am hoping for something that can be done without any aids.

Methods for generating “truly” random numbers (as opposed to pseudorandom numbers) often sample from some noisy source and throw away all but the lowest-order information. So, for example, you could count the number of times your heart beats in one minute and decide it represents “heads” if the number is even and “tails” if the number is odd. However, one bit per minute is super slow (if you needed to pick a number from 1-10 it would take you at least four minutes). It also depends on having something to count sixty seconds for you.
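For the record, here is the arithmetic behind "at least four minutes": a number from 1 to 10 needs four bits per attempt, and you have to throw away the values 10 through 15 to keep the result uniform. A Python sketch (the `next_bit` callback is my stand-in for whatever slow one-bit source you're using):

```python
import random

def pick_1_to_10(next_bit):
    """Turn a stream of fair random bits into a uniform number from 1 to 10.
    Read 4 bits as a number 0..15; if it is 10 or more, discard and retry
    (taking the value mod 10 instead would bias the result)."""
    while True:
        value = 0
        for _ in range(4):
            value = 2 * value + next_bit()
        if value < 10:
            return value + 1

random.seed(1)
coin = lambda: random.randint(0, 1)  # simulated bit source for the demo
picks = [pick_1_to_10(coin) for _ in range(10_000)]
```

On average this consumes $4 \cdot 16/10 = 6.4$ bits per pick, so at one bit per minute you'd expect six or seven minutes, and never fewer than four.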

Can anyone think of any better ideas? Crazy ideas are OK too. =)

Posted in computation, people | 25 Comments

## Sums of cubes: multiple representations

I’m continuing a short series of posts on representing numbers as a sum of three cubes; previous posts are 33 is the sum of three cubes and More sums of three cubes.

We now know that every number less than $100$ which is not equivalent to $(\pm 4) \pmod 9$ can be written as a sum of three cubes. But for some of them we only know of one such representation, which can involve very large numbers. Is it possible that there are yet more ways, using even larger numbers?

## Tricubic sums for 0

$0$ can of course be written as $0^3 + 0^3 + 0^3$, but in fact it can be written as a sum of three cubes in infinitely many ways: pick any number $c$, then $0 = 0^3 + c^3 + (-c)^3$. In fact, we know these must be the only tricubic sums for $0$. For suppose there were some nonzero $x$, $y$, and $z$ such that $0 = x^3 + y^3 + z^3$. In order to get a sum of zero, one of them has to be negative and two positive, or vice versa. If two are negative, then just negate the entire equation so that two are positive and one negative. Then move the negative one (say, $z$) to the other side of the equation, giving us $x^3 + y^3 = (-z)^3$ with all of $x$, $y$, and $-z$ positive—but this would contradict Fermat’s Last Theorem!¹

## Tricubic sums for 1

$1$ can of course be written in infinitely many ways as $1^3 + c^3 + (-c)^3$. However, there are also other ways. For example, as you can easily check for yourself,

$1 = 9^3 + (-6)^3 + (-8)^3.$

In fact, for any integer value of $b$, it turns out that

$1 = (9b^4)^3+(3b-9b^4)^3+(1-9b^3)^3$.

Let’s prove it! Remember that by the Binomial Theorem, $(x+y)^3 = x^3 + 3x^2y + 3xy^2 + y^3$. There are a lot of exponents and such to keep track of, but ultimately the algebra is not hard, just somewhat tedious:

$\begin{array}{cl} & (9b^4)^3+(3b-9b^4)^3+(1-9b^3)^3 \\[1em] =& (9b^4)^3 + \big[(3b)^3 + 3 (3b)^2 (-9b^4) + 3 (3b) (-9b^4)^2 + (-9b^4)^3\big] + \big[1^3 + 3(1^2)(-9b^3) + 3(1)(-9b^3)^2 + (-9b^3)^3\big] \\[1em] =& 3^6 b^{12} + 3^3 b^3 - 3^5 b^6 + 3^6 b^9 - 3^6 b^{12} + 1 - 3^3 b^3 + 3^5 b^6 - 3^6 b^9 \\[1em] =& 1\end{array}$

If you look carefully, you’ll see that everything cancels in the last step except for the single $1$!

I got the example $1 = 9^3 + (-6)^3 + (-8)^3$ by choosing $b = 1$. As another example, if we pick $b = 47$, then we get

$1 = 43917129^3 + (-43916988)^3 + (-934406)^3$

which you can verify for yourself!
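Both examples (and any other choice of $b$) can also be checked mechanically; here's a quick Python sketch:

```python
def tricubic_sum_for_one(b):
    """The three cubes from the identity 1 = (9b^4)^3 + (3b-9b^4)^3 + (1-9b^3)^3."""
    return (9 * b**4, 3 * b - 9 * b**4, 1 - 9 * b**3)

# The identity holds for every integer b, positive or negative.
for b in range(-50, 51):
    x, y, z = tricubic_sum_for_one(b)
    assert x**3 + y**3 + z**3 == 1

assert tricubic_sum_for_one(1) == (9, -6, -8)
assert tricubic_sum_for_one(47) == (43917129, -43916988, -934406)
```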

## Tricubic sums for 2

There are also infinitely many tricubic sums for $2$: for any value of $c$,

$2 = (1 + 6c^3)^3 + (1 - 6c^3)^3 + (-6c^2)^3$

I’ll let you prove this one yourself. Just expand using the Binomial Theorem and watch everything magically cancel!
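If you'd rather let a computer watch the cancellation, here's the corresponding check in Python:

```python
def tricubic_sum_for_two(c):
    """The three cubes from the identity 2 = (1+6c^3)^3 + (1-6c^3)^3 + (-6c^2)^3."""
    return (1 + 6 * c**3, 1 - 6 * c**3, -6 * c**2)

for c in range(-50, 51):
    x, y, z = tricubic_sum_for_two(c)
    assert x**3 + y**3 + z**3 == 2
```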

## Tricubic sums for 3 and beyond

It turns out that as of this writing (September 2019), we only know of two tricubic sums for 3, both rather small:

$3 = 1^3 + 1^3 + 1^3 = 4^3 + 4^3 + (-5)^3$.

Are there any more? No one knows. However, in 1992, Roger Heath-Brown published a paper in which he conjectured that every number not of the form $\pm 4 \pmod 9$ has not just one but infinitely many tricubic sums! If true, this means there are infinitely many more tricubic sums for $3$ in particular—but if so they involve rather large numbers. So rather than tackling $114$ (the current smallest number for which we don’t know a tricubic sum), the next big prize might be to find another tricubic sum for the humble $3$. Indeed, Andrew Booker says as much in his Numberphile interview.

1. We don’t actually need to invoke Fermat’s Last Theorem in full generality—the $n=3$ case of Fermat’s Last Theorem was first proved by Euler in 1770.

Posted in number theory | 1 Comment

## More sums of three cubes

About six months ago I wrote about the recent discovery that 33 can be written as the sum of three cubes. At that time, the only remaining number less than 100 whose status was still unknown was 42. And just recently that, too, has fallen: thanks to Andrew Booker and Andrew Sutherland, we now know that

$\displaystyle 42=(-80538738812075974)^3+80435758145817515^3+12602123297335631^3.$

Again, this was very difficult to find (it required 1.3 million hours of computer time!) but very easy to verify: you can plug the above numbers into a computer yourself and see that you get 42. In fact, I’ve set up some links here so you can try it yourself using either Wolfram Alpha or Python. (Sadly, most graphing calculators don’t have enough precision to represent these numbers exactly. But seriously, click one of the links above and try it out!)
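Here's what the Python version of the check looks like; Python integers have arbitrary precision built in, so there's no rounding to worry about:

```python
x = -80538738812075974
y = 80435758145817515
z = 12602123297335631

print(x**3 + y**3 + z**3)  # prints 42
```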

I still don’t understand the math behind the algorithm Booker and Sutherland used (I tried reading the paper but quickly got lost, and my sense is that I would probably have to spend at least several months doing background reading to be able to understand it). But this time around I’ve been learning more about the sums of three cubes problem itself. Today I’ll write a bit about which numbers actually have a hope of being written as a sum of three cubes, and in a future post I plan to write about numbers that have multiple representations as sums of cubes.

For the rest of this post I’m going to use the word tricubic to refer to numbers which can be written as a sum of three cubes. So, for example, we now know that $42$ is tricubic. I’ll also say tricubic sum to refer to three cubes that add up to a given number, for example, “$1^3 + 2^3 + 3^3$ is a tricubic sum for $36$”.

## Which numbers are tricubic?

Some numbers definitely aren’t tricubic: for example, 13. How did I know that? The key is to look at the remainders of cubes $\pmod 9$:

$\begin{array}{rcl}0^3 &=& 0 \\[0.5em] 1^3 &=& 1 \\[0.5em] 2^3 &=& 8 \equiv (-1) \pmod 9 \\[0.5em] 3^3 &=& 27 \equiv 0 \pmod 9 \\[0.5em] 4^3 &=& 64 \equiv 1 \pmod 9 \\[0.5em] 5^3 &=& 8 \equiv (-1) \pmod 9 \\[0.5em] 6^3 &=& 216 \equiv 0 \pmod 9 \\[0.5em] 7^3 &=& 343 \equiv 1 \pmod 9 \\[0.5em] 8^3 &=& 512 \equiv (-1) \pmod 9\end{array}$

Notice that in the above table we only ever get remainders of $-1$, $0$, or $1$. And since every integer is equivalent, $\pmod 9$, to one of the numbers from $0$ through $8$, this table actually tells us everything we need to know: every integer cube will be either a multiple of 9, one more than a multiple of 9, or one less. By adding up three remainders which are all $-1$, $0$, or $1$, you can get any remainder between $-3$ and $3$, inclusive. But that leaves out exactly two remainders you can’t get: $4$ and $5$. That’s how I knew 13 is not tricubic: $13 = 9 + 4$ is four more than a multiple of $9$.
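The table, and the conclusion about sums, can be reproduced in a few lines of Python:

```python
# Every cube has the same residue mod 9 as the cube of its residue mod 9,
# so checking x = 0..8 covers all integers.
cube_residues = {x**3 % 9 for x in range(9)}
assert cube_residues == {0, 1, 8}   # that is: 0, 1, and -1 mod 9

# Sums of three such residues can reach every residue except 4 and 5.
triple_sums = {(a + b + c) % 9 for a in cube_residues
                               for b in cube_residues
                               for c in cube_residues}
assert triple_sums == {0, 1, 2, 3, 6, 7, 8}
```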

OK, so now we know that any number either 4 or 5 more than a multiple of 9 is not tricubic. What about other numbers? Maybe if we look at remainders modulo some other divisor, say, $347$, we can rule out other numbers in a similar way? Actually, no! So far it seems like this simple restriction is the only restriction on tricubic numbers: the conjecture is that every number not of the form $9k + \{4,5\}$ is tricubic—but right now this is only a conjecture; no one has yet managed to prove it!

So, in general, numbers fall into three groups:

• Numbers of the form $9k + 4$ or $9k + 5$ which we know are not tricubic: $\{4, 5, 13, 14, 22, 23, 31, 32, \dots\}$.
• Numbers which we know are tricubic, because we know of at least one tricubic sum for them: $\{0, 1, 2, 3, 6, 7, 8, \dots\}$.
• Numbers which we think are probably tricubic, but we currently don’t actually know: $\{114, 165, 390, 579, 627, \dots\}$

## Where do we go from here?

Progress on this problem could theoretically come in a lot of different ways.

First, someone could prove that there are additional numbers which are not tricubic. But as far as I can tell, everyone thinks this is pretty unlikely.

Someone could find a way to write some additional numbers as sums of three cubes. For example, the smallest number whose tricubic status we don’t currently know is $114$; someone might find a way to write $114$ as a sum of three cubes. However, the amount of computer time necessary to search for these may get prohibitively large: as I mentioned earlier, finding a tricubic sum for $42$ required 1.3 million hours of computer time. Note, however, that finding $33$ and $42$ was possible specifically because Andrew Booker came up with a better algorithm than what had been used before. So perhaps someone will come up with an even better algorithm that makes it feasible to search for larger tricubic sums. At some point, however, people will lose interest: finding bigger and bigger tricubic sums just gets boring after a while.

Someone could prove that every number not of the form $9k + \{4,5\}$ is tricubic by finding an algorithm which takes a number $n$ as input and spits out a tricubic sum for $n$. But wait, you ask, don’t we already have that? Weren’t $33$ and $42$ found using an algorithm? Well, yes, but the search algorithm used to find tricubic sums for $33$ and $42$ was not guaranteed to find anything. You give it an upper limit for the size of the cubes involved, and it will search everything below that, but there is no guarantee that it will find what it is looking for.

You could easily modify the program to run as follows: given a number $n$ for which we wish to find a tricubic sum, start by searching for tricubic sums under a certain upper bound; if nothing is found, increase the upper bound and try again. If nothing is found the second time, increase the upper bound yet again, and so on. This is called a semi-decision procedure: if a tricubic sum exists for $n$, this program will eventually find it, but if no tricubic sum exists then it will simply run forever (in theory, that is; of course any real computer would run out of memory long before that). So this doesn’t count as a proof, because it isn’t guaranteed to find a tricubic sum for every number. An algorithmic proof would consist of an algorithm to find a tricubic sum for any suitable input number, along with a proof that the algorithm always returns a tricubic sum in a finite amount of time for any such input.
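Here is what such a semi-decision procedure looks like in miniature (a toy Python sketch; real searches like Booker's use far cleverer algorithms than this triple loop, and I've added an optional cutoff so the demo actually terminates):

```python
from itertools import product

def find_tricubic_sum(n, start_bound=10, step=10, max_bound=None):
    """Search for x, y, z with x^3 + y^3 + z^3 == n and |x|,|y|,|z| <= bound,
    growing the bound until a sum is found. Without max_bound this is a true
    semi-decision procedure: it runs forever if no tricubic sum exists."""
    bound = start_bound
    while max_bound is None or bound <= max_bound:
        rng = range(-bound, bound + 1)
        for x, y, z in product(rng, rng, rng):
            if x**3 + y**3 + z**3 == n:
                return (x, y, z)
        bound += step
    return None

print(find_tricubic_sum(29))                       # e.g. 1^3 + 1^3 + 3^3
print(find_tricubic_sum(13, max_bound=20))         # None: 13 is 4 mod 9
```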

I don’t actually find the above scenario very likely (though a dramatic example of this kind is the recent proof that every positive integer is a sum of three palindromes). I find it more likely that someone will prove that every number not of the form $9k + \{4,5\}$ is tricubic, but in a way that is non-effective. That is, we would know that every suitable number must be tricubic, but the proof would not actually give us a way to find a tricubic sum for any given $n$. (There is a very interesting distinction here between classical and constructive mathematics; maybe I’ll write more on this in a future post!)

Posted in number theory | 6 Comments

## A combinatorial proof: the story so far

In my last post I reintroduced this seemingly odd phenomenon:

Start with $n+1$ consecutive integers and raise them all to the $n$th power. Then repeatedly take pairwise differences (i.e. subtract the first from the second, and the second from the third, etc.) until you are left with only one number. That number will always be $n!$, that is, $n$ factorial.
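The phenomenon itself is easy to replicate; here's a quick Python sketch of the repeated-differences process:

```python
from math import factorial

def repeated_differences(n, k=0):
    """Start with k^n, (k+1)^n, ..., (k+n)^n and take successive pairwise
    differences until a single number remains."""
    row = [(k + i) ** n for i in range(n + 1)]
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
    return row[0]

# The result is n! regardless of where the run of consecutive integers starts.
for n in range(1, 8):
    for k in range(-3, 4):
        assert repeated_differences(n, k) == factorial(n)
```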

Simon Tatham left a comment on my previous post with a concise algebraic proof; perhaps I’ll write another post expanding on it. But for now I’m not interested in giving the shortest possible proof. My goal is to give a combinatorial proof, that is, a proof of the form “these two numbers must be equal because they are two different ways to count the same set of things”. Not because it is short or easy, but just because it is fun and involves some nice pictures!

So, where are we? In Differences of powers of consecutive integers, Part II, I showed that if we start out with the sequence of numbers $k^n, (k+1)^n, \dots, (k+n)^n$ and repeatedly take pairwise differences, we end up with a result that consists of adding and subtracting each $(k+i)^n$ a particular number of times. In particular, the number of times $(k+i)^n$ is added or subtracted is exactly the $i$th entry on the $n$th row of Pascal’s triangle, that is, $\binom{n}{i}$. Also, they alternate being added or subtracted. Putting this together into formal notation, we end up with

$\displaystyle \sum_{i=0}^n (-1)^{n-i} \binom{n}{i}(k+i)^n$

See the above-linked post for an explanation of how Pascal’s triangle enters into it. In any case, our goal is to show this sum is actually equal to $n!$.

In Combinatorial proofs I explained the idea of a combinatorial proof in general, and gave a simple example involving binomial coefficients.

Then, in Making our equation count, I explained how different parts of this expression can be interpreted as counting something:

• Factorial, $n!$, counts the number of permutations of $n$ things, or, from another point of view, the number of 1-1 matchings between two sets of $n$ things.

• A binomial coefficient $\binom{n}{i}$ is the number of ways to choose exactly $i$ out of $n$ things.

• Exponentiation, $a^b$, counts the number of functions from a set of size $b$ to a set of size $a$. For example, here are the $3^3 = 27$ different functions from a set of size 3 to another set of size 3.

• Finally, that $(-1)^{n-i}$ thing should look sort of familiar—any time you see an alternating sum like this, you should think of the Principle of Inclusion-Exclusion (PIE). More on this soon!
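As a tiny preview of PIE in action, here is the three-set version of the principle, checked by brute force on some small sets:

```python
# |A ∪ B ∪ C| = |A| + |B| + |C| - |A∩B| - |A∩C| - |B∩C| + |A∩B∩C|
A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {1, 4, 6}

lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
assert lhs == rhs  # both are 6
```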

Posted in arithmetic, combinatorics, proof | 1 Comment