Origami stellated icosahedron!

Continuing what I started in December, I finally finished making a stellated icosahedron out of 30 Sonobe units. Each Sonobe unit corresponds to an edge of the icosahedron, and interlocks with three others to give the whole thing a remarkable degree of rigidity, even though it’s made completely out of paper.

Here are the requisite photos, showing first fivefold symmetry:

…threefold symmetry:

…and twofold:

Unlike with the cube and octahedron I made out of Sonobe units, where I just put colors together more or less randomly (only ensuring that no two touching units were the same color), with this icosahedron I very carefully researched a color scheme with some really nice mathematical properties. I’ll explain that in my next post!

Posted in geometry, group theory | 3 Comments

Post without words #5, explained

If you stared for a while at the images in my previous post, you probably noticed some patterns, and maybe you even figured out some sort of rule or algorithm behind them. Commenter Yammatak expressed it as “You split it into 4 and put 2 copies of the original on the bottom with the colors inverted and 2 copies on the top but rotated 90 in opposite directions.” Like this:

Yammatak is right: iterating this process does generate exactly the images in my post. As I noted in my response to Yammatak’s comment, however, this is not how I made the images! I only noticed that they could be described using this simple rule after making them.

So, how did I make them? First, I started with the Prouhet-Thue-Morse sequence, which can be defined by

\begin{array}{rcl} T_0 & = & 0 \\ T_n & = & T_{n-1},\overline{T_{n-1}} \end{array}

where the comma denotes concatenation of sequences, and the overbar means to swap zero for one and vice versa. So,

\begin{array}{rcl} T_0 & = & 0 \\ T_1 & = & T_0,\overline{T_0} = 0,1 \\ T_2 & = & T_1,\overline{T_1} = 0,1,1,0 \\ T_3 & = & 0,1,1,0,1,0,0,1 \\ T_4 & = & 0,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0\end{array}

and so on. This is a really fascinating sequence that shows up all over the place—for more, see Matt Parker’s recent video, or, if you are more academically inclined, this paper by Allouche and Shallit.
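If you want to play with the sequence yourself, here is a minimal Python sketch of the recurrence above (purely illustrative; it is not the code I used to make the images below):

```python
def thue_morse(n):
    """Return T_n as a list of bits: T_0 = 0, and T_n is T_{n-1}
    followed by its bitwise complement."""
    t = [0]
    for _ in range(n):
        t = t + [1 - b for b in t]   # concatenate T_{n-1} with its complement
    return t

print(thue_morse(4))  # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```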

In any case, replace each 0 with a light blue square, and each 1 with a dark blue square:

Now, take the Hilbert space-filling curve:

and string out the Prouhet-Thue-Morse sequence along it, using light and dark blue squares in place of zeros and ones:

This works out really nicely because both the Prouhet-Thue-Morse sequence and the Hilbert curve naturally organize things in powers of 2. If you think about the recurrence for the Prouhet-Thue-Morse sequence, in terms of replicating and inverting, and the recurrence for the Hilbert curve, in terms of scaled and rotated copies, you can see why you end up with a simple rule like the one Yammatak explained. So in the end, I guess you could say this is a complicated way to describe a simple rule that generates a complicated image!
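In case you want to recreate images like these, here is a rough Python sketch of the whole construction (not necessarily the code behind the original pictures). It uses the standard iterative index-to-coordinate conversion for the Hilbert curve, together with the fact that the dth Prouhet-Thue-Morse bit is simply the parity of the number of 1 bits in the binary expansion of d:

```python
def d2xy(n, d):
    """Map index d (0 <= d < n*n) along the Hilbert curve to (x, y)
    coordinates in an n-by-n grid, where n is a power of two.
    (Standard iterative formulation of the Hilbert curve.)"""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                   # rotate/flip the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def thue_morse_bit(d):
    """The dth Prouhet-Thue-Morse bit: parity of the number of 1s in d."""
    return bin(d).count("1") % 2

n = 16                                # side length, a power of two
grid = [[0] * n for _ in range(n)]
for d in range(n * n):                # string the sequence out along the curve
    x, y = d2xy(n, d)
    grid[y][x] = thue_morse_bit(d)

for row in grid:                      # 0 = light square, 1 = dark square
    print("".join(".#"[b] for b in row))
```

Swapping the text output for a real drawing library (and possibly flipping or rotating the grid to taste) gives pictures like the ones in the previous post.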

Posted in pattern, pictures, posts without words, sequences, solutions | 4 Comments

Post without words #5

Posted in challenges, pattern, pictures, posts without words | 4 Comments

The chocolate bar game: variants

Remember the chocolate bar game? Today I want to talk about some variants. Recall that the losing positions for the chocolate bar game can be visualized like this:

What if we specify that you win if you end with a 1\times 1 piece of chocolate, instead of losing? Well, that game can be visualized like this:

As you can see, it actually has a lot of the same structure, but it’s as though some rows and columns got shuffled around. Can you explain this?

And here’s what happens if we keep the rules of the original chocolate bar game, but arbitrarily declare that 1 \times 2 is also a losing position (though not 2 \times 1). As I suspected, overall we can still see the same sort of structure of radiating diagonal lines, but with some extra complication thrown in. The result is of course not symmetric.

Zooming out, we can see sequences of dots lying above lines with corresponding dots missing—but not at a fixed distance: the dots seem to come in groups of three that are the same distance from the line and evenly spaced from each other; then the next three are farther from the line and spaced farther apart, and so on. If this pattern really does continue, it is very surprising to me; I have no idea where the number 3 would come from!

And here’s what we get if we instead arbitrarily declare 13 \times 15 to be a losing position. It looks normal up to a point, but then the extra losing position introduces a disturbance which ripples outwards.

Here is another view of the same game (with losing position 13 \times 15 added), zoomed out twice as far. It looks like it settles into a regular pattern, with the same overall shape as the original game but with some “fuzz”. The fuzz seems to follow some very interesting patterns—zooming even farther out is probably necessary to get a better sense for them!

Interestingly, if we add 13 \times 15 along with its mirror image 15 \times 13, the disturbance becomes more modest.

In this zoomed-out view of the same game, you can see that there will be periodic disturbances, occurring at exponentially increasing distances.

Unlike for the original chocolate bar game, I have no idea what the right strategy is for any of these variants! But I think at least some of them—particularly the reversed form where 1 \times 1 is winning rather than losing—probably have a nice mathematical characterization. I would also really love to see an explanation of why dots in groups of 3 show up when we introduce 1 \times 2 as an extra losing position. Thoughts, comments, etc. are welcome! I’m also happy to generate visualizations for other variants you might want to see.
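If you would like to experiment on your own, here is a small Python sketch of one way to compute losing positions by brute force (not necessarily the code behind the images above). It assumes the move rule discussed in the other chocolate bar game posts—a move shrinks one dimension to any strictly smaller value that is still at least half of the original, and whoever is left with the 1 \times 1 square loses—and it lets you declare extra positions losing by fiat, as in the variants above:

```python
def losing_positions(max_size, extra_losses=frozenset()):
    """Compute losing positions (x, y) with 1 <= x, y <= max_size.

    Assumed move rule: replace one dimension c by any c' with
    ceil(c/2) <= c' < c.  A position is losing if it is declared an
    extra loss, or if every legal move leads to a winning position
    (so (1, 1), which has no moves at all, is losing)."""
    losing = set()
    for x in range(1, max_size + 1):
        for y in range(1, max_size + 1):
            if (x, y) in extra_losses:
                losing.add((x, y))
                continue
            moves = [(x2, y) for x2 in range((x + 1) // 2, x)] + \
                    [(x, y2) for y2 in range((y + 1) // 2, y)]
            # Every move leads to a position already processed (smaller x or y).
            if all(m not in losing for m in moves):
                losing.add((x, y))
    return losing

# For example, the variant where 1x2 is declared an extra losing position:
print(sorted(losing_positions(32, extra_losses={(1, 2)}))[:10])
```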

Posted in games, pattern, pictures | Leave a comment

M74207281 is prime!

I’m a bit late to the party, but the Great Internet Mersenne Prime Search (GIMPS) has recently announced a newly verified prime number, M_{74207281} = 2^{74207281} - 1, with a whopping 22,338,618 decimal digits! This is now the largest known prime number (though of course there are infinitely many larger primes). For some perspective, the King James Version of the Bible has just over 3 million letters, only about one-seventh as many as the number of digits in the new prime. For more perspective on how big this number is, check out this Numberphile video, in which Matt Parker actually gets the whole thing printed out… in 3 volumes… with a really tiny font!

Apparently a computer found the prime on my son’s fourth birthday, September 17, 2015, but the automatic notification system failed and no human noticed it until almost 4 months later, on January 7! This raises some interesting “if a tree falls in a forest”-type questions, but in any case, the tradition is that the official date of discovery of a new prime is when a human first sees it.

Head over to The Aperiodical for more info, including links to some fun videos. You can also read the official press release.

I’ve previously reported on a few other announcements of newly discovered Mersenne primes by GIMPS: here, here, here, and here, though it seems I also missed a couple. I also wrote a 30-part series in November, beginning here, on the math behind the Lucas-Lehmer test which is used to find prime Mersenne numbers.
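In case you are curious what the Lucas-Lehmer test actually looks like, here is a minimal Python sketch (real Mersenne-hunting software is enormously more optimized, of course):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for an odd prime exponent p: M_p = 2^p - 1 is
    prime if and only if s_{p-2} == 0 (mod M_p), where s_0 = 4 and
    s_{k+1} = s_k^2 - 2."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Sanity check on small exponents (2^11 - 1 = 2047 = 23 * 89 is composite):
print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19]
```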

Posted in computation | Leave a comment

The chocolate bar game: losing positions proved

In my last post I claimed that the losing positions for the chocolate bar game are precisely those of the form (x_2, x111\dots 1_2) (or the reverse), that is, in binary, positions where one coordinate is the same as the other with any number of 1 bits appended. So things like (1010, 1010), (1010, 10101), (1010, 101011111), (10101111, 1010), and so on. Let’s prove it!

First, some notation: let’s denote a length-n string of 1 bits by 1^n. (There won’t be any exponentiation in this post so there shouldn’t be any confusion.) Note that 1^0 denotes the empty sequence of bits. Also, if x and y are sequences of bits, we will write xy to denote their concatenation, that is, the sequence of bits x followed by the sequence of bits y. For example, if x = 10010 and y = 1101, then xy = 100101101. (Again, we won’t be doing any multiplication so there should be no confusion.)

Given this notation, we can restate the characterization of losing positions more concisely: the claim is that losing positions are exactly those of the form (x,x1^n) or (x1^n, x) where x is any sequence of bits and n \geq 0 is any natural number. Let’s call positions of this form 1-padded.

We have two things to prove: first, we have to prove that from every position which is not 1-padded, there exists some valid move to a 1-padded position. And dually, we also have to prove that from every 1-padded position, all possible moves lead to a non-1-padded position—that is, there are no valid moves from one 1-padded position to another.[1]

For the first part, let (x,y) be a non-1-padded position. Assume that x > y. (Note we can’t have x = y, since that would make the position 1-padded with n = 0; and it’s enough to consider the case x > y, since by symmetry the same argument applies to the other case, where x < y.) Let n be the unique natural number such that y1^n < x < y1^{n+1}. (These inequalities are both strict since the position is not 1-padded.) Then I claim that (x,y) \to (y1^n, y) is a legal chocolate bar game move. Since x < y1^{n+1}, we have x \leq y1^{n+1} - 1 = y1^n0 (subtracting one from a binary number ending in 1 just flips that final bit to 0). But y1^n0 is exactly twice y1^n (adding a zero to the end of a binary number multiplies it by two, just as adding a zero to the end of a decimal number multiplies it by ten). So y1^n is at least half of x, and hence moving to it is valid.

Now we must show that from any 1-padded position, the only valid moves are to non-1-padded positions. Consider the position (x1^n, x). (Again, this suffices since the same argument applies symmetrically to the case (x,x1^n).) There are two types of moves one could make from this position: one could decrease x1^n, or one could decrease x. First, consider decreasing x1^n, and suppose n \geq 1. Since x stays the same, in order to move to another 1-padded position, we must decrease x1^n into something of the form x1^{n'} for some n' < n. That means, at the very least, truncating a 1 from the end of x1^n. But this is not a legal move: a binary number ending in 1 is odd, and truncating that final 1 leaves half the original value rounded down, which is strictly less than half.

So now consider decreasing x (note the following argument works even when n = 0). Decreasing x to some x' < x leaves the position (x1^n, x'), so to reach a 1-padded position we need x1^n = x'1^{n'} for some n'. That means x' has to be a proper prefix of x, and also x has to end in 1. But then to decrease x to x' we have to at least truncate a 1 from the end of x, which we have just seen is not allowed.

Note, finally, that the ultimate losing position (1,1) is trivially 1-padded. So we have shown that if you’re in a 1-padded position, you are doomed: every possible move you can make leaves a position which is not 1-padded, from which we also showed that your opponent can make a move that leaves you back in another 1-padded position. Eventually, you will be left with (1,1) and you will lose.

So, to play this game perfectly, you just have to be able to convert numbers into binary in your head. If the position isn’t 1-padded, you can always play to decrease the bigger number so it is equal to the smaller number padded by a bunch of 1s. If you have to play in a 1-padded position for some reason, just play a small move and hope your opponent hasn’t read this blog post.
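Here is a small Python sketch of that strategy, just for illustration (the function names are placeholders of my own):

```python
def is_one_padded(x, y):
    """Is (x, y) 1-padded?  That is, is the binary expansion of the larger
    coordinate equal to that of the smaller with zero or more 1 bits appended?"""
    a, b = sorted((x, y))
    sa, sb = bin(a)[2:], bin(b)[2:]
    return sb.startswith(sa) and set(sb[len(sa):]) <= {"1"}

def winning_move(x, y):
    """From a non-1-padded position, move the larger coordinate down to the
    smaller one with as many 1 bits appended as possible while staying below
    the larger coordinate."""
    a, b = min(x, y), max(x, y)
    target = a
    while 2 * target + 1 < b:         # appending a 1 bit sends t to 2t + 1
        target = 2 * target + 1
    return (target, a) if x > y else (a, target)

print(is_one_padded(0b101, 0b10111))  # True
print(winning_move(25, 5))            # (23, 5): 23 is 10111, i.e. 101 with two 1s appended
```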


  1. Technically, we have to prove these at the same time, by simultaneous induction on, say, the sum x+y, but I won’t bother with the picky details.

Posted in games, pattern, pictures, proof, solutions | Leave a comment

The chocolate bar game: losing positions characterized

The evident pattern from my last post is that losing positions in the chocolate bar game appear to be characterized by those (x,y) where the binary expansion of y is the same as the binary expansion of x with any number (including zero) of 1 bits appended to the end (or vice versa): for example, (101, 101), (101, 1011), (101, 10111), and so on.

Appending k ones to the end of the binary expansion of x corresponds to first multiplying x by 2^k (which shifts it left by k places, that is, adds k zeros on the end), and then adding 2^k - 1, which consists of k ones in binary. In other words, losing positions where y \geq x correspond to integer points satisfying

y = 2^k x + 2^k - 1

This is the equation of a line with slope 2^k and y-intercept 2^k - 1. This explains the visual pattern of radiating lines, where the slope of each line is double that of the previous one. It also explains why the lines don’t go through the origin (except for the main diagonal, which corresponds to k = 0).
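Here is a quick Python check of that formula, purely for illustration:

```python
x, k = 0b101, 3                            # x = 5; append three 1 bits
appended = int(bin(x)[2:] + "1" * k, 2)    # 101111 in binary, i.e. 47
print(appended == 2**k * x + 2**k - 1)     # True
```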

If we move 1 to the other side and factor out 2^k, we can also rearrange the above equation as

y + 1 = 2^k(x + 1)

which looks much cleaner; however, I find it much harder to work with. (Though one interesting thing it does show is that although the lines don’t go through the origin, strangely enough they do all go through the point (-1,-1).) In my next post, I will prove that these really are the losing positions, and I’ll stick to the characterization in terms of binary expansions.

Posted in challenges, games, pattern, pictures | 1 Comment