If you stared for a while at the images in my previous post, you probably noticed some patterns, and maybe you even figured out some sort of rule or algorithm behind them. Commenter Yammatak expressed it as “You split it into 4 and put 2 copies of the original on the bottom with the colors inverted and 2 copies on the top but rotated 90 in opposite directions.” Like this:
Yammatak is right: iterating this process does generate exactly the images in my post. As I noted in my response to Yammatak’s comment, however, this is not how I made the images! I only noticed that they could be described using this simple rule after making them.
So, how did I make them? First, I started with the Prouhet-Thue-Morse sequence, which can be defined by

$$T_0 = 0, \qquad T_{n+1} = T_n, \overline{T_n}$$

where the comma denotes concatenation of sequences, and the overbar means to swap zero for one and vice versa. So,

$$T_1 = 0,1 \qquad T_2 = 01,10 \qquad T_3 = 0110,1001$$
and so on. This is a really fascinating sequence that shows up all over the place—for more, see Matt Parker’s recent video, or, if you are more academically inclined, this paper by Allouche and Shallit.
In any case, replace each zero with a light blue square, and each 1 with a dark blue square:
Now, take the Hilbert space-filling curve:
and string out the Prouhet-Thue-Morse sequence along it, using light and dark blue squares in place of zeros and ones:
This works out really nicely because both the Prouhet-Thue-Morse sequence and the Hilbert curve naturally organize things in powers of 2. If you think about the recurrence for the Prouhet-Thue-Morse sequence, in terms of replicating and inverting, and the recurrence for the Hilbert curve, in terms of scaled and rotated copies, you can see why you end up with a simple rule like the one Yammatak explained. So in the end, I guess you could say this is a complicated way to describe a simple rule that generates a complicated image!
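The whole construction can be sketched as follows (a Python sketch, not the author's actual code; `d2xy` is the standard index-to-coordinate conversion for the Hilbert curve, and its orientation may differ from the one used in the post's images):

```python
def d2xy(n, d):
    """Convert index d along a Hilbert curve filling an n-by-n grid
    (n a power of two) into (x, y) coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/reflect this quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def image(n):
    """n-by-n grid where cell (x, y) gets the Thue-Morse term (parity of
    the 1 bits) at that cell's index along the Hilbert curve."""
    img = [[0] * n for _ in range(n)]
    for d in range(n * n):
        x, y = d2xy(n, d)
        img[y][x] = bin(d).count("1") % 2
    return img
```

Since the curve visits each cell exactly once, `image(n)` colors every square, and both halves of the construction recurse in powers of 2 as described above.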
What if we specify that you win if you end with a piece of chocolate, instead of losing? Well, that game can be visualized like this:
As you can see, it actually has a lot of the same structure, but it’s as though some rows and columns got shuffled around. Can you explain this?
And here’s what happens if we keep the rules of the original chocolate bar game, but arbitrarily declare that is also a losing position (though not ). As I suspected, overall we can still see the same sort of structure of radiating diagonal lines, but with some extra complication thrown in. The result is of course not symmetric.
Zooming out, we can see sequences of dots lying above lines with corresponding dots missing—but not at a fixed distance; the dots seem to come in groups of three that are the same distance away from the line, and evenly spaced from each other; and then the next three are farther away from the line and spaced farther apart, and so on. If this pattern is real, it is actually very surprising to me; I have no idea where the number $3$ would be coming from!
And here’s what we get if we instead arbitrarily declare to be a losing position. It looks normal up to a point, but then the extra losing position introduces a disturbance which ripples outwards.
Here is another view of the same game (with losing position added), zoomed out twice as far. It looks like it settles into a regular pattern, with the same overall shape as the original game but with some “fuzz”. The fuzz seems to follow some very interesting patterns—zooming even farther out is probably necessary to get a better sense for them!
Interestingly, if we add along with its mirror image , the disturbance becomes more modest.
In this zoomed-out view of the same game, you can see that there will be periodic disturbances, occurring at exponentially increasing distances.
Unlike the chocolate bar game, I have no idea of the right strategy for any of these variants! But I think at least some of them—particularly the reversed form, where ending with the last piece of chocolate is winning rather than losing—probably have a nice mathematical characterization. I would also really love to see an explanation of why dots in groups of $3$ show up when we introduce an extra losing position. Thoughts, comments, etc. welcome! I’m also happy to generate visualizations for other variants you might want to see.
Apparently a computer found the prime $2^{74207281} - 1$ on my son’s fourth birthday, September 17, 2015, but the automatic notification system failed and no human noticed it until almost 4 months later, on January 7! This raises some interesting “if a tree falls in a forest”-type questions, but in any case, the tradition is that the official date of discovery of a new prime is when a human first sees it.
Head over to The Aperiodical for more info, including links to some fun videos. You can also read the official press release.
I’ve previously reported on a few other announcements of newly discovered Mersenne primes by GIMPS: here, here, here, and here, though it seems I also missed a couple. I also wrote a 30-part series in November, beginning here, on the math behind the Lucas-Lehmer test which is used to find prime Mersenne numbers.
First, some notation: let’s denote a length-$n$ string of copies of the bit $b$ by $b^n$. (There won’t be any exponentiation in this post, so there shouldn’t be any confusion.) Note that $b^0$ denotes the empty sequence of bits. Also, if $x$ and $y$ are sequences of bits, we will write $xy$ to denote their concatenation, that is, the sequence of bits $x$ followed by the sequence of bits $y$. For example, if $x = 10$ and $y = 110$, then $xy = 10110$. (Again, we won’t be doing any multiplication, so there should be no confusion.)
Given this notation, we can restate the characterization of losing positions more concisely: the claim is that losing positions are exactly those of the form $(s, s1^n)$ or $(s1^n, s)$, where $s$ is any sequence of bits and $n$ is any natural number. Let’s call positions of this form $1$-padded.
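Before the proof, the claim is easy to sanity-check by brute force (a Python sketch, not from the original post; `losing` solves the game tree directly from the rules, and `one_padded` is my encoding of the claimed characterization):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def losing(x, y):
    """A position is losing iff every legal move (removing up to half of
    either coordinate) leads to a winning position."""
    shrink_x = [(x - k, y) for k in range(1, x // 2 + 1)]
    shrink_y = [(x, y - k) for k in range(1, y // 2 + 1)]
    return not any(losing(*p) for p in shrink_x + shrink_y)

def one_padded(x, y):
    """The larger coordinate's binary expansion is the smaller's
    followed by some number of 1 bits."""
    a, b = bin(min(x, y))[2:], bin(max(x, y))[2:]
    return b == a + "1" * (len(b) - len(a))
```

Checking that `losing` and `one_padded` agree on, say, all positions up to $32 \times 32$ takes a fraction of a second.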
We have two things to prove: first, we have to prove that from every position which is not $1$-padded, there exists some valid move to a $1$-padded position. And dually, we also have to prove that from every $1$-padded position, all possible moves lead to a non-$1$-padded position—that is, there are no valid moves from one $1$-padded position to another.^{1}
For the first part, let $(x, y)$ be a non-$1$-padded position. Assume that $x < y$. (Note we can’t have $x = y$, since that would make the position $1$-padded, with $n = 0$; and it’s enough to consider the case $x < y$ since the same argument will apply to the other case where $y < x$, by symmetry.) Let $n$ be the unique natural number such that $x1^n < y < x1^{n+1}$. (These inequalities are both strict since the position is not $1$-padded.) Then I claim that moving from $(x, y)$ to $(x, x1^n)$ is a legal chocolate bar game move. $y$ is at most one less than $x1^{n+1}$, that is, $y \leq x1^n0$. But $x1^n0$ is exactly twice $x1^n$ (adding a zero to the end of a binary number multiplies it by two, just as adding a zero to the end of a decimal number multiplies it by ten). So $x1^n$ is equal to or greater than half of $y$, and hence a valid move.
Now we must show that from any $1$-padded position, the only valid moves are to non-$1$-padded positions. Consider the position $(x, x1^n)$. (Again, this suffices since the same argument will apply symmetrically to the case $(x1^n, x)$.) There are two types of moves one could make from this position: one could decrease $x$, or one could decrease $x1^n$. First, consider decreasing $x1^n$, and suppose $n \geq 1$. Since $x$ stays the same, in order to move to another $1$-padded position, we must decrease $x1^n$ into something of the form $x1^j$ for some $j < n$. That means, at the very least, truncating a $1$ from the end of $x1^n$—but this is not a legal move, since truncating a $1$ from the end of a binary number decreases it by more than half (in particular the result is half rounded down).
So now consider decreasing $x$ (note the following argument works even when $n = 0$). In order to reach a $1$-padded position by decreasing $x$, we have to end up with something of the form $(x', x'1^j)$ where $x' < x$. But if we decrease $x$ we cannot change $x1^n$, so in fact $x'1^j = x1^n$. That means $x'$ has to be a proper prefix of $x$, and $x$ also has to end in $1$s. But then to decrease $x$ to $x'$ we have to at least truncate a $1$ from the end of $x$, which we know is not allowed.
Note, finally, that the ultimate losing position $(1, 1)$ is trivially $1$-padded. So we have shown that if you’re in a $1$-padded position, you are doomed: every possible move you can make leaves a position which is not $1$-padded, from which we also showed that your opponent can make a move that leaves you back in another $1$-padded position. Eventually, you will be left with $(1, 1)$ and you will lose.
So, to play this game perfectly, you just have to be able to convert numbers into binary in your head. If the position isn’t $1$-padded, you can always play to decrease the bigger number so it is equal to the smaller number padded by a bunch of $1$s. If you have to play in a $1$-padded position for some reason, just play a small move and hope your opponent hasn’t read this blog post.
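If you would rather not do the binary conversion in your head, the strategy can be sketched in code (a Python sketch; `perfect_move` is my name, not from the original post). It uses the fact that appending a $1$ bit to $s$ gives $2s + 1$:

```python
def perfect_move(x, y):
    """Shrink the larger coordinate down to the smaller coordinate padded
    with 1 bits, as in the proof above. Returns the resulting 1-padded
    position, or None if (x, y) is already 1-padded (you're doomed)."""
    lo, hi = min(x, y), max(x, y)
    t = lo
    while 2 * t + 1 < hi:       # append 1 bits: s -> s1 is t -> 2t + 1
        t = 2 * t + 1
    if t == hi or 2 * t + 1 == hi:
        return None             # hi is already lo padded with 1s
    return (lo, t) if x <= y else (t, lo)
```

For example, from $(4, 20)$ the perfect move is to $(4, 19)$, which is legal since $19$ is at least half of $20$.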
Technically, we have to prove these at the same time, by simultaneous induction on, say, the sum $x + y$, but I won’t bother with the picky details.↩
Appending $n$ ones to the end of the binary expansion of $x$ corresponds to first multiplying $x$ by $2^n$ (which shifts it left by $n$ places, that is, adds $n$ zeros on the end), and then adding $2^n - 1$, which consists of $n$ ones in binary. In other words, losing positions where $y > x$ correspond to integer points satisfying

$$y = 2^n x + 2^n - 1.$$
This is the equation of a line with slope $2^n$ and with a $y$-intercept of $2^n - 1$. This explains the visual pattern of radiating lines, where the slope of each line is double the previous. It also explains how the lines don’t go through the origin (except for the main diagonal).
If we move the $-1$ to the other side and factor out $2^n$, we can also rearrange the above equation as

$$y + 1 = 2^n(x + 1),$$

which looks much cleaner; however, I find it much harder to work with. (Though one interesting thing it does show is that although the lines don’t go through the origin, strangely enough they do all go through the point $(-1, -1)$.) In my next post, I will prove that these really are the losing positions, and I’ll stick to the characterization in terms of binary expansions.
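Both forms of the equation are easy to confirm numerically (a Python sketch; `append_ones` is a name I made up):

```python
def append_ones(x, n):
    """Append n ones to the binary expansion of x."""
    return (x << n) | ((1 << n) - 1)

# e.g. x = 2 (binary 10) gives 5, 11, 23, ... (binary 101, 1011, 10111, ...)
```

A quick loop checks that `append_ones(x, n)` equals both $2^n x + 2^n - 1$ and $2^n(x + 1) - 1$ for many values of $x$ and $n$.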
Here’s a list of some losing positions on or above the main diagonal (dark blue squares in the above picture), ordered by $x$-coordinate, along with their binary representations. Since the game is symmetric, if $(x, y)$ is a losing position then so is $(y, x)$. What patterns do you notice? Can you connect them to the above visualization?
x | y | x (binary) | y (binary) |
---|---|---|---|
1 | 1 | 1 | 1 |
1 | 3 | 1 | 11 |
1 | 7 | 1 | 111 |
1 | 15 | 1 | 1111 |
1 | 31 | 1 | 11111 |
2 | 2 | 10 | 10 |
2 | 5 | 10 | 101 |
2 | 11 | 10 | 1011 |
2 | 23 | 10 | 10111 |
2 | 47 | 10 | 101111 |
3 | 3 | 11 | 11 |
3 | 7 | 11 | 111 |
3 | 15 | 11 | 1111 |
3 | 31 | 11 | 11111 |
4 | 4 | 100 | 100 |
4 | 9 | 100 | 1001 |
4 | 19 | 100 | 10011 |
4 | 39 | 100 | 100111 |
5 | 5 | 101 | 101 |
5 | 11 | 101 | 1011 |
5 | 23 | 101 | 10111 |
6 | 6 | 110 | 110 |
6 | 13 | 110 | 1101 |
6 | 27 | 110 | 11011 |
7 | 7 | 111 | 111 |
7 | 15 | 111 | 1111 |
7 | 31 | 111 | 11111 |
8 | 8 | 1000 | 1000 |
8 | 17 | 1000 | 10001 |
8 | 35 | 1000 | 100011 |
9 | 9 | 1001 | 1001 |
9 | 19 | 1001 | 10011 |
9 | 39 | 1001 | 100111 |
10 | 10 | 1010 | 1010 |
10 | 21 | 1010 | 10101 |
11 | 11 | 1011 | 1011 |
11 | 23 | 1011 | 10111 |
12 | 12 | 1100 | 1100 |
12 | 25 | 1100 | 11001 |
13 | 13 | 1101 | 1101 |
13 | 27 | 1101 | 11011 |
14 | 14 | 1110 | 1110 |
14 | 29 | 1110 | 11101 |
15 | 15 | 1111 | 1111 |
15 | 31 | 1111 | 11111 |
16 | 16 | 10000 | 10000 |
16 | 33 | 10000 | 100001 |
17 | 17 | 10001 | 10001 |
18 | 18 | 10010 | 10010 |
19 | 19 | 10011 | 10011 |
20 | 20 | 10100 | 10100 |
21 | 21 | 10101 | 10101 |
22 | 22 | 10110 | 10110 |
23 | 23 | 10111 | 10111 |
24 | 24 | 11000 | 11000 |
25 | 25 | 11001 | 11001 |
Two players take turns. On a player’s turn, she must break the chocolate bar along any one of the horizontal or vertical lines, and eat the smaller piece (eating the bigger piece would be very rude). [Edited to add: if the pieces are equal, they may eat either piece.] The player who is left with a piece of chocolate, and hence cannot make another move, loses the game. (At least they get to eat the last piece of chocolate.) For example, given the above bar of chocolate, the first player has eight possible moves: she could break it along any one of the 5 vertical lines, or along any of the 3 horizontal lines. However, since she always has to eat the smaller piece, some of these moves are really the same. For example, it does not make a difference whether she breaks off and eats the four leftmost squares or the four rightmost squares; in either case her opponent will be left with a bar of chocolate.
Here is an equivalent formulation of the same game: there is a pet store with dogs and cats. Players alternate turns. On a player’s turn, she must decide to buy either some cats or some dogs; once she makes her choice she may buy any number up to, but not exceeding, half the pet store’s inventory of that animal. For example, if the store has 3 cats and 6 dogs, then a player may buy 1 cat, or they may buy 1, 2, or 3 dogs. The loser is whoever’s turn it is when the pet store has only one cat and one dog remaining (since then they are not allowed to buy any).
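The move rule in both formulations can be sketched as follows (a Python sketch; the function name is mine):

```python
def moves(x, y):
    """Positions reachable in one move from (x, y): remove up to half of
    either coordinate (eat up to half the bar, or buy up to half the
    store's stock of one kind of animal)."""
    return [(x - k, y) for k in range(1, x // 2 + 1)] + \
           [(x, y - k) for k in range(1, y // 2 + 1)]
```

With 3 cats and 6 dogs, `moves(3, 6)` yields `[(2, 6), (3, 5), (3, 4), (3, 3)]`: buy one cat, or one, two, or three dogs. And `moves(1, 1)` is empty, so the player facing that position loses.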
First, explain why these games are the same.
I actually wrote about this a long time ago (I’m finally getting back around to following up!). In that post I explained how we can visualize the game like this:
(If some of the grid lines appear heavier than others, pay no attention; it is just a rendering artifact.) The square at position $(x, y)$ (the bottom left is $(1, 1)$) represents the game state with an $x \times y$ rectangle, or a store with $x$ cats and $y$ dogs. A square is dark blue if it is a losing position, that is, whoever’s turn it is in this position is going to lose (assuming both players play optimally; practically speaking, of course, a player in a losing position could still win if their opponent makes a bad move). Losing positions are those from which any valid move leads to a winning position (that is, you lose if no matter what you do, your opponent will win). The light blue squares are winning positions: a position is winning if there is at least one valid move to a losing position (that is, you win if there is a move you can make which forces your opponent to lose).
The losing positions in this visualization of the chocolate bar game appear to be arranged in lines whose slopes are powers of two (with positive or negative exponents). For example, the central diagonal has a slope of $1$; the next line down has a slope of $1/2$; the next, $1/4$; and so on.
Now, can you explain why the picture looks like this?
Specifically:
Can you come up with a relatively simple way to characterize which positions are losing positions and which are winning positions? (Hint: think about the positions in binary.)
Use your characterization to devise a strategy for winning the game. Amaze all your friends.
Can you prove it? That is, prove that from any losing position (according to your characterization above) the only valid moves are to winning positions, and that from any winning position there always exists at least one valid move to a losing position.
Apparently this problem comes from the 2005 International Olympiad in Informatics. There is a full solution on that site, but it is much too complicated! Of course I will reveal my own solutions in some future posts. Until then, happy solving. Feel free to post comments, questions, partial or full solutions, etc., in the comments (so conversely, don’t look at the comments if you don’t want any spoilers!).
In a comment on my previous post, blasepascal2014 gave a fairly simple algorithm for reading all the sides of a stack of triple-sided paper, though it requires making some marks on the paper, and requires you to do something different when you encounter a mark again. It does not, of course, go through the sides in order, but that’s fine. In reflecting on that solution, I realized that you can do something similar but even simpler. Here is my algorithm: read the top side of the top sheet; then flip the top sheet over and move it to the bottom of the stack.
That’s the entire algorithm! You just do the same operation every time. If you start with the stack of sheets $s_1, s_2, \dots, s_n$ (with $s_1$ on top), then after doing this once you have $s_2, \dots, s_n$ followed by the flipped $s_1$, after doing it twice you have $s_3, \dots, s_n$ followed by the flipped $s_1$ and $s_2$, and so on. After $n$ rounds, $s_1$ is back on top but all the sheets are flipped, and you have read the front sides of all the sheets. After another $n$ rounds, you have read all the flip sides, and the sheets are now all anti-flipped (that is, double-flipped); after yet another $n$ rounds you have read all the sides and the stack is back in its original state (ready for someone else to read it, I suppose).
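These claims are easy to check by simulation (a Python sketch; the encoding of sheets as (index, side) pairs with sides numbered 0, 1, 2 is mine):

```python
def read_stack(n):
    """Repeatedly read the top side, then flip the top sheet and move it
    to the bottom. Returns the sides seen and the final stack."""
    stack = [(i, 0) for i in range(n)]         # side 0 of every sheet up
    seen = []
    for _ in range(3 * n):
        sheet, side = stack.pop(0)
        seen.append((sheet, side))             # read the top side
        stack.append((sheet, (side + 1) % 3))  # flip it, move to bottom
    return seen, stack
```

After $3n$ rounds, all $3n$ sides have appeared on top (fronts first, in order, then flip sides, then the rest) and the stack is restored.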
At this point I had two additional insights:
Concretely, here is what I mean. Normally, we print things double-sided like this:
This is natural for a number of reasons: the page numbers progress as you progress physically through the paper; it’s the way you would print pages for a book; and so on. But this is not a book, it is a stack of loose paper! Here’s my alternative proposal, shown for a stack of 4 sheets with eight pages:
When you are finished reading page 1, you flip the sheet over (so now page 5 is on the front) and put it on the bottom of the stack. Now the stack looks like this:
Notice how page 5 has taken its place behind page 4. After you read page 2, you do the same operation a second time, resulting in
Doing the operation 8 times in a row, you read all the pages in order, and the stack ends up restored to its original position. Sweet!
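The page ordering this requires is simple to compute (a Python sketch, not the actual script; the function name is mine): page $i$ goes on the front of sheet $i$ and page $n/2 + i$ on its back, so the pages must be interleaved before duplex printing.

```python
def interleaved_pages(n):
    """Order in which to send n pages (n even) to a duplex printer so
    that sheet i carries page i on the front and page n//2 + i on the back."""
    half = n // 2
    order = []
    for i in range(1, half + 1):
        order += [i, half + i]  # front of sheet i, then its back
    return order

# e.g. for 8 pages the order is 1 5 2 6 3 7 4 8
```

So `interleaved_pages(8)` gives `[1, 5, 2, 6, 3, 7, 4, 8]`: sheet 1 has pages 1 and 5, sheet 2 has pages 2 and 6, and so on.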
What if there are an odd number of pages? That’s fine, just leave the last side blank, like this:
This is not just silly theoretical nonsense, I really think this is a superior way to print and read double-sided documents.
There are downsides, though. One is that it’s hard to jump ahead in the document by more than one page at a time. With the traditional printing method, you can jump ahead just by moving a whole bunch of pages to the bottom of the stack all at once. With my method, you have to actually move the pages one at a time.
So how does one actually print a document like this? I’m glad you asked! I know of two ways, one low-tech and one high-tech.
The low-tech way is this: first, print the first half (rounded up) of the document single-sided. Then take the resulting stack of paper and put it back in the printer, with page 1 facing up and the top towards you. Then print the second half of the document single-sided. It just so happens that this works out nicely with most printers, so that you don’t need to reverse all the sheets before putting them back in the printer. This method is a little involved, but not too hard, and you can do it using tools you already have.
The high-tech way is to reorder the pages of your PDF before printing. Using the pdftk
tool, I wrote a little unix script which does this: it takes a PDF as input, and outputs a new PDF with the pages rearranged in the right order. (For lack of a better term the script is called spliff
. I’m open to other suggestions.) Then all you have to do is print the resulting PDF double-sided. I actually tried it and it works great. It felt really magical holding a physical stack of 6 sheets of paper, doing the same operation 12 times in a row, and progressing through all 12 pages in order!
Feel free to use my script (besides pdftk
it requires zsh
), or write your own! Also, if you use this method of printing/reading double-sided, I’d love to hear about your experience, ideas for improvement, or alternatives!
So here is a challenge for you. Can you come up with a good algorithm to read a stack of triple-sided sheets of paper? You may not be familiar with triple-sided sheets of paper, so here’s how they work: they stack nicely, just like regular sheets of paper, but you have to flip one over three times before you get back to the original side. In particular, there are two different operations you can do: call them flip (say, left-right) and anti-flip (right-left). On two-sided paper, flip and anti-flip are the same. On three-sided sheets, two flips are the same as an anti-flip, and two anti-flips are the same as a flip. Of course, three flips, or three anti-flips, are both the same as doing nothing. In other words, the operations of doing nothing, flip, and anti-flip form a group (namely, the cyclic group $\mathbb{Z}_3$).
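If we track only which side of a sheet faces up, the sides behave like integers mod 3 (a quick Python sketch; the names are mine):

```python
# Sides of a triple-sided sheet modeled as Z_3 = {0, 1, 2}.
def flip(side):
    """One flip advances the side cyclically."""
    return (side + 1) % 3

def antiflip(side):
    """An anti-flip goes the other way around the cycle."""
    return (side + 2) % 3
```

The group laws are then one-line checks: two flips equal an anti-flip, two anti-flips equal a flip, and three of either is the identity.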
If a sheet of paper is denoted by a variable $s$, we will use $\overline{s}$ to denote the flipped version of $s$, and $\underline{s}$ to denote the anti-flipped version. So $\overline{\overline{s}} = \underline{s}$ and $\underline{\underline{s}} = \overline{s}$, and so on.
Given a stack of three-sided sheets of paper $s_1, s_2, \dots, s_n$, where $s_1$ is on top and $s_n$ is on the bottom, there are a number of operations you can do:
Of course, you can also do any sequence of such operations.
The goal is to read every side of every sheet: that is, to come up with a sequence of the above operations such that all sides show up on top of the stack. Some desirable criteria for a solution include:
It may not be possible to achieve all of the above at once! In the case of the algorithm for reading two-sided sheets of paper, items (2) and (3) above were taken as given requirements, and the name of the game was to solve (1): in particular, the solutions all exploited the physical nature of the paper in order to keep track of one’s place in an alternating sequence of operations, which would otherwise be somewhat difficult. For three-sided sheets, I am interested in exploring other possibilities as well—for example, perhaps there is a single operation or sequence of operations that can be done every time, such that all the pages show up, though not in order.
Also, perhaps the double-ended nature of a stack is particularly well-suited to reading double-sided paper. Can you come up with a coherent notion of a triple-ended stack which makes it easy to read triple-sided paper?
Also also, are there any reasonable physical models of triple-sided paper? Perhaps something with flexagons?