In my previous post I conjectured that the only tied games are the ones that can be built up by surrounding a tied game with a coin () or concatenating two tied games (). That conjecture was quickly shot down, with commenters tinyboss and Thibault Vroonhove independently pointing out counterexamples: or are both tied, even though they can’t be built up in the conjectured way. As you can check, with best play in these games, one player will get and the other will get .
Thibault noted that these particular tied games can be made by starting with a symmetric game (like ) and then adding the same number to all the coins in one half (e.g. adding to each coin in the second half of yields ). However, although I think this always yields a tied game for games of length 4, it turns out that this does not always result in a tied game in general: for example, isn’t tied, since the coins add up to an odd total.
Thibault then came up with a counterexample to my other conjecture, namely, that is tied if and both are: is not tied, even though it is the concatenation of two tied games. In fact, even is not tied: Alice can guarantee that she will get at least three of the s.
Given all this it’s kind of amazing that the big game I constructed as a quick test of my conjectures, namely , is indeed tied. Eric Burgess reported that in fact every cyclic permutation of this game is tied!
Eric also introduced the function , where denotes the winning margin, with best play, of the player whose turn it is. That is, starting from position , with best play, what is the difference between the current player’s score and the other player’s? (Note that in the case of Bob the “winning margin” can be negative, if Bob is going to lose even with best play.) If the current player takes coin , then denotes the winning margin of the other player from that point on, which is the same as the losing margin of the current player. But they also got ; so overall their winning margin will be . Of course, best play means that the current player will choose whichever coin gives them the higher winning margin. This motivates an inductive definition of , where we use to denote the maximum of and .
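This recursive definition translates directly into a short program. Here is a minimal Python sketch (the name `margin` and the list-of-coins representation are mine): the player to move takes one of the two end coins, and the opponent's best margin on what remains counts against them.

```python
from functools import lru_cache

def margin(coins):
    """Winning margin, with best play, of whoever moves first on coins."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def f(i, j):
        if i > j:
            return 0  # no coins left: the margin is zero
        # Take the left coin or the right coin; the opponent's best margin
        # on the remaining row counts against us, so we subtract it.
        return max(coins[i] - f(i + 1, j), coins[j] - f(i, j - 1))

    return f(0, len(coins) - 1)
```

For instance, `margin([3, 2, 1, 2])` comes out to 2 (a 5-to-3 win for the player to move), and the margin of any symmetric game is 0.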
Thibault conjectured that the following strategy for Alice is optimal (I’ve rephrased it in a way that I’m pretty sure is equivalent, and which I think makes it clearer):
If the two end coins have unequal values, and the larger of the two is also larger than its neighbor, take it.
Otherwise, look at the alternating totals (i.e. the total of the “red” coins and the total of the “blue” coins). Take the end coin corresponding to whichever group has the higher total.
If neither of these rules applies, then it doesn’t matter which coin Alice takes.
If this strategy is indeed optimal, that would be really cool, since it is pretty easy to apply in practice. But I am not ready to say whether I believe this conjecture or not. In any case it should be possible to test this strategy with a computer program to play lots of random games, which I hope to do soon (and to write another post about it).
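As a sketch of what such a test could look like in Python: implement the strategy for Alice (the encoding below is only my reading of the three rules, with the red/blue coloring re-derived from the current row on each turn), let Bob reply optimally using the usual game-value recursion, and flag any random game where Alice falls short of the true game value. All names here are placeholders.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def margin(coins):
    """Winning margin of the player to move, with best play."""
    if not coins:
        return 0
    return max(coins[0] - margin(coins[1:]), coins[-1] - margin(coins[:-1]))

def conjectured_side(coins):
    """'L' or 'R': my reading of the conjectured rules for Alice."""
    left, right = coins[0], coins[-1]
    if left > right and left > coins[1]:       # rule 1, left end
        return 'L'
    if right > left and right > coins[-2]:     # rule 1, right end
        return 'R'
    red, blue = sum(coins[0::2]), sum(coins[1::2])
    return 'L' if red >= blue else 'R'         # rule 2 (rule 3: ties don't matter)

def strategy_margin(coins):
    """Alice follows the conjectured strategy; Bob replies optimally."""
    coins, alice, bob = tuple(coins), 0, 0
    while coins:
        if conjectured_side(coins) == 'L':
            alice, coins = alice + coins[0], coins[1:]
        else:
            alice, coins = alice + coins[-1], coins[:-1]
        if not coins:
            break
        # Bob picks whichever end maximizes his own margin.
        if coins[0] - margin(coins[1:]) >= coins[-1] - margin(coins[:-1]):
            bob, coins = bob + coins[0], coins[1:]
        else:
            bob, coins = bob + coins[-1], coins[:-1]
    return alice - bob

# Play lots of random games and flag any where the strategy falls short.
rng = random.Random(1)
for _ in range(500):
    game = tuple(rng.randint(1, 9) for _ in range(2 * rng.randint(1, 4)))
    if strategy_margin(game) < margin(game):
        print("conjectured strategy is suboptimal on", game)
```

Against an optimal Bob, Alice can never do better than the true game value, so the strategy is optimal exactly when `strategy_margin` always equals `margin`.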
Color the coins alternately red and blue, like this:
Note that if she wants, Alice can play so as to take all the red coins and force Bob to take all the blue coins, or vice versa. For example, if she wants all the red coins, she starts by taking the red coin on the end, leaving Bob with no choice but to take a blue coin. Bob’s choice then exposes another red coin for Alice to take, leaving Bob with only blue coins to choose from, and so on. Symmetrically, Alice could always take the one available blue coin on her turn, leaving Bob with no choice but to take all the red coins.
So, if the sums of the red and blue coins are different, Alice can guarantee a win simply by choosing the set of coins with the higher total. If the red and blue coins have equal sums, Alice can at least guarantee a tie by arbitrarily choosing one set or the other.
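Here is a quick Python sketch of the argument (helper names mine; it assumes, as throughout, an even number of coins): Alice commits to the color class with the larger total and always takes its end coin, and whatever Bob does she ends up with exactly that class.

```python
import random

def red_blue_result(coins, rng):
    """Alice commits to the higher-total color class (colors by original
    position parity) and always takes its end coin; Bob takes a random end.
    Returns (Alice's total, the targeted class's total)."""
    row = [(c, i % 2) for i, c in enumerate(coins)]   # 0 = red, 1 = blue
    red = sum(c for c, col in row if col == 0)
    blue = sum(c for c, col in row if col == 1)
    target = 0 if red >= blue else 1
    alice = 0
    while row:
        # On Alice's turn the remaining row has even length, so its two
        # end coins have opposite colors: exactly one end is her color.
        coin, _ = row.pop(0) if row[0][1] == target else row.pop()
        alice += coin
        row.pop(0 if rng.random() < 0.5 else -1)      # Bob takes either end
    return alice, max(red, blue)
```

On every run Alice's total equals the larger color-class total, so she never finishes behind.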
Let’s call this strategy (taking either all the coins in even positions or all the coins in odd positions, whichever has the higher total) the “red/blue strategy”. This strategy constitutes a really elegant proof that Alice can never lose. But it’s not at all the end of the story—in particular, the red/blue strategy is not necessarily Alice’s best strategy! For example, consider this setup:
Notice that the “red” and “blue” coins have equal sums: . So if Alice just plays according to the red/blue strategy the result will be a tie. But Alice can do better: she should first take the 3, leaving Bob no choice but to take one of the 2s. But now there is a 1 and a 2 left, and Alice should obviously take the 2 (whereas the red/blue strategy would tell her to take the 1). So with best play Alice wins by a score of 5 to Bob’s 3.
This raises the natural question: when is a tie Alice’s best result? In other words, what sort of setups force a tie if both players play their best? Let’s call such setups “tied”.
As a first example, it is not hard to prove that symmetric games are tied. By a symmetric game I mean a game which stays the same when the order of the coins is reversed, i.e. where . For example,
Theorem. Symmetric games are tied.
Proof. Bob can force a tie by always playing on the opposite side from Alice: that is, if Alice takes then Bob takes , which has the same value by assumption. Since we already know that Alice will at least get a tie with best play, this must be the best result for both players (in particular, we know Bob cannot do any better with a different strategy).
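The mirroring strategy is easy to check mechanically. In this Python sketch (names mine, even number of coins assumed), Alice plays arbitrary ends and Bob always answers on the opposite end, which in a symmetric game is the mirror-image coin of equal value:

```python
import random

def mirror_play(coins, rng):
    """Alice takes an arbitrary end each turn; Bob answers on the opposite
    end.  Returns (alice_total, bob_total).  Assumes an even coin count."""
    row = list(coins)
    alice = bob = 0
    while row:
        if rng.random() < 0.5:
            alice += row.pop(0)   # Alice takes the left end...
            bob += row.pop()      # ...so Bob takes the right end
        else:
            alice += row.pop()
            bob += row.pop(0)
    return alice, bob
```

For a symmetric game the remaining row stays a palindrome after each Alice-Bob round, the two of them pick up equal coins, and the result is always a tie.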
However, this theorem does not completely characterize tied games. For example, the game
is also tied, even though it is not symmetric: each player will get 3 (if Alice starts by taking the 2, Bob takes the other 2, and then they split the 1s; if Alice starts by taking the 1, then it doesn’t matter what Bob takes next; on her next turn Alice will take a 2).
Can we completely characterize which games are tied? I will close by stating an observation, a theorem, and a few conjectures.
Observation. The empty game (consisting of zero coins) is tied, since after zero moves both players end with a value of zero. (This one is not really worth calling a theorem!)
Theorem. If is tied, then so is , the game obtained from by adding two copies of the coin , one on each end.
Proof. Bob can force a tie: after Alice takes one copy of , Bob takes the other one, and now the game is reduced to , which is tied by assumption.
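Here is a computational spot-check of the theorem, a Python sketch with names of my choosing. Since symmetric games are tied, random palindromes make a handy source of tied games to wrap:

```python
from functools import lru_cache
import random

@lru_cache(maxsize=None)
def margin(coins):
    """Winning margin of the player to move; a game is tied iff this is 0."""
    if not coins:
        return 0
    return max(coins[0] - margin(coins[1:]), coins[-1] - margin(coins[:-1]))

rng = random.Random(0)
for _ in range(100):
    half = tuple(rng.randint(1, 9) for _ in range(rng.randint(1, 3)))
    g = half + half[::-1]                 # symmetric, hence tied
    a = rng.randint(1, 9)
    assert margin(g) == 0
    assert margin((a,) + g + (a,)) == 0   # the theorem: wrapping preserves ties
```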
Note that all symmetric games can be built up in this way, by starting with the empty game and successively adding “layers” that look like . So this observation and theorem together constitute an alternate, inductive proof that all symmetric games are tied.
If and are games (that is, sequences of an even number of coins), let denote the game obtained by concatenating the sequences of coins. For example, if and then .
Conjecture. If and are both tied, then so is .
For example, we know and are both tied since they are symmetric. This conjecture then claims that the game is tied as well (which happens to be true in this case, as you can check).
As a more interesting example, the foregoing theorem and conjecture (if true) could together be used to show that is tied (I’m tired of writing the commas in between single-digit coin values), because this game can be broken down as . And guess what, I have verified computationally that this game is indeed tied! (I’ll discuss how to do this in a future post.)
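Pending that future post, here is a sketch of the kind of check involved (a Python program of my own devising, not necessarily the one actually used): compute the game value, call a game tied when the value is zero, and then it is also easy to enumerate small tied games by brute force.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def margin(coins):
    """Winning margin of the player to move, with best play."""
    if not coins:
        return 0
    return max(coins[0] - margin(coins[1:]), coins[-1] - margin(coins[:-1]))

def is_tied(coins):
    return margin(tuple(coins)) == 0

# All tied games of length 4 with coin values drawn from 1..3:
tied = [g for g in product(range(1, 4), repeat=4) if is_tied(g)]
```

The resulting list includes non-symmetric tied games such as (2, 2, 1, 1) alongside all the palindromes, which makes this a cheap way of hunting for games the conjectures do or don't account for.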
Conjecture. This completely characterizes tied games: a game is tied if and only if it is inductively built up, starting from the empty game, by applications of the foregoing theorem and conjecture.
I believe this conjecture less than I believe the first one. There might be other strange ways to build tied games.
There is a row of coins on the table; each coin can have any positive integer value. Two players alternate turns. On a player’s turn she must take one of the two coins on either end of the row of remaining coins, so with each turn the row gets shorter by one. After all the coins have been taken, the player with the higher total value is the winner.
Let’s look at an example. Suppose the coins start out like this:
The first player (let’s call her Alice) can choose either the 3 or the 2. Let’s say she takes the 3. (3 is bigger than 2, right?) Now the table looks like this:
The second player (let’s call him Bob) is now allowed to take either the 5 or the 2. His best move is clearly to take the 5, since it’s even bigger than the other two remaining coins combined. After that the table looks like this:
Finally Alice takes the 2, and Bob takes the 1. Final score: Alice 5, Bob 6. Bob wins!
…except it turns out, as you may have already noticed, that Alice’s first move wasn’t her best choice! It’s very easy to come up with examples, like this one, which show that the obvious “greedy” strategy—namely, always take the biggest coin available to you—is not the best strategy for this game.
So, what Alice should have done is start by taking the 2, leaving this:
Whoever gets the 5 is going to win, because even the 5 combined with the smallest coin, 1, is still larger than the sum of the others. By taking the 2, Alice leaves the 5 “protected” so Bob can’t take it on his next turn—he has to take one of the coins next to it, which will allow Alice to take it. Bob might as well take the 3, and then Alice takes the 5, leaving the 1 for Bob. Final score: Alice 7, Bob 4. So in this scenario, Alice actually wins if she plays her best. Can you come up with other sorts of example scenarios? …such as a game where the first player will end up with the same result no matter which first move they play (even though the two coins are different)? A game where the two players tie if they both play their best (even though all the coins are different)? A game where the first player loses when both players play their best? What if we allow negative or fractional coins?
This is a fun game to play, and it’s not at all obvious, in general, what the best strategy is. Here’s one for you to analyze. What is Alice’s best move if the game starts in the position shown below? And what is Bob’s best follow-up move? Leave your analysis in the comments!
To play this game with a friend using actual coins, one could use a sequence of real coins, e.g. 1-cent, 5-cent, 10-cent, and 25-cent coins (if using US currency). Does the analysis of the strategy become any easier if we restrict ourselves to only these particular coin values (or any other particular set of coin values)?
It is actually feasible to write a computer program that will calculate the optimal play for any given sequence of coins^{2}; I may write about that a bit in a future post. For now, I want to share a conjecture about the game:
Conjecture: the first player always has a non-losing strategy.
That is, if the first player plays optimally, they will always either win or tie. I strongly suspect this is true, but I have so far been unable to prove it. On the other hand, I don’t think my inability to prove it necessarily means it’s hard to prove; I think it just needs the right insight. Can you either prove this, or find a counterexample? (Note that it may be possible to prove this non-constructively, that is, it may be possible to prove that the first player will always win or tie without saying what their actual strategy is!)
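Short of a proof, a computer can at least rule out small counterexamples. A Python sketch (names mine) that exhaustively checks every even-length game of up to 6 coins with values 1 through 4:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def margin(coins):
    """First player's total minus the second player's, with best play."""
    if not coins:
        return 0
    return max(coins[0] - margin(coins[1:]), coins[-1] - margin(coins[:-1]))

# The first player's optimal margin is never negative for any of these.
# (The even length matters: odd-length games like 1, 9, 1 can be lost.)
for n in (2, 4, 6):
    for game in product(range(1, 5), repeat=n):
        assert margin(game) >= 0
print("no counterexample among", sum(4 ** n for n in (2, 4, 6)), "games")
```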
I got this problem from the Consortium for Computing Sciences in Colleges Mid-South 2012 programming contest (problem C).↩
That’s why I got this problem from a programming contest!↩
A certain man buys 30 birds which are partridges, pigeons, and sparrows, for 30 denari. A partridge he buys for 3 denari, a pigeon for 2 denari, and 2 sparrows for 1 denaro, namely 1 sparrow for 1/2 denaro. It is sought how many birds he buys of each kind.
Can you solve it?
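If you'd rather let a computer spoil it, a brute-force Python check settles the question (the variable names are mine; doubling the cost equation clears the half-denaro):

```python
# p partridges, d pigeons, s sparrows, at least one of each kind, with
# p + d + s == 30 birds and 3p + 2d + s/2 == 30 denari.  Doubling the
# cost equation (6p + 4d + s == 60) keeps everything in integers.
solutions = [
    (p, d, 30 - p - d)
    for p in range(1, 29)
    for d in range(1, 30 - p)
    if 6 * p + 4 * d + (30 - p - d) == 60
]
```

There turns out to be exactly one solution with all three kinds of birds present.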
I’ve written about this before, but it’s worth spelling it out for completeness’ sake. If you have a sum of something which is itself a sum, like this:
you can split it up into two separate sums:
(You can also sort of think of this as the sigma “distributing” over the sum.) For example,
Why is this? Last time, the fact that we can pull constants in and out of a sigma came down to a property of addition, namely, that multiplication distributes over it. This, too, turns out to come down to some other properties of addition. As before, let’s think about writing out these sums explicitly, without sigma notation.
First, on the left-hand side, we have something like
And on the right-hand side, we have
We can see that we get all the same terms, but in a different order. But as we are all familiar with, the order in which you add things up doesn’t matter: addition is both associative and commutative, so we can freely reorder terms and still get the same sum.^{1}
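Since this boils down to reassociating and commuting a finite sum, it is easy to spot-check numerically. A Python sketch with made-up random sequences:

```python
import random

rng = random.Random(0)
a = [rng.randint(-10, 10) for _ in range(50)]
b = [rng.randint(-10, 10) for _ in range(50)]

# The sum of (a_k + b_k) equals (the sum of a_k) plus (the sum of b_k).
lhs = sum(x + y for x, y in zip(a, b))
rhs = sum(a) + sum(b)
assert lhs == rhs
```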
So the sigma “distributes over” sums! Let’s use the example from above to see how this can be useful. Suppose we want to figure out a closed-form expression for
If we didn’t otherwise know how to proceed we could certainly start by trying some examples and looking for a pattern. Or we could even be a bit more sophisticated and notice that this sum will be , so it must be less than the triangular number . But we don’t even need to be this clever. If we just distribute the sigma over the addition, we transform the expression into two simpler sums which are easier to deal with on their own:
The first sum is , that is, the th triangular number, which is equal to . The second sum is just ( times), so it is equal to . Thus, an expression for the entire sum is
As a double check, is this indeed three less than the nd triangular number?
Sure enough! Math works!
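Judging from the “three less” double check, the running example appears to be the sum of i + 2 for i from 1 to n; under that assumption (which is mine), the whole computation can be verified mechanically in Python:

```python
def triangular(n):
    # the n-th triangular number: 1 + 2 + ... + n
    return n * (n + 1) // 2

for n in range(0, 200):
    s = sum(i + 2 for i in range(1, n + 1))   # the sum in question (assumed)
    assert s == triangular(n) + 2 * n         # split into the two simpler sums
    assert s == triangular(n + 2) - 3         # three less than a triangular number
```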
At least, as long as the sum is finite! This still works for some infinite sums too, but we have to be careful about convergence.↩
[Disclosure of Material Connection: Princeton Press kindly provided me with free review copies of these books. I was not required to write a positive review. The opinions expressed are my own.]
Liz McMahon, Gary Gordon, Hannah Gordon, and Rebecca Gordon
Princeton University Press, 2016
Most people are probably familiar with the card game SET: each card has four attributes (number, color, shading, shape), each of which can have one of three values, for a total of 3^4 = 81 cards. The goal is to find “sets”, which consist of three cards where each attribute is either identical on all three cards, or distinct on all three cards. It’s a fun game, and because it has to do with combinations of things and pattern recognition, many people probably have the intuitive sense that it’s a “mathy” sort of game, or the sort of game that people who enjoy math would also enjoy.
Well, it turns out, as the authors convincingly demonstrate, that the mathematics behind SET actually goes very deep. For example, did you know that there are exactly 3^n(3^n − 1)/6 distinct SETs in an n-dimensional version of the game? (The normal game that everyone plays has n = 4.) How about the fact that the SET deck is a concrete model of the four-dimensional affine geometry AG(4, 3)? Did you know that the most cards you can have without a SET is 20, and that this is intimately connected to structures called maximal caps in affine geometries—and that no one knows how many cards you could have without a SET in a 7-dimensional (or higher) version of the game?
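Some of these counts are easy to confirm by brute force if we model cards as vectors over the integers mod 3: three cards form a SET exactly when their attribute values sum to zero mod 3 in every coordinate (all-same gives 3x ≡ 0, all-different gives 0 + 1 + 2 ≡ 0). A Python sketch (names mine):

```python
from itertools import product, combinations

cards = list(product(range(3), repeat=4))   # one card per point of the geometry

def is_set(a, b, c):
    # Each attribute is all-same or all-different, which over the integers
    # mod 3 is equivalent to the three values summing to 0 coordinate-wise.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

num_sets = sum(is_set(*triple) for triple in combinations(cards, 3))
# Any two cards determine a unique third, and each SET is counted 3 times,
# so the count should agree with 81 * 80 / 6.
```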
The authors explain all this, and much more (with a lot of humor^{1} along the way!), ranging through probability, modular arithmetic, combinatorics, geometry, linear algebra, and a bunch of other topics. The book begins gently, but by the end it gets into some fairly deep mathematics, and there are lots of exercises and projects at the end of each chapter. This book would make a fantastic resource for a middle school, high school, or undergraduate math club. I could even see using it as the textbook for some sort of extra/special topics class with some motivated students.
John Stillwell
Princeton University Press, 2016
I am a huge fan of Stillwell’s writing (almost six years ago I wrote a short review of another one of his books, Roads to Infinity) and I wasn’t disappointed. This book is definitely aimed at a more sophisticated audience than the SET book, but due to Stillwell’s lucid explanations it still manages to start out rather gently and holds many treasures even for the intrepid high school reader.
The book has two basic goals. The first is to simply lay out an overview of “elementary” mathematics, accessible in theory to anyone with a high school level mathematical background. “Elementary” mathematics refers not just to the sort of mathematics learned in grade school (arithmetic, fractions, and so on) but to the mathematics that would nowadays be viewed as “basic” by professional mathematicians—the sort of stuff that every professional mathematician is familiar with regardless of their specialty. In this respect the book is quite a tour de force, organized by areas of mathematics—arithmetic, computation, algebra, geometry, calculus, and so on—and in each area Stillwell manages to distill down the big ideas and the connections with other areas. He is a master expositor, and the text manages to be engaging and accessible without watering down the mathematics. I definitely learned new things from the book! One thing Stillwell does very well in particular is to explain not just the big ideas but the connections between them.
The other basic goal of the book is to explore the boundary between “elementary” and “advanced” mathematics. This sounds like it would be rather vague and amorphous—after all, aren’t the notions of “elementary” and “advanced” quite relative? Doesn’t it depend on how much background you have? Can’t math that is “elementary” to one person be “advanced” to someone else? This is all true, but Stillwell isn’t really talking about which areas of math are hard and which are easy. Professional mathematicians often talk about certain proofs being “elementary”, and it is often celebrated when someone finds an “elementary” proof of a theorem, even if that theorem had already been proved by “non-elementary” means, and even if the non-elementary proof was shorter. Stillwell is trying to pin down a precise meaning of this sense of “elementary”, and makes a well-reasoned case that it all comes down to infinity: something is non-elementary precisely when infinity enters into its proof in a fundamental way. This may seem rather arbitrary at first blush, but through a number of examples and surprising connections between different areas of mathematics, Stillwell makes it clear that this is an extremely “natural” place to draw a line in the sand. Not that having such a dividing line is in and of itself of any value—it’s simply fascinating to note that there is such a natural line at all, and by exploring it in depth we shed new light on the mathematics to either side of it.
They are extremely fond of footnotes. Reminds me of someone I know.↩
Could you explain how to take a constant outside of a summation and bring it inside the summation?
This made me realize there’s a lot more still to be explained! In particular, understanding what sigma notation means is one thing, but becoming fluent in its use requires learning a number of “tricks”. Of course, as always, they’re not really “tricks” at all: understanding what the notation means is the necessary foundation for understanding why the tricks work!
For today, we’ll start by considering what Kevin asked about. Consider what is meant by this sigma notation:
It doesn’t really matter what the ’s are; the point is just that each might be different, whereas is a constant that doesn’t change. So this can be expanded as
Since multiplication distributes over addition, we can factor out the :
The right-hand side can now be written as
so overall we have shown that
We usually omit the parentheses and just write
Our argument didn’t really depend on any of the specifics (like the fact that goes from to ). The general principle is that constants can “jump” back and forth across the sigma, which corresponds to multiplication distributing across addition.
The one remaining question is—what counts as a “constant”? The answer is, anything that doesn’t depend on the index variable. So the “constant” can even involve some variables, as long as they are other variables! For example,
In the context of this sum, is a “constant”, because it does not have in it. Since it doesn’t contain , it is going to be exactly the same for each term of the sum, which means it can be factored out.
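A quick numeric spot-check of both points, in Python with made-up values:

```python
import random

rng = random.Random(0)
a = [rng.randint(-9, 9) for _ in range(20)]
c = 7

# A constant can "jump" across the sigma: the sum of c*a_k equals c times
# the sum of a_k.
assert sum(c * x for x in a) == c * sum(a)

# A "constant" may even involve other variables, as long as the index
# variable does not appear in it: here m**2 factors out for any m.
for m in range(1, 5):
    assert sum(m * m * x for x in a) == m * m * sum(a)
```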