I plan to write a longer post soon, but for the moment I just wanted to provide a quick update about the greedy coins game (previous posts here and here). It turns out that the game is a lot more interesting—and has generated a lot more discussion—than I thought when I first posted it!
In my previous post I conjectured that the only tied games are the ones that can be built up by surrounding a tied game with the same coin at each end or by concatenating two tied games. That conjecture was quickly shot down, with commenters tinyboss and Thibault Vroonhove independently pointing out counterexamples: each exhibited a four-coin tied game that can’t be built up in the conjectured way. As you can check, with best play in these games, the two players end up with equal totals.
Thibault noted that these particular tied games can be made by starting with a symmetric game and then adding the same number to all the coins in one half. However, although I think this always yields a tied game for games of length 4, it turns out that it does not always do so in general: for longer games the construction can produce a game whose coins add up to an odd total, and such a game can’t be tied.
Thibault then came up with a counterexample to my other conjecture, namely, that the concatenation of two tied games is itself tied: he exhibited a concatenation of two tied games that is not tied. In fact, even a simpler such concatenation fails to be tied: Alice can guarantee that she will get at least three of the big coins.
Given all this it’s kind of amazing that the big game I constructed as a quick test of my conjectures is indeed tied. Eric Burgess reported that in fact every cyclic permutation of this game is tied!
Eric also introduced the function m, where m(i, j) denotes the winning margin, with best play, of whoever’s turn it is when coins c_i, …, c_j remain. That is, starting from that position, with best play, what is the difference between the current player’s score and the other player’s? (Note that in the case of Bob the “winning margin” can be negative, if Bob is going to lose even with best play.) If the current player takes coin c_i, then m(i+1, j) denotes the winning margin of the other player from that point on, which is the same as the losing margin of the current player. But they also got c_i; so overall their winning margin will be c_i - m(i+1, j). Of course, best play means that the current player will choose whichever coin gives them the higher winning margin. This motivates the following inductive definition of m, where we use x ∨ y to denote the maximum of x and y:

m(i, j) = (c_i - m(i+1, j)) ∨ (c_j - m(i, j-1)), with base case m(i, i) = c_i (the current player simply takes the last coin).
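In code, this recursion can be memoized directly. Here is a quick sketch (my own code, with names of my choosing, not code from the comments):

```python
from functools import lru_cache

def margin(coins):
    """Winning margin, under best play, for whoever moves first on `coins`.

    A mover who takes an end coin c ends up with c minus the opponent's
    margin on the remaining coins; best play takes the better of the two ends.
    """
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def m(i, j):
        if i > j:                            # no coins left
            return 0
        return max(coins[i] - m(i + 1, j),   # take the left end
                   coins[j] - m(i, j - 1))   # take the right end

    return m(0, len(coins) - 1)
```

For instance, margin([1, 2, 3, 4]) is 2: with best play Alice wins 6 to 4.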
Thibault conjectured that the following strategy for Alice is optimal (I’ve rephrased it in a way that I’m pretty sure is equivalent, and which I think makes it more clear):
If the two end coins have unequal values, and the larger of the two is also larger than its neighbor, take it.
Otherwise, look at the alternating totals (i.e. the total of the “red” coins and the total of the “blue” coins). Take the end coin corresponding to whichever group has the higher total.
If neither of these rules applies, then it doesn’t matter which coin Alice takes.
If this strategy is indeed optimal, that would be really cool, since it is pretty easy to apply in practice. But I am not ready to say whether I believe this conjecture or not. In any case it should be possible to test this strategy with a computer program to play lots of random games, which I hope to do soon (and to write another post about it).
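Here is one possible sketch of such a test (my own code; `strategy_move` encodes my reading of the two rules, breaking the “doesn’t matter” case arbitrarily). Alice plays the conjectured strategy while Bob replies optimally, and we count how often Alice falls short of the optimal margin:

```python
import random
from functools import lru_cache

def margin(coins):
    """Optimal-play margin for whoever moves first on `coins`."""
    coins = tuple(coins)
    @lru_cache(maxsize=None)
    def m(i, j):
        if i > j:
            return 0
        return max(coins[i] - m(i + 1, j), coins[j] - m(i, j - 1))
    return m(0, len(coins) - 1)

def strategy_move(coins):
    """Return 'L' or 'R' according to the conjectured rules (my reading)."""
    a, b = coins[0], coins[-1]
    if a != b:
        # Rule 1: a strictly larger end coin that also beats its neighbor.
        big, side, neighbor = (a, 'L', coins[1]) if a > b else (b, 'R', coins[-2])
        if big > neighbor:
            return side
    # Rule 2: compare the alternating ("red"/"blue") totals.
    red, blue = sum(coins[0::2]), sum(coins[1::2])
    if red != blue:
        return 'L' if red > blue else 'R'
    return 'L'   # neither rule applies: pick arbitrarily

def play_vs_optimal(coins):
    """Alice uses strategy_move, Bob replies optimally; return Alice - Bob."""
    coins = tuple(coins)
    diff = 0
    while coins:
        side = strategy_move(coins)
        c, coins = (coins[0], coins[1:]) if side == 'L' else (coins[-1], coins[:-1])
        diff += c
        if not coins:
            break
        # Bob's optimal reply.
        if coins[0] - margin(coins[1:]) >= coins[-1] - margin(coins[:-1]):
            c, coins = coins[0], coins[1:]
        else:
            c, coins = coins[-1], coins[:-1]
        diff -= c
    return diff

# Try lots of random six-coin games; count how often the strategy
# does worse than the optimal margin.
rng = random.Random(0)
misses = sum(
    play_vs_optimal(g) < margin(g)
    for g in ([rng.randrange(1, 10) for _ in range(6)] for _ in range(500))
)
print(misses, "games out of 500 where the strategy was suboptimal")
```

This is only a sketch under my reading of the rules, not a verdict on the conjecture; a single positive count of misses would be a counterexample worth examining by hand.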
I should slightly modify my conjecture: if neither of these rules applies, then take the coin so that your loss at this turn is minimized. For example, in one game Alice should take the 5, because that minimizes her loss at this turn.
Wait, that’s not right after all: Alice can also force a win by taking the 3. I’m not claiming the choice never matters, but in this game it doesn’t.
Here is a game where it does matter: the only winning move is to take the 3.
I wrote a program to search for counterexamples to your strategy. In the smallest one I found, none of the rules of your strategy applies; in particular, the local loss from taking the 2 equals the local loss from taking the 3. However, the only winning move is to take the 3 on the right.
Perhaps a better counterexample: there is a game in which the winning move is still to take the 3 on the right, even though its local loss (-2) is actually greater than the local loss from taking the 2 (-1).
I’ve been thinking about classifying games according to the sequence of coins taken under optimal play. For example, the optimal sequence for the length-2 game (a, b) is to take the left coin if a > b, and the right coin if a < b. If a = b then both sequences are optimal and neither is preferred.
(My previous posts generalized to games with odd numbers of coins, but here I only consider games with an even number.)
Call a game interior if it has a unique optimal play sequence, and boundary if it has more than one.
Conjecture: The set of tied games is properly contained in the set of boundary games.
Easy half of the proof: there are boundary games that are not tied, and small examples are easy to find. The other half is to prove that every tied game has more than one optimal play sequence. The terminology gives an idea of my line of reasoning, but I want to nail it all the way down before I post it. I also suspect there may be a short and sweet proof à la the Alice-can’t-lose theorem.
Let G_n be the set of all games with n coins. Define S(G) to be the set of legal play sequences for a game G, and let A(G, s) denote the resulting total for the first player (Alice) if the game G is played with sequence s.
(Breaking here to make sure the LaTeX I’m using is supported…)
Brent, your comment at the end of this entry really got me thinking about generating random games. Even if we confine ourselves to integer-valued coins, it’s a tricky task. How high (or low) do we go? Do we miss out on anything interesting by using integers, or by bounding the values of our coins? Can we do a uniform sample in any reasonable sense?
It’s obvious that some pairs of games are the “same” game in all but name, equally obvious that other pairs are not, and almost as obvious that some pairs are the same in a slightly less trivial way. How can we make this precise? My current definition of “same” is that two games are equivalent if you play them the same way.
(Along the way I came up with the conjecture above, and I’ve reduced the handwaving to an amount I’m comfortable with, so here’s my almost-proof.)
Identify games (sequences of coins) with vectors in R^n in the obvious way. At each turn, the player can choose the leftmost or rightmost coin, so for an n-coin game, a play sequence (I’ll just say “sequence”) is a string of L’s and R’s of length n - 1 (because the last move is forced). A sequence is optimal if it can arise from the definition of m given previously, equivalently, if it results in the maximum score for Alice. For example, one four-coin game has the unique optimal sequence RRR, giving Alice a 6-4 win, and another has four optimal sequences (all the ones beginning with L) with Alice winning 3-2.
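To make the classification concrete, here is a sketch (my own code, not from the post) that computes the full set of optimal sequences by following every tied choice in the recursion for m:

```python
from functools import lru_cache

def optimal_sequences(coins):
    """All L/R play sequences consistent with best play by both players.

    Sequences have length len(coins) - 1, since the last move is forced.
    """
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def m(i, j):
        if i > j:
            return 0
        return max(coins[i] - m(i + 1, j), coins[j] - m(i, j - 1))

    def seqs(i, j):
        if i >= j:                                # one coin left: move is forced
            return ['']
        out = []
        if coins[i] - m(i + 1, j) == m(i, j):     # left coin is an optimal choice
            out += ['L' + s for s in seqs(i + 1, j)]
        if coins[j] - m(i, j - 1) == m(i, j):     # right coin is an optimal choice
            out += ['R' + s for s in seqs(i, j - 1)]
        return out

    return set(seqs(0, len(coins) - 1))
```

The example games here are my own, not necessarily the ones from the post: optimal_sequences((1, 2, 3, 4)) is {'RRR'} (Alice wins 6-4), and optimal_sequences((2, 1, 1, 1)) is exactly the four sequences beginning with L (Alice wins 3-2).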
Definition: Two games are equivalent if they have the same set of optimal sequences.
Lemma 1: Every game with a unique optimal sequence has a neighborhood of equivalent games.
Proof: Suppose G has a single optimal sequence. That means that at each turn there is a unique optimal choice of coin to take, so (referring back to the definition of m) the two candidate values c_i - m(i+1, j) and c_j - m(i, j-1) are unequal at every position that arises. Since there are finitely many turns, there is a minimum amount δ > 0 by which any choice is decided. (Begin handwaving…) I’m confident that there’s an inductive (or otherwise) argument to show that changing one of the c_i by ε changes the value of any m(i, j) by at most ε. (End handwaving.) So we can find a small enough ε-ball around G where the same sequence of choices is forced.
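The handwaved step can at least be spot-checked numerically. Here is a quick Monte Carlo check (my own code, and of course not a proof): perturb one coin of a random game by a random amount eps and confirm the margin moves by at most |eps|:

```python
import random
from functools import lru_cache

def margin(coins):
    """Optimal-play margin for whoever moves first on `coins`."""
    coins = tuple(coins)
    @lru_cache(maxsize=None)
    def m(i, j):
        if i > j:
            return 0
        return max(coins[i] - m(i + 1, j), coins[j] - m(i, j - 1))
    return m(0, len(coins) - 1)

rng = random.Random(0)
worst = 0.0
for _ in range(2000):
    n = rng.choice([2, 4, 6])
    g = [rng.uniform(-5, 5) for _ in range(n)]
    k = rng.randrange(n)              # perturb one random coin...
    eps = rng.uniform(-1, 1)          # ...by a random amount eps
    g2 = list(g)
    g2[k] += eps
    # the change in the margin, as a fraction of |eps|, should never exceed 1
    worst = max(worst, abs(margin(g2) - margin(g)) / abs(eps))
print("worst observed ratio:", worst)
```

If the claim is right, the printed ratio never exceeds 1 (up to floating-point noise).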
Lemma 2: Every neighborhood of a tied game contains a game not equivalent to it.
Proof: Suppose G is tied, choose an optimal sequence s for G, and let c be a coin that Alice takes under s. If we reduce the value of c by any amount, no matter how small, then s is no longer optimal: following s Alice now loses, while by the Alice-can’t-lose theorem she always has a non-losing option.
From the two lemmas it follows that if a game is tied, then it has more than one optimal sequence.
One thing I need to nail down is whether a tied game can be equivalent to a non-tied game. I certainly hope not, otherwise it’s not a very good notion of “sameness”!
I’ll post again with more about the structure of the space of games and how we might do random sampling.
We can’t uniformly sample the set of all games with n coins. But if we consider equivalence classes of games, the situation is much better. Write G ~ H if G and H have the same set of optimal sequences.
Since we’ve identified games with the vector space R^n, we can add games to one another and multiply them by real numbers. If a is a positive real number and b is any real number, then it’s not hard to see that:
(1) aG ~ G, and
(2) G + b(1, …, 1) ~ G.
(Because scaling every coin by a scales both candidate values in the definition of m by a, and adding b to every coin shifts both candidate values by the same amount, the optimal choices are all the same.)
This means that the set of games equivalent to G contains at least { aG + b(1, …, 1) : a > 0 }. If G is not on the line spanned by (1, …, 1), then this is a two-dimensional open half-subspace of R^n. In other words, the equivalence classes of games are unions of open half 2-planes whose boundary is the line spanned by (1, …, 1). An exceptional class is that line itself (which I’ll call the zero equivalence class, since these games are all equivalent to the all-zeros game).
Each non-zero class is a union of half-planes, each of which contains a unique unit-length vector orthogonal to (1, …, 1), which we’ll choose as the representative of that plane (remember many planes will belong to the same game equivalence class). All the non-zero class representatives together form a sphere of dimension n - 2, which is divided up into game equivalence classes in some way, and which we can sample uniformly. I propose that this is a reasonable way of choosing “random” games.
If we let the zero class be represented by the zero vector, which seems like the only reasonable choice, then it’s an isolated point and would be excluded from this way of sampling. I don’t know if this is saying anything interesting or not.
Because we go up by two dimensions every time we increase the game size, we unfortunately skip straight from trivial to mind-bending, but let’s at least look at the trivial case, games of length 2. We had better come up with three classes, corresponding to the left coin being smaller than, larger than, or equal to the right coin.
We identify two-coin games with R^2, so that the game (a, b) corresponds to the vector (a, b). There are exactly two open half 2-planes with boundary the line spanned by (1, 1): one where a < b, represented by (-1, 1), and one where a > b, represented by (1, -1). A zero-dimensional sphere is defined to be two disconnected points. Together with the zero class, that gives us the three classes of length-2 games, so we've passed the trivial case.
If I haven't made any unfixable mistakes, then the non-zero equivalence classes of length-4 games should be mapped out on a 2-dimensional sphere, which could actually be visualized. Since there are 8 possible sequences and we classify games by the (nonempty) subset of these which are optimal, there are at most 255 equivalence classes (although I suspect the actual number is much smaller). I hope to be able to produce the map. If I'm right about all this but can't nail down the map itself, then the open regions, at least, can be mapped pretty easily with a Monte Carlo method.
Oops, the sentence near the top that reads,
This means that the set of games equivalent to contains at least
…should instead be
This means that the set of games equivalent to G contains at least { aG + b(1, …, 1) : a > 0 }.
I got confused between the half-planes and the equivalence classes, and I thought I fixed all the errors, but I missed (at least) that one.
Oops, one more error…right after that sentence, the not-equal sign should be not-an-element-of.
Also, I forgot to normalize my representatives in the dimension-2 example. Dang.
I fixed these errors and a few others for you, except I didn’t fix the non-normalized representatives (it’s much nicer not to have the √2’s in there anyway, and it really doesn’t matter!).
I’ve only just now gotten a chance to read through this carefully, and it’s amazing! Would you consider writing a guest post (or two or three…) about it?
Have you made any progress on computing/visualizing the equivalence class map for length-4 games?
The following lemma proves that a tied game can’t be equivalent to a non-tied game.
Lemma: G is tied iff the all-L sequence and the all-R sequence are both optimal.
Proof: Suppose G is tied. Since Alice can force the tie by using the red/blue strategy no matter what Bob does, both the all-L and the all-R sequences have to be optimal.
Conversely, if both the all-L and the all-R sequences are optimal, then Alice’s optimal total equals both the red total (which she gets under all-L) and the blue total (which she gets under all-R). These two totals therefore agree, so each is half the sum of all the coins, and it follows that G is tied.
Fantastic! Your result also implies my (not-quite-proved) statement that tied games have more than one optimal sequence. Very slick.
Nope, I’m wrong again. There is a tied game in which neither the all-L nor the all-R sequence is optimal, sorry…
But I can rephrase it like this: G is tied iff there are both optimal sequences such that Alice gets all the red coins and ones such that Alice gets all the blue coins. So the consequence still holds.
[And I forgot the “latex” so the $G$ should be ]
Good catch and save.
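The rephrased statement can also be checked by machine on small games. Here is a sketch (my own code, with hypothetical names): it computes whether a game is tied, and whether some optimal sequence hands Alice exactly the red (1st, 3rd, …) positions and some hands her exactly the blue ones:

```python
from functools import lru_cache

def solve(coins):
    """Return (margin, set of optimal L/R sequences) for `coins`."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def m(i, j):
        if i > j:
            return 0
        return max(coins[i] - m(i + 1, j), coins[j] - m(i, j - 1))

    def seqs(i, j):
        if i >= j:                                # last move is forced
            return ['']
        out = []
        if coins[i] - m(i + 1, j) == m(i, j):
            out += ['L' + s for s in seqs(i + 1, j)]
        if coins[j] - m(i, j - 1) == m(i, j):
            out += ['R' + s for s in seqs(i, j - 1)]
        return out

    n = len(coins)
    return m(0, n - 1), set(seqs(0, n - 1))

def alice_positions(n, seq):
    """Original coin positions Alice takes when sequence `seq` is played."""
    i, j, taken = 0, n - 1, []
    for turn, side in enumerate(seq + 'L'):       # append the forced last move
        if side == 'L':
            p, i = i, i + 1
        else:
            p, j = j, j - 1
        if turn % 2 == 0:                         # Alice moves on even turns
            taken.append(p)
    return frozenset(taken)

def red_blue_check(coins):
    """Return (tied, Alice can get all reds, Alice can get all blues)."""
    n = len(coins)
    margin, opts = solve(coins)
    takes = {alice_positions(n, s) for s in opts}
    red = frozenset(range(0, n, 2))               # 1st, 3rd, ... coins
    blue = frozenset(range(1, n, 2))              # 2nd, 4th, ... coins
    return margin == 0, red in takes, blue in takes
```

On two small test games of my own, red_blue_check((1, 2, 2, 1)) returns (True, True, True) and red_blue_check((1, 2, 3, 4)) returns (False, False, True), both consistent with the rephrased claim.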
Pingback: Another greedy coins game update | The Math Less Traveled