In my previous post I described the “greedy coins game” and conjectured that the first player always has a strategy to win or at least tie. I had been unable to prove it, but suspected there must be some nice proof. Well, commenter Thibault Vroonhove rose to the challenge and came up with a beautiful, simple proof!
Color the coins alternately red and blue, like this:
Note that if she wants, Alice can play so as to take all the red coins and force Bob to take all the blue coins, or vice versa. For example, if she wants all the red coins, she starts by taking the red coin on the end, leaving Bob with no choice but to take a blue coin. Bob’s choice then exposes another red coin for Alice to take, leaving Bob with only blue coins to choose from, and so on. Symmetrically, Alice could always take the one available blue coin on her turn, leaving Bob with no choice but to take all the red coins.
So, if the sums of the red and blue coins are different, Alice can guarantee a win simply by choosing the set of coins with the higher total. If the red and blue coins have equal sums, Alice can at least guarantee a tie by arbitrarily choosing one set or the other.
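In code, the red/blue bookkeeping is just two alternating sums. Here is a minimal sketch (the function name is mine, not from the post):

```python
def red_blue_guarantee(coins):
    """Score Alice can lock in with the red/blue strategy: she commits
    to whichever color (odd or even positions) has the larger total."""
    red = sum(coins[0::2])   # 1st, 3rd, 5th, ... coins
    blue = sum(coins[1::2])  # 2nd, 4th, 6th, ... coins
    return max(red, blue)
```

Whenever the two sums differ, this guarantee is strictly more than half of all the coins, so it is a win; when they are equal, it guarantees at least a tie.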
Let’s call this strategy (taking either all the coins in even positions or all the coins in odd positions, whichever has the higher total) the “red/blue strategy”. This strategy constitutes a really elegant proof that Alice can never lose. But it’s not at all the end of the story—in particular, the red/blue strategy is not necessarily Alice’s best strategy! For example, consider this setup: 3, 2, 1, 2.
Notice that the “red” and “blue” coins have equal sums: $3 + 1 = 2 + 2 = 4$. So if Alice just plays according to the red/blue strategy the result will be a tie. But Alice can do better: she should first take the 3, leaving Bob no choice but to take one of the 2s. But now there is a 1 and a 2 left, and Alice should obviously take the 2 (whereas the red/blue strategy would tell her to take the 1). So with best play Alice wins by a score of 5 to Bob’s 3.
This raises the natural question: when is a tie Alice’s best result? In other words, what sort of setups force a tie if both players play their best? Let’s call such setups “tied”.
As a first example, it is not hard to prove that symmetric games are tied. By a symmetric game I mean a game which stays the same when the order of the coins is reversed, i.e. where $a_i = a_{n+1-i}$ if we write the coins as $a_1, \dots, a_n$. For example, 1, 2, 3, 3, 2, 1 is symmetric.
Theorem. Symmetric games are tied.
Proof. Bob can force a tie by always playing on the opposite side from Alice: that is, if Alice takes $a_i$ then Bob takes $a_{n+1-i}$, which has the same value by assumption. Since we already know that Alice will at least get a tie with best play, this must be the best result for both players (in particular, we know Bob cannot do any better with a different strategy).
However, this theorem does not completely characterize tied games. For example, the game 2, 2, 1, 1 is also tied, even though it is not symmetric: each player will get 3 (if Alice starts by taking the 2, Bob takes the other 2, and then they split the 1s; if Alice starts by taking the 1, then it doesn’t matter what Bob takes next; on her next turn Alice will take a 2).
Can we completely characterize which games are tied? I will close by stating an observation, a theorem, and a few conjectures.
Observation. The empty game (consisting of zero coins) is tied, since after zero moves both players end with a value of zero. (This one is not really worth calling a theorem!)
Theorem. If $G$ is tied, then so is $cGc$, the game obtained from $G$ by adding two copies of the coin $c$, one on each end.
Proof. Bob can force a tie: after Alice takes one copy of $c$, Bob takes the other one, and now the game is reduced to $G$, which is tied by assumption.
Note that all symmetric games can be built up in this way, by starting with the empty game and successively adding “layers”: each step wraps the current game $G$ as $cGc$ for some coin $c$. So this observation and theorem together constitute an alternate, inductive proof that all symmetric games are tied.
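The layering can be made concrete in a couple of lines (the names are mine): each layer wraps a coin around both ends, and iterating from the empty game produces exactly the symmetric games.

```python
def wrap(c, game):
    """One 'layer': add two copies of coin c, one on each end."""
    return [c] + game + [c]

def build_symmetric(layers):
    """Grow a game from the empty (tied) game, innermost layer first."""
    game = []
    for c in layers:
        game = wrap(c, game)
    return game
```

For example, `build_symmetric([3, 2, 1])` gives 1, 2, 3, 3, 2, 1, which equals its own reversal; since each wrap preserves tiedness by the theorem, the result is tied.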
If $G_1$ and $G_2$ are games (that is, sequences of an even number of coins), let $G_1 G_2$ denote the game obtained by concatenating the sequences of coins. For example, if $G_1 = 1, 2$ and $G_2 = 3, 4$ then $G_1 G_2 = 1, 2, 3, 4$.
Conjecture. If $G_1$ and $G_2$ are both tied, then so is $G_1 G_2$.
For example, we know 1, 1 and 2, 3, 3, 2 are both tied since they are symmetric. This conjecture then claims that the game 1, 1, 2, 3, 3, 2 is tied as well (which happens to be true in this case, as you can check).
As a more interesting example, the foregoing theorem and conjecture (if true) could together be used to show that is tied (I’m tired of writing the commas in between single-digit coin values), because this game can be broken down as
. And guess what, I have verified computationally that this game is indeed tied! (I’ll discuss how to do this in a future post.)
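The post defers the “how” to a future installment, so here is only a hedged sketch of one way such a check could go (none of these names come from the post): a memoized minimax over spans of the coin sequence, where each player maximizes their own final margin.

```python
from functools import lru_cache

def best_totals(coins):
    """Final (alice_total, bob_total) when both players play optimally,
    each maximizing their own winning margin."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def go(i, j, alice_to_move):
        # Optimal (alice_total, bob_total) for the span coins[i:j].
        if i == j:
            return (0, 0)
        a1, b1 = go(i + 1, j, not alice_to_move)  # mover takes the left coin
        a2, b2 = go(i, j - 1, not alice_to_move)  # mover takes the right coin
        if alice_to_move:
            left, right = (a1 + coins[i], b1), (a2 + coins[j - 1], b2)
            return max(left, right, key=lambda t: t[0] - t[1])
        left, right = (a1, b1 + coins[i]), (a2, b2 + coins[j - 1])
        return max(left, right, key=lambda t: t[1] - t[0])

    return go(0, len(coins), True)

def is_tied(coins):
    alice, bob = best_totals(coins)
    return alice == bob
```

On 3, 2, 1, 2 this gives (5, 3), matching Alice’s win above, and 2, 2, 1, 1 comes out tied.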
Conjecture. This completely characterizes tied games: a game is tied if and only if it is inductively built up, starting from the empty game, by applications of the foregoing theorem and conjecture.
I believe this conjecture less than I believe the first one. There might be other strange ways to build tied games.
We can pretty easily prove a weaker version of your last conjecture: if $G_1$ and $G_2$ are symmetric, then $G_1 G_2$ is tied. Bob always takes from the same end as Alice and forces the tie.
Hmm, I don’t think that strategy quite works for Bob. For example, consider 223113. If Bob always takes from the same end as Alice, then Alice could take a 2 and both 3s. After the 2s have been taken, when Alice takes the first 3, instead of taking from the same end Bob has to take the other 3.
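That failing line of play is easy to script. A small sketch (the helper and the move script are mine, not from the comment) where Bob blindly copies Alice’s end on 2, 2, 3, 1, 1, 3:

```python
def play_same_end(coins, alice_ends):
    """Play out the game with Bob copying Alice: whichever end Alice
    takes from ('L' or 'R'), Bob takes from that same end next.
    alice_ends scripts Alice's choices; returns (alice, bob) totals."""
    coins = list(coins)
    alice = bob = 0
    for end in alice_ends:
        alice += coins.pop(0) if end == "L" else coins.pop()
        bob += coins.pop(0) if end == "L" else coins.pop()
    return alice, bob
```

With the scripted choices L, L, R Alice ends with 8 to Bob’s 4: she collects a 2 and both 3s, exactly as described.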
I believe 1,2,4,3 is tied.
Good point!
This looks a lot like building up a context-free language (using push-down automata)…
The empty word is a tied game (trivially). If G is a tied game (a word from a context-free language), so is cGc. And concatenation is an operation under which CF languages are closed. I’d have to think a bit more about whether all the operations under which CF languages are closed would work here…
Hmm, interesting observations!
Indeed it should hold that if the coins form a word of a Dyck language (numbers interpreted as brackets), then the game is tied.
However, there are obviously games that are tied and do not form Dyck words.
Actually, Thibault Vroonhove came up with a counterexample, see below!
To the second conjecture, it might be pointed out as a counter-example that 1, 3, 4, 2 is tied. Indeed, if Alice takes the 1, then Bob can force a tie by taking the 3 – and if Alice takes the 2, then Bob can force a tie by taking the 4. I think, however, that it could still hold if you say that adding the same number to the second half of a symmetric game still leaves it tied.
The first conjecture seems a bit harder to prove (or disprove), though.
Ah, good point! I will definitely have to rethink that second conjecture. As for the first conjecture, personally I think there is probably a nice proof that just needs the right insight. I have been wrong before, though. =)
I don’t think it’s true that adding the same number to the second half of a symmetric game leaves it tied. For example, 123432 is a win for Alice (it has to be, since the sum of all the coins is odd).
Indeed 🙂
For the first conjecture, let (tied) and (tied). You have , which is not tied.
Oh! Right you are. Well, back to the drawing board. =)
Too bad that the “brackets” are not independent of each other.
I’ve thought about an improved red/blue strategy:
Let us say that Alice is taking all the red coins. Then:
– if the blue coin that Alice could take is greater than the two red coins that would be left to Bob, then Alice improves her score by taking that blue coin (what we could call a punctual change of mind).
– if it turns out that the total of remaining blue coins is greater than the total of remaining red coins, then Alice improves her score by taking all the remaining blue coins (what we could call a global change of mind).
Conjecture: this strategy is Alice’s best strategy.
Interesting fact (verified computationally): every cyclic permutation of the large tie game given in the blog post is also tied, a property that does not hold for tied games in general. I wonder if it’s a consequence of the way it was constructed? I haven’t tried to get a sense of whether it’s a common or unusual property; I don’t see a straightforward way to enumerate tie games (or games).
Fascinating! I do note that for any game constructed in that way, all its cyclic permutations can also be constructed in that way. But we have already falsified the conjecture that all games constructed in this way are tied, so I’m not sure what to make of this! =)
A few other minor observations before I log off:
Let $f(G)$ denote the maximum score Alice can achieve with optimal play if $|G|$ (the number of coins) is even, and the maximum Bob can achieve if $|G|$ is odd. Let $\varepsilon$ denote the empty game. We can define $f$ inductively:

(1) $f(\varepsilon) = 0$,

(2) $f(a) = a$ for a single coin $a$,

(3) $f(a_1 a_2 \cdots a_n) = \max\big(a_1 - f(a_2 \cdots a_n),\ a_n - f(a_1 \cdots a_{n-1})\big)$.
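Read as a winning margin for the player to move, an inductive definition of this shape transcribes directly into code. This is my sketch under that reading (unmemoized, so exponential), not the commenter’s own program:

```python
def f(game):
    """Winning margin for the player to move: the mover takes an end
    coin and subtracts the opponent's best margin on what remains."""
    if not game:
        return 0                              # base case: the empty game
    return max(game[0] - f(game[1:]),         # mover takes the left end
               game[-1] - f(game[:-1]))       # mover takes the right end

def tied(game):
    return f(game) == 0
```

For example, `f([3, 2, 1, 2])` is 2, matching Alice’s 5-to-3 win in the post.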
A game $G$ is tied if $f(G) = 0$. (It is possible for odd games to be tied, e.g. 1, 3, 2.) It has already been shown that
(4)
.
From there we can observe that
(5)
, and so
(6)
.
These don’t go all that far towards characterizing tied games, but (6) is sometimes enough to quickly recognize that a game is not tied, e.g. Thibault’s
.
For (6), we’re assuming WLOG that Alice chooses $a_1$. We could make the corresponding statement for the case where she takes $a_n$, or let it follow from the intuitively obvious fact that reversing a game does not change $f$ (which we can prove by induction on the number of coins).
I see, you are using $f$ to denote not the total amount of coins Alice or Bob can achieve, but actually their winning margin (i.e. the difference between their total and their opponent’s total), right? This does seem like a particularly nice way to look at it; I had been stuck thinking about totals rather than thinking about winning margin.
I am especially intrigued by your equation (6), which seems very related to part of a strategy described by Thibault in a comment above: “if the blue coin that Alice could take is greater than the two red coins that would be left to Bob, then Alice improves her score by taking that blue coin.”
Interesting fact: let . Then Alice can force a win if and only if at least one of the three following statements is true:
–
–
–
Pingback: Greedy coins game update | The Math Less Traveled
Pingback: Another greedy coins game update | The Math Less Traveled
Pingback: Computing optimal play for the greedy coins game, part 4 | The Math Less Traveled