Last time I explained a method for computing best play for instances of the greedy coins game, which is feasible even for large games. This general approach is known as *dynamic programming* and is applicable whenever we have some recursively defined thing where the recursion generates a lot of overlap/duplication: in our case, the game tree and the values for Alice are defined recursively, but we saw that there is a lot of duplication of subgames. The solution is to organize things in such a way that we never have to look at a particular subthing more than once.

In practice, instead of thinking about trees, we can just keep an upper-triangular matrix $a$, where $a_{ij}$ will denote Alice’s best score from the point when only coins $i$ through $j$ are left. (The matrix is upper-triangular since this only makes sense when $i \leq j$.) We can also keep a separate upper-triangular matrix $m$, where $m_{ij}$ is the best move when coins $i$ through $j$ are left (with a special value to indicate that both moves are equally good).

When coins $i$ through $j$ are left, either coin $i$ or coin $j$ will be taken, leaving coins $i+1$ through $j$ or $i$ through $j-1$, respectively. So, if we already know the values of $a_{(i+1)j}$ and $a_{i(j-1)}$, we can use them to compute the optimal value $a_{ij}$ (and to decide which move is better, giving $m_{ij}$). This corresponds to the observation that we can compute the value at a node in the game tree as long as we already know the values at both of its children.
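The bottom-up filling of the tables can be sketched in code. This is only an illustration, not code from the series; the function name `solve` and the `'L'`/`'R'`/`'='` move encoding are my own choices.

```python
def solve(coins):
    """Fill in the DP tables for the greedy coins game.

    a[i][j] = Alice's best total score from the subgame with coins i..j left
    m[i][j] = best first move in that subgame: 'L' (take the left coin),
              'R' (take the right coin), or '=' (both moves equally good)
    """
    n = len(coins)
    a = [[0] * n for _ in range(n)]
    m = [[None] * n for _ in range(n)]

    # Process subgames in order of increasing size, so that the two smaller
    # subgames needed at each step have already been computed.
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # n - length coins have been taken so far; Alice moves on
            # even-numbered turns (she goes first).
            alices_turn = (n - length) % 2 == 0
            if length == 1:
                # One coin left: the player to move simply takes it.
                a[i][j] = coins[i] if alices_turn else 0
                m[i][j] = '='
                continue
            if alices_turn:
                left = coins[i] + a[i + 1][j]   # Alice takes coin i
                right = coins[j] + a[i][j - 1]  # Alice takes coin j
            else:
                left = a[i + 1][j]              # Bob takes coin i
                right = a[i][j - 1]             # Bob takes coin j
            # Alice maximizes her score; Bob (equivalently, since the coin
            # total of the subgame is fixed) minimizes Alice's score.
            a[i][j] = max(left, right) if alices_turn else min(left, right)
            m[i][j] = '=' if left == right else ('L' if a[i][j] == left else 'R')
    return a, m
```

For example, `solve([1, 2, 3])` yields `a[0][2] == 4` with best move `'R'`: Alice takes the 3, Bob takes the 2, and Alice takes the 1.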

Here is one way to visualize these tables, turned diagonally so it ends up looking very similar to the compact trees from my previous post; each cell corresponds to the coins along the base of the triangle which has that cell as its apex. The light blue square in each cell shows the value of $a_{ij}$; the arrows indicate the best move(s) $m_{ij}$, with blue arrows for Alice’s moves and green for Bob’s.

For example, the top cell says that from this state (when all four coins remain) Alice will get 5 points with best play, and the two blue arrows mean that it does not matter which coin Alice takes. Suppose she takes the coin on the left, so that three coins are left. The corresponding cell is the cell at the apex of the triangle whose base is those three coins:

So now Bob can look at this cell to see what his optimal play is. He can see that from this position Alice will get 2 more points if they both play their best. He can also see from the green arrow that his best move is to move into the cell below and to the left, that is, to leave Alice with the coins along that cell’s base, which means he should take the coin on the *right*. Finally, Alice’s best move in this situation is again to take the coin on the right, with the blue arrow pointing to what Bob is left with.

Using this visualization we can easily look at bigger games. For example, in my first post I left readers with the challenge of analyzing this game:

From the table we can now see Alice’s score with best play in the top cell, and that her best move is to start by taking the coin on the *left* (the blue arrow points to the right, meaning she should send Bob to the subgame on the right). It doesn’t matter which move Bob plays next, and then Alice’s best reply depends on which coin Bob takes, and so on.

One nice thing to note is that these tables don’t just tell us what should happen when *both* players play optimally. They tell us the optimal play in *any* subgame. In other words, one could say that they even show us how to best capitalize on mistakes made by our opponent. To play the greedy coins game perfectly, first just compute the tables $a$ and $m$ (actually, this is not too hard to learn to do by hand, especially if you use the above format). Then when it is your turn, if coins $i$ through $j$ remain, just look up $m_{ij}$ to see what your best move is. If you have used the above format you don’t even need to bother keeping track of the indices $i$ and $j$; just find the remaining coins along the bottom and find the apex of their triangle. (In addition to finding your best move you can also confidently, and annoyingly, announce to your opponent that you will get at least $a_{ij}$ points no matter what they do; for extra annoyingness, you can let your opponent choose your move whenever the move table tells you that both moves are optimal.)
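The “look up $m_{ij}$ when it is your turn” step can also be packaged as a small self-contained helper. This is a sketch of my own (the names `make_advisor` and `best_move` are hypothetical, not from the series); it computes the same values as the tables, but lazily, via a memoized recursion.

```python
from functools import lru_cache

def make_advisor(coins):
    """Return a function giving the best move in any subgame of `coins`."""
    n = len(coins)

    @lru_cache(maxsize=None)
    def alice(i, j):
        """Alice's best score from the subgame with coins i..j left."""
        if i > j:
            return 0  # no coins left
        alices_turn = (n - (j - i + 1)) % 2 == 0
        if alices_turn:
            return max(coins[i] + alice(i + 1, j), coins[j] + alice(i, j - 1))
        # Bob minimizes Alice's score (equivalent to maximizing his own).
        return min(alice(i + 1, j), alice(i, j - 1))

    def best_move(i, j):
        """'L', 'R', or '=' -- which end the player to move should take."""
        alices_turn = (n - (j - i + 1)) % 2 == 0
        if alices_turn:
            left = coins[i] + alice(i + 1, j)
            right = coins[j] + alice(i, j - 1)
            better = max(left, right)
        else:
            left = alice(i + 1, j)
            right = alice(i, j - 1)
            better = min(left, right)
        if left == right:
            return '='
        return 'L' if left == better else 'R'

    return best_move
```

For instance, `make_advisor([1, 2, 3])(0, 2)` returns `'R'`: with all three coins left, Alice should take the 3.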

Just for fun, here’s an analysis of a slightly larger game, which is a counterexample to one of my original conjectures about tied games:

One final thing I will mention is that it’s hard to tell from looking at Alice’s *total* score whether the game is tied, or how much Alice will win by. Of course we can compute it if we know the total value of all the coins: Alice will win by the difference between her total and Bob’s, that is, twice the difference between her total and half the total coin value. But it might be nicer to directly visualize not Alice’s *total* score but the *margin* of her victory over Bob. This is related to a function defined by Eric Burgess in a previous comment; more on this in a future post.
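One textbook way to compute the margin directly is a “negamax”-style recurrence: the margin for the player to move is the value of the coin taken minus the opponent’s margin in the remaining subgame. The following sketch is my own illustration of that standard technique (I am not claiming it is the function from the comments), with hypothetical names.

```python
from functools import lru_cache

def victory_margin(coins):
    """Margin of victory for the player to move (Alice at the start).

    Positive means Alice wins by that much, negative means Bob wins,
    zero means the game is tied.  Equals 2 * (Alice's total) - (coin total).
    """
    n = len(coins)

    @lru_cache(maxsize=None)
    def d(i, j):
        if i > j:
            return 0  # empty game: margin is zero
        # Take one end, then subtract the opponent's margin in what remains.
        return max(coins[i] - d(i + 1, j), coins[j] - d(i, j - 1))

    return d(0, n - 1)
```

For example, `victory_margin([1, 2, 3])` is `2` (Alice scores 4, Bob 2), and `victory_margin([1, 1])` is `0`, a tie.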