What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

In my previous post, we computed $(1+\sqrt{2})^n$ for some small $n$ and conjectured that the answer is $9$, since these powers seem to be alternately just under and just over an integer. Today, I’ll explain a clever solution, which I learned from Colin Wright (several commenters also posted similar approaches).

First, let’s think about expanding $(1+\sqrt{2})^n$ using the Binomial Theorem:

$(1+\sqrt{2})^n = \binom{n}{0} + \binom{n}{1}\sqrt{2} + \binom{n}{2}(\sqrt{2})^2 + \binom{n}{3}(\sqrt{2})^3 + \cdots + \binom{n}{n}(\sqrt{2})^n$

We get a sum of powers of $\sqrt{2}$ with various coefficients. Notice that when $\sqrt{2}$ is raised to an *even* power, we get an integer: $(\sqrt{2})^2 = 2$, $(\sqrt{2})^4 = 4$, and so on. The odd powers give us irrational things. So if we could find some way to “cancel out” the odd, irrational powers, we would be left with a sum of a bunch of integers.

Here is where we can pull a clever trick: consider $(1-\sqrt{2})^n$. If we expand it by the Binomial Theorem, we find

$(1-\sqrt{2})^n = \binom{n}{0} - \binom{n}{1}\sqrt{2} + \binom{n}{2}(\sqrt{2})^2 - \binom{n}{3}(\sqrt{2})^3 + \cdots$

but this is the same as the expansion of $(1+\sqrt{2})^n$, with alternating signs: the odd terms—which are exactly the irrational ones—are negative, and the even terms are positive. So if we add these two expressions, the odd terms will cancel out, leaving us with two copies of all the even terms:

$(1+\sqrt{2})^n + (1-\sqrt{2})^n = 2\left[\binom{n}{0} + \binom{n}{2}(\sqrt{2})^2 + \binom{n}{4}(\sqrt{2})^4 + \cdots\right]$

For now, we don’t care about the value of the sum on the right—the important thing to note is that it is an integer, since it is a sum of integers multiplied by *even* powers of $\sqrt{2}$, which are just powers of two.

We are almost done. Notice that $1 < \sqrt{2} < 2$, so $-1 < 1 - \sqrt{2} < 0$. Since this has an absolute value less than $1$, its powers will get increasingly close to zero; since it is negative, its powers will alternate between being positive and negative. Hence,

$(1+\sqrt{2})^n + (1-\sqrt{2})^n$

is an integer, and $(1-\sqrt{2})^n$ is very small, so $(1+\sqrt{2})^n$ must be very close to that integer. When $n$ is even, $(1-\sqrt{2})^n$ is positive, so $(1+\sqrt{2})^n$ must be slightly less than an integer; conversely, when $n$ is odd we conclude that $(1+\sqrt{2})^n$ is slightly greater than an integer.
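One way to see the cancellation concretely is with exact arithmetic in $\mathbb{Z}[\sqrt{2}]$, representing $a + b\sqrt{2}$ as the integer pair $(a, b)$. This is a sketch of mine, not code from the original solution:

```python
# Exact arithmetic in Z[sqrt(2)]: represent a + b*sqrt(2) as the pair (a, b).
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    return (a * c + 2 * b * d, a * d + b * c)

def power(x, n):
    result = (1, 0)  # the pair representing the integer 1
    for _ in range(n):
        result = mul(result, x)
    return result

for n in range(1, 8):
    a, b = power((1, 1), n)   # (1 + sqrt2)^n = a + b*sqrt2
    c, d = power((1, -1), n)  # (1 - sqrt2)^n = c + d*sqrt2
    assert (c, d) == (a, -b)  # irrational parts are exact negatives...
    print(n, 2 * a)           # ...so the sum is the integer 2a
```

The printed sums $2, 6, 14, 34, 82, \ldots$ satisfy the recurrence $s_n = 2s_{n-1} + s_{n-2}$ (both $1+\sqrt{2}$ and $1-\sqrt{2}$ are roots of $x^2 = 2x + 1$), which is another way to see that every term is an integer.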

To complete the solution to this particular problem, we have to make sure that $(1-\sqrt{2})^{500}$ is *small enough* that we can say for sure the 99th digit after the decimal point of $(1+\sqrt{2})^{500}$ is still 9. That is, we need to prove that, say, $|1-\sqrt{2}|^{500} < 10^{-100}$. This will be true if we can show that $|1-\sqrt{2}| < 10^{-1/5}$ (just raise both sides to the $500$th power), and in turn, taking the base 10 logarithm of both sides, this will be true if $\log_{10}|1-\sqrt{2}| < -1/5$. At this point we can simply confirm by computation that $\log_{10}|1-\sqrt{2}| \approx -0.3828$. The fact that we get $-0.3828$—well below $-1/5$—means that $|1-\sqrt{2}|^{500} < 10^{-191}$, so not just 99, but actually the first 191 digits after the decimal point of $(1+\sqrt{2})^{500}$ are 9. (It turns out that the 192nd digit is a 5.)
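As a sanity check on that bound, here is a quick sketch of mine using Python’s arbitrary-precision `decimal` module (not part of the original post) that counts the nines directly:

```python
from decimal import Decimal, getcontext

# 500 significant digits: enough for ~192 digits of integer part plus
# ~192 digits after the decimal point, with margin for rounding error.
getcontext().prec = 500

x = (1 + Decimal(2).sqrt()) ** 500
frac = str(x).split(".")[1]                 # digits after the decimal point
nines = len(frac) - len(frac.lstrip("9"))   # count the leading run of 9s
print(nines)  # -> 191
```

Under these assumptions the count comes out to 191, matching the logarithm argument above.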

The rabbit hole goes much deeper than this, however!


What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

Let’s play around with this a bit and see if we notice any patterns. First, $1+\sqrt{2}$ itself is approximately

$1 + \sqrt{2} \approx 2.4142135\ldots,$

so its powers are going to get large. Let’s use a computer to find the first ten or so:
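For instance, here’s a quick way to compute them (a sketch of mine using Python’s `decimal` module, not code from the original post):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50           # 50 significant digits is plenty here
x = 1 + Decimal(2).sqrt()        # 1 + sqrt(2) = 2.41421356...

for n in range(1, 11):
    p = x ** n
    off = abs(p - p.to_integral_value())  # distance to the nearest integer
    print(f"{n:2d}  {p:20.10f}  (off by {off:.1e})")
```

The distances to the nearest integer shrink by a factor of about $0.414$ (that is, $\sqrt{2}-1$) with each successive power.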

Sure enough, these are getting big (the tenth power is already bigger than $6000$), but look what’s happening to the part after the decimal: curiously, it seems that the powers of $1+\sqrt{2}$ are getting rather close to being integers! For example, $(1+\sqrt{2})^{10}$ is just under $6726$, only about $0.00015$ away.

At this point, I had seen enough to notice and conjecture the following patterns (and I hope you have too):

- The powers of $1+\sqrt{2}$ seem to be getting closer and closer to integers.
- In particular, they seem to alternate between being just *under* an integer (for even powers) and just *over* an integer (for odd powers).

If this is true, the decimal expansion of $(1+\sqrt{2})^{500}$ must be of the form $N.9999\ldots$, for some big integer $N$ followed by some number of $9$s after the decimal point. And it seems reasonable that if Colin is posing this question, it must have more than 99 nines, which means the answer would be 9.

But *why* does this happen? Do the powers really keep alternating between being just over and just under an integer? And how close do they get—how do we know for sure that $(1+\sqrt{2})^{500}$ is close enough to an integer that the 99th digit will be a 9? This is what I want to explore in a series of future posts—and, as should come as no surprise, it will take us on a tour of some fascinating mathematics!


What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

Of course, it’s simple enough to use a computer to find the answer; any language or software system that can compute with arbitrary-precision real numbers can find the correct answer in a fraction of a second. But that’s obviously not the point! Can we use logical reasoning to *deduce* or *prove* the correct answer, without doing lots of computation? Even if we find the answer computationally, can we explain *why* it is the right answer? Solving this puzzle took me down a fascinating rabbit hole that I’d like to share with you over the next post or three or eight.

For the moment I’ll just let you think about the puzzle. Although using a computer to simply compute the answer is cheating, I do encourage the use of a computer or calculator to try smaller examples and look for patterns. It is not too hard to see a pattern and conjecture the right answer; the interesting part, of course, is to figure out why this pattern happens, and to prove that it continues.


Go is a very old board game, invented in ancient China over 2500 years ago. Players take turns playing white and black stones on the intersections of a grid, with the goal being to surround more territory than the opponent. The rules themselves are actually quite simple—it takes just a few minutes to learn the basics—but the simple rules have complex emergent properties that make the strategy incredibly deep. Top human players spend their whole lives devoted to studying and improving at the game.

I enjoy playing Go for many of the same reasons that I love math—it is beautiful, deep, often surprising, and rewards patient study. If you want to learn how to play—and I highly recommend that you do—try starting here!

Ever since IBM’s Deep Blue beat world champion Garry Kasparov in a chess match in 1997, almost 20 years ago, the best chess-playing computer programs have been able to defeat even top human players. Go, on the other hand, is much more difficult for computers to play. There are several reasons for this:

- The number of possible moves is *much* higher in Go than in chess, and games tend to last much longer (a typical game of Go takes hundreds of moves as opposed to around 40 moves for chess). So it is completely infeasible to just try all possible moves by brute force.
- With chess, it is not too hard to evaluate who is winning in a particular position, by looking at which pieces each player has left and where the pieces are on the board; with Go, on the other hand, evaluating who is winning can be extremely difficult.

Up until just a few years ago, the best Go-playing programs could play at the level of a decent amateur player but could not come anywhere close to beating professional-level players. Most people thought that further improvements would be very difficult to achieve and that it would be another decade or two before computer programs could beat top human players. So AlphaGo came as quite a surprise, and is based on recent fundamental advances in machine learning techniques (which are already having lots of other cool applications).

It is particularly interesting that AlphaGo works in a much more “human-like” way than Deep Blue did. Deep Blue was able to win at chess essentially by being really fast at evaluating lots of potential moves—it could analyze hundreds of millions of positions per second—and by consulting giant tables of memorized positions. Chess-playing programs are better than us at chess, yes, but as far as I know we haven’t particularly learned anything from them.

AlphaGo, on the other hand, “learned” how to play Go by studying thousands of human games and then improving by playing itself many, many times. It uses several “neural networks”—a machine learning technique which is ultimately modeled on the structure of the human brain—both to predict promising moves to study and to evaluate board positions. Of course it also plays out many potential sequences of moves to evaluate them. So it plays using a combination of pattern recognition and speculatively playing out potential sequences of moves—which is exactly how humans play.

The amazing thing is that AlphaGo has actually taught us new things about Go—on many occasions it has played moves that humans describe as surprising and beautiful. It has also played moves that the accepted wisdom said were bad moves, but AlphaGo showed how to make them work. One might expect that people in the Go world would feel a sense of loss upon being beaten by a computer program—but the feeling is actually quite the opposite, because of the beautiful way AlphaGo plays and how much it has taught us about the game.


`@byorgey`

Here’s my initial entry to the `#proofinatoot` contest—the idea is to write a proof that fits in Mastodon’s 500-character limit for “toots” (you know, like a tweet, but more mastodon-y). To fit this proof into 500 characters I had to leave out a lot of details; it was a fun exercise to take a cool proof and try to distill it down to just its core ideas. Can you fill in the details I omitted? (Also, can you figure out what word is commonly used to refer to graphs with these properties?)

Let $G$ be a graph with $n$ vertices. Any two of the following imply the third: 1. $G$ is connected; 2. $G$ is acyclic; 3. $G$ has $n-1$ edges.

$1, 2 \Rightarrow 3$: by induction. Any walk must reach a leaf. Delete it and apply the IH.

$1, 3 \Rightarrow 2$: by induction. Sum of degrees is $2(n-1)$, so there are at least two leaves. Delete one and apply the IH.

$2, 3 \Rightarrow 1$: Let $G$ have $c$ connected components. Since each component has one fewer edge than it has vertices, the total number of edges is $n - c = n - 1$, hence $c = 1$.
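The theorem is small enough to spot-check exhaustively. Here is a sketch of mine (not part of the toot) that enumerates all $2^{10}$ graphs on five vertices and confirms that any two of the three properties force the third:

```python
from itertools import combinations

def num_components(n, edges):
    # Union-find with path halving.
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)})

def has_cycle(n, edges):
    # A cycle exists iff some edge joins two already-connected vertices.
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

n = 5
possible = list(combinations(range(n), 2))
for k in range(len(possible) + 1):
    for edges in combinations(possible, k):
        connected = num_components(n, edges) == 1
        acyclic = not has_cycle(n, edges)
        right_count = len(edges) == n - 1
        # Any two of the three properties must force the third.
        if connected + acyclic + right_count >= 2:
            assert connected and acyclic and right_count
print("checked all", 2 ** len(possible), "graphs on", n, "vertices")
```

Of course this checks only one value of $n$; the induction in the toot is what carries it to all $n$.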


In practice, instead of thinking about trees, we can just keep an upper-triangular matrix $a$, where $a[i,j]$ will denote Alice’s best score from the point when only coins $i \ldots j$ are left. (The matrix is upper-triangular since this only makes sense when $i \leq j$.) We can also keep a separate upper-triangular matrix $m$, where $m[i,j]$ records the best move when coins $i \ldots j$ are left (with a special value meaning that both moves are equally good).

When coins $i \ldots j$ are left, either coin $i$ or coin $j$ will be taken, leaving coins $i+1 \ldots j$ or $i \ldots j-1$. So, if we already know the values of $a[i+1,j]$ and $a[i,j-1]$, we can use them to compute the optimal value for $a[i,j]$ (and to decide which move is better). This corresponds to the observation that we can compute the value at a node in the game tree as long as we already know the values at both of its children.
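Here is a sketch of the table computation in Python (my own code, with one deliberate difference from the post: these tables record the best score of the *player to move* rather than Alice’s score specifically, which avoids tracking whose turn it is):

```python
def solve(coins):
    """Fill DP tables for the greedy coins game.

    a[i][j] holds the best total the player to move can collect from
    coins[i..j]; m[i][j] holds the best first move: "L", "R", or "LR"
    when both moves are equally good.
    """
    n = len(coins)
    a = [[0] * n for _ in range(n)]
    m = [[""] * n for _ in range(n)]
    for i in range(n):
        a[i][i], m[i][i] = coins[i], "L"  # one coin: the moves coincide
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            total = sum(coins[i : j + 1])
            # Take coin i (or j); the opponent then plays best on the
            # rest, and we collect whatever the opponent leaves behind.
            left = coins[i] + (total - coins[i]) - a[i + 1][j]
            right = coins[j] + (total - coins[j]) - a[i][j - 1]
            a[i][j] = max(left, right)
            m[i][j] = "L" if left > right else ("R" if right > left else "LR")
    return a, m

# Hypothetical example (these coin values are mine, not from the post):
a, m = solve([1, 2, 3, 4])
print(a[0][3], m[0][3])  # -> 6 R
```

With the hypothetical coins $1, 2, 3, 4$, the first player can guarantee 6 of the 10 points, starting by taking the right-hand coin.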

Here is one way to visualize these tables, turned diagonally so the result ends up looking very similar to the compact trees from my previous post; each cell corresponds to the coins along the base of the triangle which has that cell as its apex. The light blue square in each cell shows the value of $a[i,j]$; the arrows indicate the best move(s) $m[i,j]$, with blue arrows for Alice’s moves and green for Bob’s.

For example, the top cell says that from this state (when all four coins remain) Alice will get 5 points with best play, and the two blue arrows mean that it does not matter which coin Alice takes. Suppose she takes one of the end coins, so that three coins are left. The corresponding cell is the cell at the apex of the triangle whose base is those three coins:

So now Bob can look at this cell to see what his optimal play is. He can see that from this position Alice will get 2 more points if they both play their best. He can also see from the green arrow that his best move is to move into the cell below and to the left—which means he should take the coin on the *right*. Finally, Alice’s best move in this situation is again to take the coin on the right, with the blue arrow pointing to what Bob is left with.

Using this visualization we can easily look at bigger games. For example, in my first post I left readers with the challenge of analyzing this game:

From the table we can now see how many points Alice will score with best play, and that her best move is to start by taking the coin on the left (the blue arrow points to the right, meaning that taking the coin on the left sends Bob to the game on the right). It doesn’t matter which move Bob plays next, and then Alice’s following move will depend on which coin Bob took, and so on.

One nice thing to note is that these tables don’t just tell us what should happen when *both* players play optimally. They tell us the optimal play in *any* subgame. In other words, one could say that they even show us how to best capitalize on mistakes made by our opponent. To play the greedy coins game perfectly, first just compute the tables $a$ and $m$ (actually, this is not too hard to learn how to do by hand, especially if you use the above format). Then when it is your turn, if coins $i \ldots j$ remain, just look up $m[i,j]$ to see what your best move is. If you have used the above format you don’t even need to bother with keeping track of the indices $i$ and $j$; just find the remaining coins along the bottom and find the apex of their triangle. (In addition to finding your best move you can also confidently, and annoyingly, announce to your opponent that you will get at least $a[i,j]$ points no matter what they do; for extra annoyingness, you can let your opponent choose your move whenever the move table tells you that both moves are optimal.)

Just for fun, here’s an analysis of a slightly larger game, which is a counterexample to one of my original conjectures about tied games:

One final thing I will mention is that it’s hard to tell from looking at Alice’s *total* score whether the game is tied, or how much Alice will win by. Of course we can compute it if we know the total value of all the coins: Alice will win by the difference between her total and half the total coin value. But it might be nicer to directly visualize not Alice’s *total* score but the *margin* of her victory over Bob. This is related to the function defined by Eric Burgess in a previous comment; more on this in a future post.
