…and this picture of primitive roots I made a year ago:

At first I didn’t see the connection, but Snowball was absolutely right. Once I understood it, I made this little animation to illustrate the connection more clearly:

(Some of the colors flicker a bit; I’m not sure why.)

I learned from Lucas A. Brown that this is sometimes known as “Euclid’s Orchard”. Imagine that there is a tall, straight tree growing from each grid point other than the origin. If you stand at the origin, then the trees you can see are exactly those at grid points $(m,n)$ with $\gcd(m,n) = 1$. This is because if a tree is at $(km, kn)$ for some $k > 1$, then it is blocked from your sight by the tree at $(m,n)$: both lie exactly along the line from the origin with slope $n/m$. But if a tree is at some point with relatively prime coordinates $(m,n)$, then it will be the first thing you see when you look along the line with slope exactly $n/m$.
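The visibility rule is easy to experiment with. Here is a quick sketch in Python (my own illustration, not part of the original post) that lists the visible grid points in a small corner of the orchard:

```python
from math import gcd

def visible(size):
    """Grid points (m, n) with 0 <= m, n <= size whose tree is visible
    from the origin: exactly those with gcd(m, n) == 1."""
    return [(m, n)
            for m in range(size + 1)
            for n in range(size + 1)
            if gcd(m, n) == 1]

points = visible(4)
print((3, 4) in points)  # True: 3 and 4 are relatively prime
print((2, 4) in points)  # False: blocked by the tree at (1, 2)
```

Note that the origin itself is excluded automatically, since `gcd(0, 0) == 0`.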

(…well, actually, all of the above is only really true if we assume the trees are infinitely skinny! Otherwise trees will end up blocking other trees which are almost, but not quite, in line with them. So try not to breathe while standing at the origin, OK? You might knock over some of the infinitely skinny trees.)

Here’s the portion of the grid surrounding the origin, with the lines of sight drawn in along with the trees you can’t see because they are exactly in line with some closer tree. (I’ve made the trees skinny enough so that they don’t accidentally block any other lines of sight—but if we expanded the grid we’d have to make the trees even skinnier.)

Now, what about the colors of the dots? Commenter Snowball guessed this correctly: each point is colored according to the number of steps needed for the Euclidean algorithm to reach 1. Darker colors correspond to more steps. It is interesting to note that there seems to be (eight symmetric copies of) one particularly dark radial stripe, indicated below:

In fact, the slope of this stripe is exactly $\varphi = (1+\sqrt{5})/2$, the golden ratio! This corresponds to the fact (first proved by Gabriel Lamé in 1844) *that consecutive Fibonacci numbers are worst-case inputs to the Euclidean algorithm*—that is, it takes more steps for the Euclidean algorithm to compute $\gcd(F_{n+1}, F_n)$ than for any other inputs of equal or smaller magnitude. Since the ratio $F_{n+1}/F_n$ of consecutive Fibonacci numbers tends to $\varphi$, the dots with the darkest color relative to their neighbors all lie approximately along the line with slope $\varphi$. What’s interesting to me is that lots of other dots that lie close to this line are also relatively dark. Why does this happen?
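One way to watch Lamé’s theorem in action is to count division steps directly. This little Python experiment (my own sketch, with made-up function names, not from the original post) searches for the worst-case inputs below a given bound:

```python
def euclid_steps(a, b):
    """Number of division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

def worst_case(limit):
    """The maximum step count over all pairs 1 <= b <= a <= limit,
    together with a pair that achieves it."""
    return max((euclid_steps(a, b), a, b)
               for a in range(1, limit + 1)
               for b in range(1, a + 1))

print(worst_case(100))  # the maximum is achieved by the consecutive
                        # Fibonacci numbers 89 and 55
```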

What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

However, the solution depended on having the clever idea to add $(1-\sqrt{2})^n$. But there are other ways to come to similar conclusions, and in fact this is not the way I originally solved it.

The first thing I did when attacking the problem was to work out some small powers of $1+\sqrt{2}$ by hand:

$$(1+\sqrt{2})^2 = 1 + 2\sqrt{2} + 2 = 3 + 2\sqrt{2}$$
$$(1+\sqrt{2})^3 = (1+\sqrt{2})(3 + 2\sqrt{2}) = 3 + 2\sqrt{2} + 3\sqrt{2} + 4 = 7 + 5\sqrt{2}$$

and so on. It quickly becomes clear (if you have not already seen this kind of thing before) that $(1+\sqrt{2})^n$ will always be of the form $a + b\sqrt{2}$. Let’s define $a_n$ and $b_n$ to be the coefficients of the $n$th power of $1+\sqrt{2}$, that is, $(1+\sqrt{2})^n = a_n + b_n\sqrt{2}$. Now the natural question is to wonder what, if anything, we can say about the coefficients $a_n$ and $b_n$? Quite a lot, as it turns out!

We can start by working out what happens when we multiply $a_n + b_n\sqrt{2}$ by another copy of $1+\sqrt{2}$:

$$(a_n + b_n\sqrt{2})(1 + \sqrt{2}) = a_n + a_n\sqrt{2} + b_n\sqrt{2} + 2b_n = (a_n + 2b_n) + (a_n + b_n)\sqrt{2}$$

But $(1+\sqrt{2})^{n+1} = a_{n+1} + b_{n+1}\sqrt{2}$ by definition, so this means that $a_{n+1} = a_n + 2b_n$ and $b_{n+1} = a_n + b_n$. As for base cases, we also know that $(1+\sqrt{2})^0 = 1$, so $a_0 = 1$ and $b_0 = 0$. From this point it is easy to quickly make a table of some of the values of $a_n$ and $b_n$:

| $n$ | $a_n$ | $b_n$ |
|----:|------:|------:|
| 0 | 1 | 0 |
| 1 | 1 | 1 |
| 2 | 3 | 2 |
| 3 | 7 | 5 |
| 4 | 17 | 12 |
| 5 | 41 | 29 |
| 6 | 99 | 70 |

Each entry in the $b_n$ column is the sum of the $a_n$ and $b_n$ from the previous row; each $a_n$ is the sum of the previous $a_n$ and twice the previous $b_n$. You might enjoy playing around with these sequences to see if you notice any patterns. It turns out that there is an equivalent way to define the $a_n$ and $b_n$ separately, such that each $a_n$ only depends on previous values of $a$, and likewise each $b_n$ only depends on previous values of $b$. I’ll explain how to do that next time, but leave it as a challenge for you in the meantime!
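The recurrence also makes the table easy to generate by machine. Here is a quick sketch in Python (my own code, not part of the original post):

```python
def coeff_table(n):
    """Rows (k, a_k, b_k) with (1 + sqrt(2))**k = a_k + b_k*sqrt(2),
    built from the recurrence a' = a + 2b, b' = a + b."""
    a, b = 1, 0  # (1 + sqrt(2))**0 = 1 + 0*sqrt(2)
    rows = [(0, a, b)]
    for k in range(1, n + 1):
        a, b = a + 2 * b, a + b
        rows.append((k, a, b))
    return rows

for row in coeff_table(6):
    print(row)  # (0, 1, 0), (1, 1, 1), (2, 3, 2), (3, 7, 5), ...
```

As a cross-check, the coefficients satisfy $a_n^2 - 2b_n^2 = (-1)^n$, since $1+\sqrt{2}$ has norm $-1$.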

What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

In my previous post, we computed $(1+\sqrt{2})^n$ for some small $n$ and conjectured that the answer is $9$, since these powers seem to be alternately just under and just over an integer. Today, I’ll explain a clever solution, which I learned from Colin Wright (several commenters also posted similar approaches).

First, let’s think about expanding $(1+\sqrt{2})^n$ using the Binomial Theorem:

$$(1+\sqrt{2})^n = \sum_{k=0}^n \binom{n}{k} (\sqrt{2})^k$$

We get a sum of powers of $\sqrt{2}$ with various coefficients. Notice that when $\sqrt{2}$ is raised to an *even* power, we get an integer: $(\sqrt{2})^2 = 2$, $(\sqrt{2})^4 = 4$, and so on. The odd powers give us irrational things. So if we could find some way to “cancel out” the odd, irrational powers, we would be left with a sum of a bunch of integers.

Here is where we can pull a clever trick: consider $(1-\sqrt{2})^n$. If we expand it by the Binomial Theorem, we find

$$(1-\sqrt{2})^n = \sum_{k=0}^n \binom{n}{k} (-\sqrt{2})^k = \sum_{k=0}^n (-1)^k \binom{n}{k} (\sqrt{2})^k,$$

but this is the same as the expansion of $(1+\sqrt{2})^n$, with alternating signs: the odd terms—which are exactly the irrational ones—are negative, and the even terms are positive. So if we add these two expressions, the odd terms will cancel out, leaving us with two copies of all the even terms:

$$(1+\sqrt{2})^n + (1-\sqrt{2})^n = 2\sum_{\substack{0 \le k \le n\\ k\text{ even}}} \binom{n}{k} (\sqrt{2})^k$$

For now, we don’t care about the value of the sum on the right—the important thing to note is that it is an integer, since it is a sum of integers multiplied by *even* powers of $\sqrt{2}$, which are just powers of two.

We are almost done. Notice that $1 < \sqrt{2} < 2$, so $-1 < 1-\sqrt{2} < 0$. Since this has an absolute value less than $1$, its powers will get increasingly close to zero; since it is negative, its powers will alternate between being positive and negative. Hence,

$$(1+\sqrt{2})^n + (1-\sqrt{2})^n$$

is an integer, and $(1-\sqrt{2})^n$ is very small, so $(1+\sqrt{2})^n$ must be very close to that integer. When $n$ is even, $(1-\sqrt{2})^n$ is positive, so $(1+\sqrt{2})^n$ must be slightly less than an integer; conversely, when $n$ is odd we conclude that $(1+\sqrt{2})^n$ is slightly greater than an integer.
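Here is a quick numerical sanity check of this argument in Python (my own sketch, using ordinary floating point, so it is only convincing for small $n$):

```python
from math import sqrt

# The sum (1 + sqrt(2))**n + (1 - sqrt(2))**n should land on an integer,
# while the conjugate term (1 - sqrt(2))**n alternates in sign and
# shrinks toward zero.
for n in range(1, 11):
    s = (1 + sqrt(2)) ** n + (1 - sqrt(2)) ** n
    print(n, s, (1 - sqrt(2)) ** n)
```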

To complete the solution to this particular problem, we have to make sure that $(\sqrt{2}-1)^{500}$ is *small enough* that we can say for sure the 99th digit after the decimal point of $(1+\sqrt{2})^{500}$ is still 9. That is, we need to prove that, say, $(\sqrt{2}-1)^{500} < 10^{-99}$. This will be true if we can show that $\sqrt{2}-1 < 10^{-99/500}$ (just raise both sides to the $500$th power), and in turn, taking the base 10 logarithm of both sides, this will be true if $\log_{10}(\sqrt{2}-1) < -99/500 = -0.198$. At this point we can simply confirm by computation that $\log_{10}(\sqrt{2}-1) \approx -0.3828 < -0.198$. The fact that we get $-0.3828$ means that not just 99, but actually the first $\lfloor 500 \cdot 0.3828 \rfloor = 191$ digits after the decimal point of $(1+\sqrt{2})^{500}$ are 9. (It turns out that the 192nd digit is a 5.)
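We can even verify the digit claim exactly, using nothing but integer arithmetic. The sketch below (my own check, not from the original post) uses the integer coefficients $a$, $b$ with $(1+\sqrt{2})^{500} = a + b\sqrt{2}$; since $a$ is an integer, the fractional part we want is the fractional part of $b\sqrt{2}$, which Python's exact integer square root `math.isqrt` lets us compute to any number of digits:

```python
from math import isqrt

def power_coeffs(n):
    """Integers a, b with (1 + sqrt(2))**n = a + b*sqrt(2)."""
    a, b = 1, 0
    for _ in range(n):
        a, b = a + 2 * b, a + b
    return a, b

a, b = power_coeffs(500)
prec = 250  # number of decimal digits to compute
# floor(b * sqrt(2) * 10**prec), computed exactly with an integer sqrt
scaled = isqrt(2 * b * b * 10 ** (2 * prec))
digits = str(scaled)[-prec:]  # the digits after the decimal point
print(digits[:195])
```

Running this shows a long run of 9s at the start of the fractional part, confirming the count above.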

The rabbit hole goes much deeper than this, however!

What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

Let’s play around with this a bit and see if we notice any patterns. First, $1+\sqrt{2}$ itself is approximately

$$1 + \sqrt{2} \approx 2.4142135624,$$

so its powers are going to get large. Let’s use a computer to find the first ten or so:
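Here is the sort of quick experiment I mean, sketched in Python (ordinary floating point is accurate enough at these small exponents):

```python
from math import sqrt

x = 1 + sqrt(2)
for n in range(1, 11):
    p = x ** n
    # each power, and its distance to the nearest integer
    print(n, p, round(p) - p)
```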

Sure enough, these are getting big (the tenth power is already bigger than $6000$), but look what’s happening to the part after the decimal: curiously it seems that the powers of $1+\sqrt{2}$ are getting rather close to being integers! For example, $(1+\sqrt{2})^{10}$ is just under $6726$, only about $0.0001$ away.

At this point, I had seen enough to notice and conjecture the following patterns (and I hope you have too):

- The powers of $1+\sqrt{2}$ seem to be getting closer and closer to integers.
- In particular, they seem to alternate between being just *under* an integer (for even powers) and just *over* an integer (for odd powers).

If this is true, the decimal expansion of $(1+\sqrt{2})^{500}$ must be of the form $N.99999\ldots$ for some big integer $N$ and some number of $9$s after the decimal point. And it seems reasonable that if Colin is posing this question, it must have more than 99 nines, which means the answer would be 9.

But *why* does this happen? Do the powers really keep alternating between being just over and just under an integer? And how close do they get—how do we know for sure that $(1+\sqrt{2})^{500}$ is close enough to an integer that the 99th digit will be a 9? This is what I want to explore in a series of future posts—and, as should come as no surprise, it will take us on a tour of some fascinating mathematics!

What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

Of course, it’s simple enough to use a computer to find the answer; any language or software system that can compute with arbitrary-precision real numbers can find the correct answer in a fraction of a second. But that’s obviously not the point! Can we use logical reasoning to *deduce* or *prove* the correct answer, without doing lots of computation? Even if we find the answer computationally, can we explain *why* it is the right answer? Solving this puzzle took me down a fascinating rabbit hole that I’d like to share with you over the next post or three or eight.

For the moment I’ll just let you think about the puzzle. Although using a computer to simply compute the answer is cheating, I do encourage the use of a computer or calculator to try smaller examples and look for patterns. It is not too hard to see a pattern and conjecture the right answer; the interesting part, of course, is to figure out why this pattern happens, and to prove that it continues.

Go is a very old board game, invented in ancient China over 2500 years ago. Players take turns playing white and black stones on the intersections of a grid, with the goal being to surround more territory than the opponent. The rules themselves are actually quite simple—it takes just a few minutes to learn the basics—but the simple rules have complex emergent properties that make the strategy incredibly deep. Top human players spend their whole lives devoted to studying and improving at the game.

I enjoy playing Go for many of the same reasons that I love math—it is beautiful, deep, often surprising, and rewards patient study. If you want to learn how to play—and I highly recommend that you do—try starting here!

Ever since IBM’s Deep Blue beat world champion Garry Kasparov in a chess match in 1997, almost 20 years ago, the best chess-playing computer programs have been able to defeat even top human players. Go, on the other hand, is much more difficult for computers to play. There are several reasons for this:

- The number of possible moves is *much* higher in Go than in chess, and games tend to last much longer (a typical game of Go takes hundreds of moves, as opposed to around 40 for chess). So it is completely infeasible to just try all possible moves by brute force.
- With chess, it is not too hard to evaluate who is winning in a particular position, by looking at which pieces each player has left and where those pieces are on the board; with Go, on the other hand, evaluating who is winning can be extremely difficult.

Up until just a few years ago, the best Go-playing programs could play at the level of a decent amateur but could not come anywhere close to beating professional-level players. Most people thought that further improvements would be very difficult to achieve and that it would be another decade or two before computer programs could beat top human players. So AlphaGo came as quite a surprise. It is based on recent fundamental advances in machine learning techniques (which are already finding lots of other cool applications).

It is particularly interesting that AlphaGo works in a much more “human-like” way than Deep Blue did. Deep Blue was able to win at chess essentially by being really fast at evaluating lots of potential moves—it could analyze hundreds of millions of positions per second—and by consulting giant tables of memorized positions. Chess-playing programs are better than us at chess, yes, but as far as I know we haven’t particularly learned anything from them.

AlphaGo, on the other hand, “learned” how to play Go by studying thousands of human games and then improving by playing against itself many, many times. It uses several “neural networks”—a machine learning technique ultimately modeled on the structure of the human brain—both to predict promising moves to study and to evaluate board positions. Of course it also plays out many potential sequences of moves to evaluate them. So it plays using a combination of pattern recognition and speculatively playing out potential sequences of moves—which is exactly how humans play. The amazing thing is that AlphaGo has actually taught us new things about Go—on many occasions it has played moves that humans describe as surprising and beautiful. It has also played moves that the accepted wisdom said were bad moves, but AlphaGo showed how to make them work. One might expect that people in the Go world would feel a sense of loss upon being beaten by a computer program—but the feeling is actually quite the opposite, because of the beautiful way AlphaGo plays and how much it has taught us about the game.
