Let’s start with the statement that looks the least general:

If $p$ is prime and $a$ is an integer where $1 \leq a < p$, then $a^{p-1} \equiv 1 \pmod{p}$.

(Recall that $m \equiv n \pmod{p}$ means that $m$ and $n$ have the same remainder when you divide them by $p$.) For example, $p = 5$ is prime, and we can check that for each $a$ from $1$ to $4$, if you raise $a$ to the $4$th power, you get a number which is one more than a multiple of $5$:

$1^4 = 1 = 0 \cdot 5 + 1$
$2^4 = 16 = 3 \cdot 5 + 1$
$3^4 = 81 = 16 \cdot 5 + 1$
$4^4 = 256 = 51 \cdot 5 + 1$
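If you'd rather let a computer do the arithmetic, a couple of lines of Python will check all four cases:

```python
# Check statement (1) for the prime p = 5: each a with 1 <= a < 5
# should satisfy a^(p-1) ≡ 1 (mod p).
p = 5
remainders = [a ** (p - 1) % p for a in range(1, p)]
print(remainders)  # every remainder should be 1
```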

Here’s a second variant of the theorem that looks slightly more general than the first:

If $p$ is a prime and $a$ is any integer not divisible by $p$, then $a^{p-1} \equiv 1 \pmod{p}$.

This looks more general because $a$ can be *any* integer not divisible by $p$, not just an integer between $1$ and $p-1$. As an example, let $p = 5$ and $a = 7$. Then $7^4 = 2401 = 480 \cdot 5 + 1 \equiv 1 \pmod{5}$.

We can see that (2) is more general than (1), since if $1 \leq a < p$ then it is certainly the case that $a$ is not divisible by $p$. Hence (2) implies (1). But actually, it turns out that (1) implies (2) as well!

Here’s a proof: let’s assume (1) and use it to show (2). In order to show (2), we have to show that $a^{p-1} \equiv 1 \pmod{p}$ whenever $p$ is prime and $a$ is any integer not divisible by $p$. So let $p$ be an arbitrary prime and $a$ an arbitrary integer not divisible by $p$. Then by the Euclidean division theorem, we can write $a$ in the form $a = qp + r$, where $q$ is the quotient when dividing $a$ by $p$, and $0 \leq r < p$ is the remainder. $r$ can’t actually be $0$, since we assumed $a$ is not divisible by $p$. Hence $1 \leq r < p$, so (1) applies and we can conclude that $r^{p-1} \equiv 1 \pmod{p}$. But notice that $a \equiv r \pmod{p}$ (since $a$ is $r$ more than a multiple of $p$), and hence $a^{p-1} \equiv r^{p-1} \equiv 1 \pmod{p}$ as well.

So although (2) “looks” more general than (1), the two statements are in fact logically equivalent.

Here’s another version which seems to be yet more general, since it drops the restriction that $a$ can’t be divisible by $p$:

If $p$ is prime and $a$ is any integer, then $a^p \equiv a \pmod{p}$.

Notice, however, that the conclusion is different: $a^p \equiv a \pmod{p}$ rather than $a^{p-1} \equiv 1 \pmod{p}$.

As an example, let $p = 5$ and $a = 7$ again. Then $7^5 = 16807 \equiv 7 \equiv 2 \pmod{5}$, that is, the remainder of $16807$ when divided by $5$ is $2$, the same as the remainder of $7$. As another example, if $a = 10$, then $10^5 \equiv 10 \equiv 0 \pmod{5}$ since both are divisible by $5$.
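Again, a computer can quickly check this version for us; note that values of $a$ divisible by $5$ are fine now:

```python
# Check statement (3), a^p ≡ a (mod p), for p = 5 and a = 0..19,
# including values of a that are divisible by p.
p = 5
ok = all(pow(a, p, p) == a % p for a in range(20))
print(ok)
```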

Once again, although this seems more general, it turns out to be equivalent to (1) and (2).

First of all, to see that (2) implies (3), suppose $p$ is prime and $a$ is any integer. If $a$ is divisible by $p$, then $a \equiv 0 \pmod{p}$ and clearly $a^p \equiv 0 \equiv a \pmod{p}$. On the other hand, if $a$ is not divisible by $p$, then (2) applies and we may conclude that $a^{p-1} \equiv 1 \pmod{p}$; multiplying both sides of this congruence by $a$ yields $a^p \equiv a \pmod{p}$.

Now, to see that (3) implies (2), let $p$ be a prime and $a$ any integer not divisible by $p$. Then (3) says that $a^p \equiv a \pmod{p}$; we wish to show that $a^{p-1} \equiv 1 \pmod{p}$. However, since $a$ is not divisible by $p$ we know that $a$ has a multiplicative inverse modulo $p$, that is, there is some $a^{-1}$ such that $a \cdot a^{-1} \equiv 1 \pmod{p}$. (I have written about this fact before; it is a consequence of Bézout’s Identity.) If we take $a^p \equiv a \pmod{p}$ and multiply both sides by $a^{-1}$, we get to cancel one $a$ from each side, yielding $a^{p-1} \equiv 1 \pmod{p}$ as desired.
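We can watch this cancellation happen concretely in Python, which (since version 3.8) can compute modular inverses directly with the built-in pow:

```python
# Multiplying a^p ≡ a (mod p) by the inverse of a cancels one copy of a
# from each side, leaving a^(p-1) ≡ 1 (mod p).
p, a = 5, 7
a_inv = pow(a, -1, p)                 # multiplicative inverse of a mod p
check_inverse = (a * a_inv) % p       # a * a^(-1), should be 1
lhs = (pow(a, p, p) * a_inv) % p      # a^p * a^(-1) mod p
print(check_inverse, lhs, pow(a, p - 1, p))  # all equal 1
```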

The final form is the most general yet: it even drops the restriction that the modulus $n$ be prime.

If $n \geq 1$ and $a$ is any integer relatively prime to $n$, then $a^{\varphi(n)} \equiv 1 \pmod{n}$,

where $\varphi(n)$ is the Euler totient function, *i.e.* the number of positive integers less than $n$ which are relatively prime to $n$. For example, $\varphi(10) = 4$, since there are four positive integers less than $10$ which have no factors in common with $10$: namely, $1$, $3$, $7$, and $9$.

We can see that (4) implies (2), since when $p$ is prime, $\varphi(p) = p - 1$ (since *every* integer in $\{1, \dots, p-1\}$ is relatively prime to $p$). None of (1), (2), or (3) directly imply (4)—so it *is*, in fact, a bit more general—but we can generalize some of the proofs of these other facts to prove (4).
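Here's a small brute-force check of (4), using the most naive possible implementation of $\varphi$ (fine for small $n$, though there are much faster ways to compute it):

```python
from math import gcd

def phi(n):
    """Euler's totient: count the k with 1 <= k < n and gcd(k, n) = 1."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

assert phi(10) == 4        # namely 1, 3, 7, and 9
assert phi(7) == 6         # for a prime p, phi(p) = p - 1

# Statement (4): a^phi(n) ≡ 1 (mod n) whenever gcd(a, n) = 1.
for n in range(2, 30):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1
print("Euler's theorem verified for n up to 29")
```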


Yes, there is a new small human in my house! So I won’t be writing here regularly for the near future, but do hope to still write occasionally as the mood and opportunity strike.

Recently I realized that I really didn’t know much of anything about fast primality testing algorithms. Of course, I have written about the Lucas-Lehmer test, but that is a special-purpose algorithm for testing primality of numbers with a very special form. So I have learned about a few general-purpose primality tests, including the Rabin-Miller test and the Baillie-PSW test. It turns out they are really fascinating, and not as hard to understand as I was expecting. So I may spend some time writing about them here.

As a first step in that direction, here is (one version of) *Fermat’s Little Theorem (FLT)*:

Let $p$ be a prime and $a$ some positive integer not divisible by $p$. Then $a^{p-1} \equiv 1 \pmod{p}$, that is, $a^{p-1}$ is one more than a multiple of $p$.

Have you seen this theorem before? If not, play around with some small examples to see if you believe it and why you think it might be true. If you have seen it before, do you remember a proof? Or can you come up with one? (No peeking!) There are many beautiful proofs; I will write about a few.


…and this picture of primitive roots I made a year ago:

At first I didn’t see the connection, but Snowball was absolutely right. Once I understood it, I made this little animation to illustrate the connection more clearly:

(Some of the colors flicker a bit; I’m not sure why.)


I learned from Lucas A. Brown that this is sometimes known as “Euclid’s Orchard”. Imagine that there is a tall, straight tree growing from each grid point other than the origin. If you stand at the origin, then the trees you can see are exactly those at grid points $(x, y)$ with $\gcd(x, y) = 1$. This is because if a tree is at $(kx, ky)$ for some $k > 1$, then it is blocked from your sight by the tree at $(x, y)$: both lie exactly along the line from the origin with slope $y/x$. But if a tree is at some point with relatively prime coordinates $(x, y)$, then it will be the first thing you see when you look along the line with slope exactly $y/x$.
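The visibility test is easy to code up; here's a sketch that finds the visible trees in a small corner of the grid (ignoring the trees on the axes):

```python
from math import gcd

# Trees visible from the origin in the N x N corner of the first quadrant:
# exactly those at points (x, y) with gcd(x, y) = 1.
N = 5
visible = {(x, y) for x in range(1, N + 1) for y in range(1, N + 1)
           if gcd(x, y) == 1}
print(sorted(visible))
# e.g. (3, 5) is visible, but (2, 4) is hidden behind the tree at (1, 2)
```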

(…well, actually, all of the above is only really true if we assume the trees are infinitely skinny! Otherwise trees will end up blocking other trees which are almost, but not quite, in line with them. So try not to breathe while standing at the origin, OK? You might knock over some of the infinitely skinny trees.)

Here’s the portion of the grid surrounding the origin, with the lines of sight drawn in along with the trees you can’t see because they are exactly in line with some closer tree. (I’ve made the trees skinny enough so that they don’t accidentally block any other lines of sight—but if we expanded the grid we’d have to make the trees even skinnier.)

Now, what about the colors of the dots? Commenter Snowball guessed this correctly: each point $(x, y)$ is colored according to the number of steps the Euclidean algorithm needs to reach $1$. Darker colors correspond to more steps. It is interesting to note that there seems to be (eight symmetric copies of) one particularly dark radial stripe, indicated below:

In fact, the slope of this stripe is exactly $\varphi = (1+\sqrt{5})/2$, the golden ratio! This corresponds to the fact (first proved by Gabriel Lamé in 1844) that *consecutive Fibonacci numbers are worst-case inputs to the Euclidean algorithm*—that is, it takes more steps for the Euclidean algorithm to compute $\gcd(F_{n+1}, F_n)$ than for any other inputs of equal or smaller magnitude. Since the ratio of consecutive Fibonacci numbers tends to $\varphi$, the dots with the darkest color relative to their neighbors all lie approximately along the line with slope $\varphi$. What’s interesting to me is that lots of other dots that lie close to this line are also relatively dark. Why does this happen?
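To play with this yourself, here's a little function that counts Euclidean algorithm steps, along with a brute-force confirmation of Lamé's worst case for inputs up to $89 = F_{11}$:

```python
def euclid_steps(a, b):
    """Count the division steps the Euclidean algorithm takes on (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers 89 = F_11 and 55 = F_10 should be a
# worst case: no smaller pair needs more steps.
fib_steps = euclid_steps(89, 55)
worst = max(euclid_steps(a, b) for a in range(2, 90) for b in range(1, a))
print(fib_steps, worst)
```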


What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

However, the solution depended on having the clever idea to add $(1-\sqrt{2})^{500}$. But there are other ways to come to similar conclusions, and in fact this is not the way I originally solved it.

The first thing I did when attacking the problem was to work out some small powers of $1+\sqrt{2}$ by hand:

$(1+\sqrt{2})^1 = 1 + \sqrt{2}$
$(1+\sqrt{2})^2 = 1 + 2\sqrt{2} + 2 = 3 + 2\sqrt{2}$
$(1+\sqrt{2})^3 = (1+\sqrt{2})(3 + 2\sqrt{2}) = 3 + 3\sqrt{2} + 2\sqrt{2} + 4 = 7 + 5\sqrt{2}$
$(1+\sqrt{2})^4 = (1+\sqrt{2})(7 + 5\sqrt{2}) = 7 + 7\sqrt{2} + 5\sqrt{2} + 10 = 17 + 12\sqrt{2}$

and so on. It quickly becomes clear (if you have not already seen this kind of thing before) that $(1+\sqrt{2})^n$ will always be of the form $a + b\sqrt{2}$ for integers $a$ and $b$. Let’s define $a_n$ and $b_n$ to be the coefficients of the $n$th power of $1+\sqrt{2}$, that is, $(1+\sqrt{2})^n = a_n + b_n\sqrt{2}$. Now the natural question is to wonder what, if anything, can we say about the coefficients $a_n$ and $b_n$? Quite a lot, as it turns out!

We can start by working out what happens when we multiply $a_n + b_n\sqrt{2}$ by another copy of $1+\sqrt{2}$:

$(a_n + b_n\sqrt{2})(1 + \sqrt{2}) = a_n + a_n\sqrt{2} + b_n\sqrt{2} + 2b_n = (a_n + 2b_n) + (a_n + b_n)\sqrt{2}.$

But $(1+\sqrt{2})^{n+1} = a_{n+1} + b_{n+1}\sqrt{2}$ by definition, so this means that $a_{n+1} = a_n + 2b_n$ and $b_{n+1} = a_n + b_n$. As for base cases, we also know that $(1+\sqrt{2})^1 = 1 + 1 \cdot \sqrt{2}$, so $a_1 = b_1 = 1$. From this point it is easy to quickly make a table of some of the values of $a_n$ and $b_n$:

$\begin{array}{c|cc} n & a_n & b_n \\ \hline 1 & 1 & 1 \\ 2 & 3 & 2 \\ 3 & 7 & 5 \\ 4 & 17 & 12 \\ 5 & 41 & 29 \\ 6 & 99 & 70 \end{array}$

Each entry in the $b_n$ column is the sum of the $a_n$ and $b_n$ from the previous row; each $a_n$ is the sum of the previous $a_n$ and twice the previous $b_n$. You might enjoy playing around with these sequences to see if you notice any patterns. It turns out that there is an equivalent way to define the $a_n$ and $b_n$ separately, such that each $a_n$ only depends on previous values of $a$, and likewise each $b_n$ only depends on previous values of $b$. I’ll explain how to do that next time, but leave it as a challenge for you in the meantime!
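In code, the recurrence is only a few lines; here's a sketch that regenerates the table above:

```python
# Coefficients with (1 + sqrt(2))^n = a_n + b_n * sqrt(2), via the
# recurrence a_{n+1} = a_n + 2*b_n, b_{n+1} = a_n + b_n, and a_1 = b_1 = 1.
def coeffs(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = a + 2 * b, a + b
    return a, b

for n in range(1, 7):
    print(n, *coeffs(n))
```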


What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

In my previous post, we computed $(1+\sqrt{2})^n$ for some small $n$ and conjectured that the answer is $9$, since these powers seem to be alternately just under and just over an integer. Today, I’ll explain a clever solution, which I learned from Colin Wright (several commenters also posted similar approaches).

First, let’s think about expanding $(1+\sqrt{2})^n$ using the Binomial Theorem:

$\displaystyle (1+\sqrt{2})^n = \sum_{k=0}^n \binom{n}{k} (\sqrt{2})^k.$

We get a sum of powers of $\sqrt{2}$ with various coefficients. Notice that when $\sqrt{2}$ is raised to an *even* power, we get an integer: $(\sqrt{2})^2 = 2$, $(\sqrt{2})^4 = 4$, and so on. The odd powers give us irrational things. So if we could find some way to “cancel out” the odd, irrational powers, we would be left with a sum of a bunch of integers.

Here is where we can pull a clever trick: consider $(1-\sqrt{2})^n$. If we expand it by the Binomial Theorem, we find

$\displaystyle (1-\sqrt{2})^n = \sum_{k=0}^n \binom{n}{k} (-\sqrt{2})^k = \sum_{k=0}^n (-1)^k \binom{n}{k} (\sqrt{2})^k,$

but this is the same as the expansion of $(1+\sqrt{2})^n$, with alternating signs: the odd terms—which are exactly the irrational ones—are negative, and the even terms are positive. So if we add these two expressions, the odd terms will cancel out, leaving us with two copies of all the even terms:

$\displaystyle (1+\sqrt{2})^n + (1-\sqrt{2})^n = 2 \sum_{\substack{0 \leq k \leq n \\ k \text{ even}}} \binom{n}{k} (\sqrt{2})^k.$

For now, we don’t care about the value of the sum on the right—the important thing to note is that it is an integer, since it is a sum of integers $\binom{n}{k}$ multiplied by *even* powers of $\sqrt{2}$, which are just powers of two.
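In fact, we can compute that integer exactly and compare it against a direct evaluation; here's a sketch using exact integer arithmetic for the even-term sum (math.comb requires Python 3.8+):

```python
from math import comb, sqrt

# Twice the sum of the even binomial terms, using (sqrt(2))^k = 2^(k//2)
# for even k; this is an exact integer.
def even_term_sum(n):
    return 2 * sum(comb(n, k) * 2 ** (k // 2) for k in range(0, n + 1, 2))

n = 10
S = even_term_sum(n)
# Compare with a floating-point evaluation of the left-hand side:
approx = (1 + sqrt(2)) ** n + (1 - sqrt(2)) ** n
print(S, approx)  # S is an integer, and approx is extremely close to it
```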

We are almost done. Notice that $\sqrt{2} \approx 1.414$, so $1 - \sqrt{2} \approx -0.414$. Since this has an absolute value less than $1$, its powers will get increasingly close to zero; since it is negative, its powers will alternate between being positive and negative. Hence,

$(1+\sqrt{2})^n + (1-\sqrt{2})^n$ is an integer, and $(1-\sqrt{2})^n$ is very small, so $(1+\sqrt{2})^n$ must be very close to that integer. When $n$ is even, $(1-\sqrt{2})^n$ is positive, so $(1+\sqrt{2})^n$ must be slightly less than an integer; conversely, when $n$ is odd we conclude that $(1+\sqrt{2})^n$ is slightly greater than an integer.

To complete the solution to this particular problem, we have to make sure that $(1-\sqrt{2})^{500}$ is *small enough* that we can say for sure the 99th digit after the decimal point of $(1+\sqrt{2})^{500}$ is still 9. That is, we need to prove that, say, $(\sqrt{2}-1)^{500} < 10^{-100}$. This will be true if we can show that $\sqrt{2}-1 < 10^{-1/5}$ (just raise both sides to the $500$th power), and in turn, taking the base 10 logarithm of both sides, this will be true if $\log_{10}(\sqrt{2}-1) < -1/5$. At this point we can simply confirm by computation that $\log_{10}(\sqrt{2}-1) \approx -0.38278 < -1/5$. The fact that we get $-0.38278$, with $500 \cdot 0.38278 \approx 191.4$, means that not just 99, but actually the first 191 digits after the decimal point of $(1+\sqrt{2})^{500}$ are 9. (It turns out that the 192nd digit is a $5$.)
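As a sanity check, we can read off the digit directly with Python's decimal module, carrying plenty of precision:

```python
from decimal import Decimal, getcontext

# (1 + sqrt(2))^500 has about 192 digits before the decimal point, so
# 500 significant digits leave plenty of room to read off digit 99.
getcontext().prec = 500
x = (1 + Decimal(2).sqrt()) ** 500
fractional = x % 1
digit_99 = int(fractional * 10 ** 99) % 10
print(digit_99)  # 9
```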

The rabbit hole goes much deeper than this, however!


What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

Let’s play around with this a bit and see if we notice any patterns. First, $1+\sqrt{2}$ itself is approximately $2.41421356\ldots$,

so its powers are going to get large. Let’s use a computer to find the first ten or so:

$(1+\sqrt{2})^1 \approx 2.41421$
$(1+\sqrt{2})^2 \approx 5.82843$
$(1+\sqrt{2})^3 \approx 14.07107$
$(1+\sqrt{2})^4 \approx 33.97056$
$(1+\sqrt{2})^5 \approx 82.01219$
$(1+\sqrt{2})^6 \approx 197.99495$
$(1+\sqrt{2})^7 \approx 478.00209$
$(1+\sqrt{2})^8 \approx 1153.99913$
$(1+\sqrt{2})^9 \approx 2786.00036$
$(1+\sqrt{2})^{10} \approx 6725.99985$
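For instance, here's a sketch of such a computation using Python's decimal module (plain floats run out of precision pretty quickly):

```python
from decimal import Decimal, getcontext

# Compute the first ten powers of 1 + sqrt(2) with 30 significant digits.
getcontext().prec = 30
base = 1 + Decimal(2).sqrt()
powers = [base ** n for n in range(1, 11)]
for n, x in zip(range(1, 11), powers):
    print(n, x)
```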

Sure enough, these are getting big (the tenth power is already bigger than $6000$), but look what’s happening to the part after the decimal: curiously it seems that the powers of $1+\sqrt{2}$ are getting rather close to being integers! For example, $(1+\sqrt{2})^{10} \approx 6725.99985$ is just under $6726$, only about $0.00015$ away.

At this point, I had seen enough to notice and conjecture the following patterns (and I hope you have too):

- The powers of $1+\sqrt{2}$ seem to be getting closer and closer to integers.
- In particular, they seem to alternate between being just *under* an integer (for even powers) and just *over* an integer (for odd powers).

If this is true, the decimal expansion of $(1+\sqrt{2})^{500}$ must be of the form $N.999\ldots9\ldots$ for some big integer $N$ and some number of $9$s after the decimal point. And it seems reasonable that if Colin is posing this question, it must have more than 99 nines, which means the answer would be 9.

But *why* does this happen? Do the powers really keep alternating between being just over and just under an integer? And how close do they get—how do we know for sure that $(1+\sqrt{2})^{500}$ is close enough to an integer that the 99th digit will be a 9? This is what I want to explore in a series of future posts—and as should come as no surprise it will take us on a tour of some fascinating mathematics!


What’s the 99th digit to the right of the decimal point in the decimal expansion of $(1+\sqrt{2})^{500}$?

Of course, it’s simple enough to use a computer to find the answer; any language or software system that can compute with arbitrary-precision real numbers can find the correct answer in a fraction of a second. But that’s obviously not the point! Can we use logical reasoning to *deduce* or *prove* the correct answer, without doing lots of computation? Even if we find the answer computationally, can we explain *why* it is the right answer? Solving this puzzle took me down a fascinating rabbit hole that I’d like to share with you over the next post or three or eight.

For the moment I’ll just let you think about the puzzle. Although using a computer to simply compute the answer is cheating, I do encourage the use of a computer or calculator to try smaller examples and look for patterns. It is not too hard to see a pattern and conjecture the right answer; the interesting part, of course, is to figure out why this pattern happens, and to prove that it continues.
