Fermat’s Little Theorem: proof by modular arithmetic

In a previous post I explained four (mostly) equivalent statements of Fermat’s Little Theorem (which I will abbreviate “FlT”—not “FLT” since that usually refers to Fermat’s Last Theorem, whose proof I am definitely not qualified to write about!).

Today I want to present the first proof of FlT. We’re going to prove statement (2), that is,

If p is a prime and a is any integer not divisible by p, then a^{p-1} \equiv 1 \pmod p.

We already saw that statements (1), (2), and (3) are logically equivalent, so proving (2) is sufficient to prove all of them. (As I’ll show in a future post, we can also generalize this proof to prove the corrected version of statement (4).)

So, suppose p is a prime, and a is any integer not divisible by p. Now consider the set of multiples of a up to (p-1)a:

\{a, 2a, 3a, \dots, (p-1)a \}.

However, we will consider not the multiples of a themselves but their remainders when divided by p. As an example, suppose p = 7 and a = 3. Then we want to look at multiples of a: 3, 6, 9, 12, 15, 18—and then take their remainders \pmod 7. As you can check, this yields the set

\{3, 6, 2, 5, 1, 4\}.

As another example, suppose p = 13 and a = 20. Then the multiples of a, considered \pmod {13}, are 20 \equiv 7, 40 \equiv 1, 60 \equiv 8, and so on, ultimately yielding the set of remainders (which you can again check):

\{7,1,8,2,9,3,10,4,11,5,12,6\}
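
If you want to check these remainder sets yourself, here is a minimal Python sketch (the helper name is just for illustration, not anything from this post):

```python
# List the remainders of a, 2a, ..., (p-1)a when divided by p.
def remainders_of_multiples(a, p):
    return [(k * a) % p for k in range(1, p)]

print(remainders_of_multiples(3, 7))    # [3, 6, 2, 5, 1, 4]
print(remainders_of_multiples(20, 13))  # [7, 1, 8, 2, 9, 3, 10, 4, 11, 5, 12, 6]
```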

Have you noticed anything about the previous examples? It looks like every possible remainder (other than 0) shows up exactly once (though obviously not in order). Will this always be true?

The fact that 0 doesn’t show up is no mystery: we specified that a is not divisible by p, and in that case none of a, 2a, \dots, (p-1)a will be divisible by p either, so none of them have a remainder of 0. But why would their remainders all be different?

Suppose there are two multiples of a, say, ja and ka, which have the same remainder when divided by p. We can write this as ja \equiv ka \pmod p. Subtracting ka from both sides and factoring out a, we find that a(j-k) \equiv 0 \pmod p, that is, a(j-k) is divisible by p. Well, when a prime divides a product of two things, it must divide one or the other (or both). But we already assumed a is not divisible by p. Hence p must evenly divide j-k. But j and k both lie between 1 and p-1, so their difference must lie strictly between -p and p. The only multiple of p strictly between -p and p is zero, so j - k = 0, that is, j = k. So the only way to have ja \equiv ka \pmod p is if j = k. Put the other way around, we’ve shown that if j \neq k then ja and ka don’t have the same remainder \pmod p. So this proves that all the multiples of a from a up to (p-1)a have different remainders when divided by p.

Finally, since there are exactly p-1 multiples of a in our set, and p-1 possible nonzero remainders \pmod p, and all the remainders have to be different, we conclude that each remainder shows up exactly once.
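
Here is a quick brute-force confirmation of this claim for some small primes, purely as a sanity check (the trial-division primality test below is just for the purposes of this sketch):

```python
# For each small prime p and each a not divisible by p, check that the
# remainders of a, 2a, ..., (p-1)a mod p are exactly {1, 2, ..., p-1}.
def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))

for p in (n for n in range(2, 50) if is_prime(n)):
    for a in range(1, 200):
        if a % p != 0:
            assert sorted((k * a) % p for k in range(1, p)) == list(range(1, p))

print("Every nonzero remainder shows up exactly once in all cases checked.")
```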

So what? Here comes the clever trick: what happens if we take all those multiples of a and multiply them all together, and then take the remainder \pmod p? Since taking remainders commutes with multiplication (that is, xy \equiv (x \bmod p)(y \bmod p) \pmod p), this is the same as if we first take their remainders and then multiply those. But we already know that the remainders will contain each number from 1 to (p-1) exactly once—and if we’re multiplying them then the order doesn’t matter. So,

a \cdot 2a \cdot 3a \cdot \dots \cdot (p-1)a \equiv 1 \cdot 2 \cdot 3 \cdot \dots \cdot (p-1) \pmod p,

that is, the product of all the multiples of a has the same remainder as the factorial of (p-1) when divided by p. For example, looking at the example of p = 7 and a = 3 again, the product of the multiples of a is 3 \cdot 6 \cdot 9 \cdot 12 \cdot 15 \cdot 18 = 524880, whereas 6! = 720; both have a remainder of 6 when divided by p = 7.
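
A quick numeric check of this example (assuming Python 3.8 or later, for math.prod):

```python
from math import factorial, prod

p, a = 7, 3
lhs = prod(k * a for k in range(1, p))  # 3 * 6 * 9 * 12 * 15 * 18 = 524880
rhs = factorial(p - 1)                  # 6! = 720
print(lhs % p, rhs % p)                 # 6 6 -- the same remainder mod 7
```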

Now, we can factor the (p-1) copies of a out of the left side, and we are left with

a^{p-1} (p-1)! \equiv (p-1)! \pmod p

Now we just want to cancel (p-1)! from both sides—though we have to be a little careful when dividing both sides of a modular equation. In general it’s only valid when the thing you want to divide by is relatively prime to the modulus (this same issue came up in my previous post). But that is indeed the case here: (p-1)! is not divisible by p (since p is prime and (p-1)! is the product of a bunch of things that are all smaller than p). So we are justified in dividing both sides by it, and this completes the proof:

a^{p-1} \equiv 1 \pmod p.
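
Here is a sketch of that last cancellation step in code, for the running example p = 7 and a = 3 (this assumes Python 3.8 or later, where pow(x, -1, p) computes a modular inverse; “dividing by (p-1)!” means multiplying both sides by that inverse):

```python
from math import factorial

p, a = 7, 3
fact = factorial(p - 1)            # (p-1)! = 720

lhs = pow(a, p - 1) * fact         # a^(p-1) * (p-1)!
rhs = fact                         # (p-1)!
inv = pow(fact, -1, p)             # exists because gcd((p-1)!, p) = 1

print((lhs * inv) % p, (rhs * inv) % p)   # 1 1, i.e. a^(p-1) ≡ 1 (mod p)
```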


Four formats for Fermat: correction!

In my previous post I explained three variants on Fermat’s Little Theorem, as well as a fourth, slightly more general variant, which it turns out is often called Euler’s Totient Theorem. Here’s what I said:

If n \geq 1 and a is any integer, then a^{\varphi(n)} \equiv 1 \pmod n.

However, this is wrong! We can easily find a counterexample. For example, let n = 6 and a = 3. Then \varphi(6) = 2, since there are only two numbers less than 6 that are relatively prime to it (namely, 1 and 5; each of 2, 3, and 4 shares a common factor with 6). But then a^{\varphi(n)} = 3^2 = 9, which has a remainder of 3, not 1, when divided by 6.

What’s wrong? It turns out I missed a small but important restriction: this is only true when a is relatively prime to n. Note this is stronger than saying a isn’t divisible by n; it says they cannot share any common factors at all. For example, 9 is not divisible by 6, but they share 3 as a common factor, so this theorem does not apply (and indeed, 9^2 \equiv 3 \pmod 6, not 1).

For completeness, here’s a corrected statement of the theorem (I have also fixed the previous post):

If n \geq 1 and a is any integer relatively prime to n, then a^{\varphi(n)} \equiv 1 \pmod n.

So what’s an example where this theorem does work? Let’s keep n = 6 but try a = 5 instead: now a^{\varphi(6)} = 5^2 = 25 \equiv 1 \pmod 6.
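
For anyone who wants to replay this correction numerically, here is a small sketch (the naive totient function below, written as a gcd count, is just for illustration):

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 6
print(phi(n))             # 2
print(pow(3, phi(n), n))  # 3 -- a = 3 shares a factor with 6, so the theorem doesn't apply
print(pow(5, phi(n), n))  # 1 -- a = 5 is relatively prime to 6, so it does
```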

In my next post I’ll present a proof of Fermat’s Little Theorem.


Four formats for Fermat

In my previous post I mentioned Fermat’s Little Theorem, a beautiful, fundamental result in number theory that underlies lots of things like public-key cryptography and primality testing. (It’s called “little” to distinguish it from his (in)famous Last Theorem.) There are several different forms in which it is commonly presented, so I wanted to start by introducing them and showing how they are related.

Statement 1

Let’s start with the statement that looks the least general:

If p is prime and a is an integer where 0 < a < p, then a^{p-1} \equiv 1 \pmod p.

(Recall that x \equiv y \pmod p means that x and y have the same remainder when you divide them by p.) For example, 7 is prime, and we can check that for each a \in \{1, \dots, 6\}, if you raise a to the 6th power, you get a number which is one more than a multiple of 7:

\displaystyle \begin{array}{ccrcr} 1^6 &=& 1 &=& 0 \cdot 7 + 1 \\[0.2em] 2^6 &=& 64 &=& 9 \cdot 7 + 1 \\[0.2em] 3^6 &=& 729 &=& 104 \cdot 7 + 1 \\[0.2em] 4^6 &=& 4096 &=& 585 \cdot 7 + 1 \\[0.2em] 5^6 &=& 15625 &=& 2232 \cdot 7 + 1 \\[0.2em] 6^6 &=& 46656 &=& 6665 \cdot 7 + 1 \end{array}
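
If you’d like to regenerate this table yourself, a few lines of Python will do it:

```python
# For p = 7, write each a^6 as a multiple of 7 plus a remainder.
p = 7
for a in range(1, p):
    power = a ** (p - 1)
    q, r = divmod(power, p)
    print(f"{a}^6 = {power} = {q} * 7 + {r}")   # r comes out 1 every time
```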

Statement 2

Here’s a second variant of the theorem that looks slightly more general than the first:

If p is a prime and a is any integer not divisible by p, then a^{p-1} \equiv 1 \pmod p.

This looks more general because a can be any integer not divisible by p, not just an integer between 0 and p. As an example, let a = 10. Then 10^6 = 1000000 = 7 \cdot 142857 + 1.

We can see that (2) is more general than (1), since if 0 < a < p then it is certainly the case that a is not divisible by p. Hence (2) implies (1). But actually, it turns out that (1) implies (2) as well!

Here’s a proof: let’s assume (1) and use it to show (2). In order to show (2), we have to show that a^{p-1} \equiv 1 \pmod p whenever p is prime and a is any integer not divisible by p. So let p be an arbitrary prime and a an arbitrary integer not divisible by p. Then by the Euclidean division theorem, we can write a in the form a = qp + b, where q is the quotient when dividing a by p, and 0 \leq b < p is the remainder. b can’t actually be 0, since we assumed a is not divisible by p. Hence 0 < b < p, so (1) applies and we can conclude that b^{p-1} \equiv 1 \pmod p. But notice that a \equiv b \pmod p (since a is b more than a multiple of p), and hence a^{p-1} \equiv b^{p-1} \equiv 1 \pmod p as well.
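
Here is the reduction from this proof played out numerically for a = 10 and p = 7 (just an illustration of the argument, not part of it):

```python
a, p = 10, 7
q, b = divmod(a, p)          # a = q*p + b, here 10 = 1*7 + 3
print(q, b)                  # 1 3
print(pow(b, p - 1, p))      # 1 -- statement (1) applied to b
print(pow(a, p - 1, p))      # 1 -- hence statement (2) holds for a
```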

So although (2) “looks” more general than (1), the two statements are in fact logically equivalent.

Statement 3

Here’s another version which seems to be yet more general, since it drops the restriction that a can’t be divisible by p:

If p is prime and a is any integer, then a^p \equiv a \pmod p.

Notice, however, that the conclusion is different: a^p \equiv a \pmod p rather than a^{p-1} \equiv 1 \pmod p.

As an example, let p = 7 and a = 10 again. Then 10^7 = 10000000 = 1428570 \cdot 7 + 10, that is, the remainder of 10^7 when divided by 7 is 10. As another example, if a = 14, then 14^7 = 105413504 \equiv 14 \pmod 7 since both are divisible by 7.

Once again, although this seems more general, it turns out to be equivalent to (1) and (2).

First of all, to see that (2) implies (3), suppose p is prime and a any integer. If a is divisible by p, then a \equiv 0 \pmod p and clearly a^p \equiv 0 \equiv a \pmod p. On the other hand, if a is not divisible by p, then (2) applies and we may conclude that a^{p-1} \equiv 1 \pmod p; multiplying both sides of this equation by a yields a^p \equiv a \pmod p.
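
A one-loop check of statement (3) for p = 7, including values of a that are divisible by p (where both sides are \equiv 0):

```python
p = 7
for a in range(0, 30):
    assert pow(a, p, p) == a % p   # a^p and a leave the same remainder mod p
print("a^p ≡ a (mod 7) holds for a = 0, ..., 29")
```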

Now, to see that (3) implies (2), let p be a prime and a any integer not divisible by p. Then (3) says that a^p \equiv a \pmod p; we wish to show that a^{p-1} \equiv 1 \pmod p. However, since a is not divisible by p we know that a has a multiplicative inverse \pmod p, that is, there is some b such that ab \equiv 1 \pmod p. (I have written about this fact before; it is a consequence of Bézout’s Identity.) If we take a^p \equiv a \pmod p and multiply both sides by b, we get to cancel one a from each side, yielding a^{p-1} \equiv 1 \pmod p as desired.
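
The multiplicative inverse in this argument comes from Bézout’s Identity, which the extended Euclidean algorithm computes; here is one standard way to write it (this particular implementation is just a sketch, not something from the earlier post being referenced):

```python
def egcd(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

a, p = 10, 7
g, x, _ = egcd(a, p)
b = x % p                  # the inverse of a mod p
print(g, b, (a * b) % p)   # 1 5 1 -- indeed 10 * 5 = 50 ≡ 1 (mod 7)
```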

Statement 4

The final form is the most general yet: it even drops the restriction that p be prime.

If n \geq 1 and a is any integer relatively prime to n, then a^{\varphi(n)} \equiv 1 \pmod n.

[The first version of this post accidentally omitted the phrase “relatively prime to n” from the above statement, rendering it false; here’s my later post explaining and correcting the error.]

\varphi(n) is the Euler totient function, i.e. the number of positive integers less than n which are relatively prime to n. For example, \varphi(12) = 4 since there are four positive integers less than 12 which have no factors in common with 12: namely, 1, 5, 7, and 11.
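
For instance, a one-liner confirms the \varphi(12) = 4 example:

```python
from math import gcd

print([k for k in range(1, 12) if gcd(k, 12) == 1])  # [1, 5, 7, 11], so phi(12) = 4
```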

We can see that (4) implies (2), since when n = p is prime, \varphi(p) = p-1 (since every integer in \{1, \dots, p-1\} is relatively prime to p). None of (1), (2), or (3) directly imply (4)—so it is, in fact, a bit more general—but we can generalize some of the proofs of these other facts to prove (4).


New baby, and primality testing

I have neglected writing on this blog for a while, and here is why:

Yes, there is a new small human in my house! So I won’t be writing here regularly for the near future, but do hope to still write occasionally as the mood and opportunity strike.

Recently I realized that I really didn’t know much of anything about fast primality testing algorithms. Of course, I have written about the Lucas-Lehmer test, but that is a special-purpose algorithm for testing primality of numbers with a very special form. So I have learned about a few general-purpose primality tests, including the Rabin-Miller test and the Baillie-PSW test. It turns out they are really fascinating, and not as hard to understand as I was expecting. So I may spend some time writing about them here.

As a first step in that direction, here is (one version of) Fermat’s Little Theorem (FLT):

Let p be a prime and a some positive integer not divisible by p. Then a^{p-1} \equiv 1 \pmod p, that is, a^{p-1} is one more than a multiple of p.

Have you seen this theorem before? If not, play around with some small examples to see if you believe it and why you think it might be true. If you have seen it before, do you remember a proof? Or can you come up with one? (No peeking!) There are many beautiful proofs; I will write about a few.
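
If you want a quick way to experiment, here is one possible starting point in Python:

```python
# Tabulate a^(p-1) mod p for a few small primes p and each a from 1 to p-1.
for p in [2, 3, 5, 7, 11, 13]:
    print(p, [pow(a, p - 1, p) for a in range(1, p)])  # every entry comes out 1
```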


From primitive roots to Euclid’s orchard

Commenter Snowball pointed out the similarity between Euclid’s Orchard

…and this picture of primitive roots I made a year ago:

At first I didn’t see the connection, but Snowball was absolutely right. Once I understood it, I made this little animation to illustrate the connection more clearly:

(Some of the colors flicker a bit; I’m not sure why.)


A few words about PWW #20

A couple commenters quickly figured out what my previous post without words was about. The dots making up the image are at integer grid points (m,n), with the center at (0,0). There is a dot at (m,n) if and only if m and n are relatively prime, that is, \gcd(m,n) = 1. Here is a slightly smaller version so it’s easier to see what is going on:

I learned from Lucas A. Brown that this is sometimes known as “Euclid’s Orchard”. Imagine that there is a tall, straight tree growing from each grid point other than the origin. If you stand at the origin, then the trees you can see are exactly those at grid points (m,n) with \gcd(m,n) = 1. This is because if a tree is at (dm,dn) for some d > 1, then it is blocked from your sight by the tree at (m,n): both lie exactly along the line from the origin with slope n/m. But if a tree is at some point with relatively prime coordinates (m,n), then it will be the first thing you see when you look along the line with slope exactly n/m.

(…well, actually, all of the above is only really true if we assume the trees are infinitely skinny! Otherwise trees will end up blocking other trees which are almost, but not quite, in line with them. So try not to breathe while standing at the origin, OK? You might knock over some of the infinitely skinny trees.)
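
Here is a tiny sketch that lists which (infinitely skinny!) trees are visible in a small window around the origin, using the gcd criterion above (the window size is arbitrary):

```python
from math import gcd

N = 4  # look at grid points with -N <= m, n <= N
visible = [(m, n) for m in range(-N, N + 1) for n in range(-N, N + 1)
           if (m, n) != (0, 0) and gcd(abs(m), abs(n)) == 1]
print(len(visible))  # 48 visible trees in this 9 x 9 window
```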

Here’s the 9 \times 9 portion of the grid surrounding the origin, with the lines of sight drawn in along with the trees you can’t see because they are exactly in line with some closer tree. (I’ve made the trees skinny enough so that they don’t accidentally block any other lines of sight—but if we expanded the grid we’d have to make the trees even skinnier.)

Now, what about the colors of the dots? Commenter Snowball guessed this correctly: each point is colored according to the number of steps the Euclidean algorithm needs to compute \gcd(m,n) = 1. Darker colors correspond to more steps. It is interesting to note that there seems to be (eight symmetric copies of) one particularly dark radial stripe, indicated below:

In fact, the slope of this stripe is exactly \varphi = (1 + \sqrt 5)/2! This corresponds to the fact (first proved by Gabriel Lamé in 1844) that consecutive Fibonacci numbers are worst-case inputs to the Euclidean algorithm—that is, it takes more steps for the Euclidean algorithm to compute \gcd(F_{n+1}, F_n) = 1 than for any other inputs of equal or smaller magnitude. Since the ratio of consecutive Fibonacci numbers tends to \varphi, the dots with the darkest color relative to their neighbors all lie approximately along the line with slope \varphi. What’s interesting to me is that lots of other dots that lie close to this line are also relatively dark. Why does this happen?
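
One way to poke at this numerically is to compute the step counts directly; here is a small brute-force sketch (purely illustrative) that finds the worst-case inputs up to 100:

```python
# Count the division steps the Euclidean algorithm takes on (m, n).
def euclid_steps(m, n):
    steps = 0
    while n:
        m, n = n, m % n
        steps += 1
    return steps

# Among all pairs with 1 <= n <= m <= 100, which need the most steps?
pairs = [(m, n) for m in range(1, 101) for n in range(1, m + 1)]
worst = max(euclid_steps(m, n) for m, n in pairs)
print(worst, [(m, n) for m, n in pairs if euclid_steps(m, n) == worst])
# 9 [(89, 55)] -- consecutive Fibonacci numbers, as Lamé's result predicts
```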


Post without words #20
