Fermat witnesses and liars (some words on PWW #24)

Let n be a positive integer we want to test for primality, and suppose a is some other positive integer with a < n. There are then four possibilities:

  • a and n could share a common factor. In this case we can find the common factor using the Euclidean Algorithm, and n is definitely not prime. (See my previous post.)

  • On the other hand, a and n might not share a prime factor, that is, they might be relatively prime (or put yet another way, their GCD might be 1). This case breaks down into three subcases:

    • a^{n-1} \not \equiv 1 \pmod n. In this case we know by (the contrapositive of) Fermat’s Little Theorem that n is definitely composite (although we don’t learn anything about its factorization); a is called a Fermat witness for n.
    • n is prime. In this case, Fermat’s Little Theorem tells us that a^{n-1} must be equivalent to 1 modulo n. However, computing a^{n-1} doesn’t necessarily tell us much, because the next case is also possible:
    • a^{n-1} \equiv 1 \pmod n but n is composite. In this case a is called a Fermat liar for n.

The question becomes: for a given composite n, how many Fermat liars and Fermat witnesses could there be? How many values of a do we have to test to be “reasonably sure” whether n is prime or composite?
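To make the four cases concrete, here is a small Python sketch (the function name `classify` and the naive trial-division primality check are my own, purely for illustration):

```python
from math import gcd

def classify(a, n):
    """Which of the four cases does the pair (a, n) fall into?"""
    if gcd(a, n) != 1:
        return "common factor"                      # a and n share a factor
    if pow(a, n - 1, n) != 1:                       # modular exponentiation
        return "Fermat witness"                     # n is definitely composite
    is_prime = n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
    return "prime" if is_prime else "Fermat liar"

print(classify(4, 6))     # -> common factor
print(classify(5, 6))     # -> Fermat witness
print(classify(2, 7))     # -> prime
print(classify(2, 341))   # -> Fermat liar (341 = 11 * 31 is composite)
```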

Each pixel in the image from my previous post represents a particular (a,n) pair. Each row represents a value of n, with the top row corresponding to n = 2 and the bottom row to n = 600. Each column represents a value of a. Of course we only consider pairs (a,n) with a \leq n, which explains the triangular shape. (I include the case a = n to complete the triangle, although these are not interesting from a primality testing point of view.)

The four cases outlined above correspond to the four colors:

  • If a and n share a common factor, the pixel is colored yellow.
  • If a is a Fermat witness for n, the pixel is green.
  • If n is prime, the pixel is blue.
  • If a is a Fermat liar for n, the pixel is red.

Here’s a much smaller sample of the same visualization so we can see more clearly what’s going on.

Primes of course yield blue stripes. Non-primes are stripes of yellow, green, and sometimes red. Testing a particular n to see whether it is prime corresponds to picking random squares from its row: if we hit a yellow or green square, we learn immediately that n is composite. If we hit a red square or a blue square, we aren’t sure. So far, however, things don’t look too bad: there are only a few isolated red squares, so picking two or three random values should be enough to ensure that we either find out n is composite, or can safely assume that it is prime. Let’s continue the picture a bit farther; here are the first 100 values of n:

I’ve also included some purple bars to the left of each row, showing what proportion of the numbers relatively prime to n are Fermat liars. So far, none of the purple bars extend beyond the halfway mark (the first vertical line corresponds to half, and the second, slightly darker vertical line corresponds to 1). It turns out there’s a good reason for this:

Theorem: if n is composite and there exists at least one Fermat witness for n, then at least half of the numbers relatively prime to n are Fermat witnesses.
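We can at least check the theorem empirically. The following Python sketch (helper name mine) counts Fermat liars among the residues coprime to each composite n below 100 and confirms that, whenever a witness exists, liars make up at most half:

```python
from math import gcd

def liar_fraction(n):
    """Count Fermat liars among the a coprime to n with 1 <= a < n."""
    coprime = [a for a in range(1, n) if gcd(a, n) == 1]
    liars = [a for a in coprime if pow(a, n - 1, n) == 1]
    return len(liars), len(coprime)

for n in range(4, 100):
    if any(n % d == 0 for d in range(2, n)):     # n is composite
        liars, total = liar_fraction(n)
        if liars < total:                        # at least one Fermat witness exists
            assert 2 * liars <= total            # then liars are at most half
```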

Some questions for you:

  1. Can you prove this? The proof is not hard given the right idea.
  2. Suppose that for every composite n, at least half the relatively prime values of a are Fermat witnesses. If we pick k random values of a < n and find that a^{n-1} \equiv 1 \pmod n for all of them, what can we say about the probability that n is prime?
  3. The theorem has that pesky condition, “if there exists at least one Fermat witness for n”… do you think there could be composite values of n with no Fermat witnesses? (Hint: go back and look at the picture from my previous post…)

Post without words #24


The Fermat primality test and the GCD test

In my previous post we proved that if a shares a nontrivial common factor with n, then a^{n-1} \not\equiv 1 \pmod n, and this in turn proves that n is not prime (by Fermat’s Little Theorem).

But wait a minute, this is silly: if a shares a common factor with n, then we don’t need anything as complex as Fermat’s Little Theorem to figure out that n is not prime! All we have to do is compute \gcd(a,n) using the Euclidean Algorithm, and when we find out that the result isn’t 1, we can immediately conclude that n isn’t prime since it has a nontrivial divisor.

So for comparison, let’s consider this (terrible!) primality test, call it the GCD test: given an n we want to test, pick values of a such that 1 < a < n and compute \gcd(a,n) for each one. If we find an a for which \gcd(a,n) \neq 1, then n is not prime.

This is terrible for several reasons. First, if we want this test to tell us for certain when n is prime, we essentially have to try all the possible values of a. We can optimize a bit by only trying 1 < a \leq \sqrt{n}—if n has any nontrivial divisors, at least one of them must be at most \sqrt{n}—but this doesn’t help all that much in the grand scheme of things. In fact, this GCD test is actually almost the same thing as trial division, where we try a bunch of different numbers to see whether they evenly divide n. Both are essentially trying to find divisors of n by brute-force search.
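In code, the GCD test might look like the following sketch (the function name and the choice of k are mine; Python’s `math.gcd` runs the Euclidean Algorithm for us):

```python
from math import gcd
import random

def gcd_test(n, k=20):
    """The (terrible!) GCD test: try k random values of a with 1 < a < n;
    a nontrivial gcd proves n composite, otherwise we learn very little."""
    for _ in range(k):
        a = random.randrange(2, n)
        if gcd(a, n) != 1:        # found a nontrivial divisor of n
            return "composite"
    return "probably prime"

# For n = 961 = 31^2, each random a has only a ~3% chance of exposing n.
print(gcd_test(961, k=5))
```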

So suppose instead that we are willing to live with some uncertainty, and we just try some fixed number of values for a, and either conclude with certainty that n is composite (if we happen to find an a that shares a nontrivial factor with n), or report that it is probably prime. How bad can this be—or put another way, how many values of a do we have to try so we can be “reasonably certain” that n really is prime when the test says it is?

The answer is that it can be very bad indeed. Euler’s totient function \varphi(n) counts the number of integers \leq n which are relatively prime to n. So if n is composite and we pick 1 < a < n uniformly at random, there are \varphi(n) choices which won’t reveal the fact that n is composite, and approximately n - \varphi(n) choices which do share a common factor with n and hence do reveal the fact that it is composite.

So the question is, how big can \varphi(n) be, relative to n? We would like it to be small—which would leave us with many opportunities to learn that n is composite—but in fact it can be quite big. For example, \varphi(961) = \varphi(31^2) = 31\cdot 30 = 930, which is not much smaller than 961 itself. This means that 930 of the numbers less than 961 share no factors in common with 961; only 30 of them share a factor. In fact, \varphi(n) is big precisely when n has just a few large prime factors, which intuitively is exactly when n is most difficult to factor. For such n there really is no acceptable number of a’s we can test with the GCD test in order to be reasonably sure that n is prime—in the case of 961, for example, each a we randomly pick has only a 30/960 \approx 3\% chance of sharing a common factor with n; and this can get arbitrarily bad as n gets larger.
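The value \varphi(961) = 930 is easy to check by machine. Here is a small Python sketch (helper name mine) computing the totient by trial-division factorization, removing a factor of 1/p for each prime p dividing n:

```python
def phi(n):
    """Euler's totient, via trial-division factorization (fine for small n)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p       # result *= (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                           # leftover prime factor
        result -= result // m
    return result

print(phi(961))   # -> 930
```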

So remember we started down this path by showing that if we use the Fermat primality test, values of a which share a common factor with n will definitely reveal the compositeness of n. But now we know that if we rely on only such values of a, not only do we have a very small chance of discovering composite n—no better than just using trial division—but more than that, actually using the Fermat test itself would be silly, since we should just use the simpler GCD test instead!

So if the Fermat primality test is worthwhile at all, it must be because there are other values of a, which don’t share any common factors with n, but nonetheless still reveal the fact that n is composite since a^{n-1} \not\equiv 1 \pmod n. So how many of those values of a are there? Stay tuned!


Making the Fermat primality test deterministic

Let’s recall Fermat’s Little Theorem:

If p is prime and a is an integer where 0 < a < p, then a^{p-1} \equiv 1 \pmod p.

Recall that we can turn this directly into a test for primality, called the Fermat primality test, as follows: given some number n that we want to test for primality, pick an integer a between 0 and n (say, at random), and compute a^{n-1} \pmod n. If the result is not equal to 1, then Fermat’s Little Theorem tells us that n is definitely not prime. Repeat some fixed number of times k. If we get 1 every time, then report that n is probably prime.
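As a concrete sketch (the function name and choice of k are mine), the test might be implemented like this, using Python’s built-in three-argument pow for fast modular exponentiation:

```python
import random

def fermat_test(n, k=20):
    """Fermat primality test: a 'composite' answer is certain,
    a 'probably prime' answer is not."""
    if n in (2, 3):
        return "probably prime"
    if n < 2 or n % 2 == 0:
        return "composite"
    for _ in range(k):
        a = random.randrange(2, n - 1)       # random a with 1 < a < n-1
        if pow(a, n - 1, n) != 1:            # a is a Fermat witness
            return "composite"
    return "probably prime"

print(fermat_test(97))   # -> probably prime
print(fermat_test(9))    # -> composite
```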

Making Fermat Deterministic

This “probably prime” business is a little annoying. Can we somehow turn this into a deterministic primality test? Well, for one thing, instead of just picking k random values, what if we tested all 0 < a < n? (Yes, of course this would take way too long, but bear with me for a bit!) If they all yield 1 \pmod n when raised to the (n-1) power, could n still possibly be composite, or could we conclude it is definitely prime? Put another way, are there any composite numbers n such that a^{n-1} \equiv 1 \pmod n for all 0 < a < n?

It turns out that this would work: there do not exist composite numbers n such that a^{n-1} \equiv 1 \pmod n for all 0 < a < n, so if we test all possible values of a and we always get 1, then we can conclude with certainty that n is prime. Let’s prove it!

Suppose n is composite, and let 0 < a < n be some number which shares a nontrivial common divisor g with n, that is, g = \gcd(a,n) \neq 1. If n is composite then such an a must exist; for example, we could just take a to be one of n’s prime divisors. Now, I claim that a^{n-1} can’t possibly be equivalent to 1 \pmod n. Let r be the remainder when dividing a^{n-1} by n, that is, a^{n-1} \equiv r \pmod n. Rearranging gives a^{n-1} - r \equiv 0 \pmod n, which means that a^{n-1} - r is divisible by n, that is, a^{n-1} - r = qn for some integer q. Rearranging this again, we get a^{n-1} - qn = r. But by our assumption, n and a are both divisible by g, and hence a^{n-1} - qn must be divisible by g—but that means r must be divisible by g as well. Since we assumed that a and n have a nontrivial common factor, that is, g \neq 1, we conclude that r \neq 1 too, that is, a^{n-1} \not\equiv 1 \pmod n.
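The argument is easy to spot-check numerically. The following Python sketch verifies, for every n < 200 and every a sharing a factor g with n, that the remainder r is divisible by g and hence cannot be 1:

```python
from math import gcd

# Spot-check the argument: whenever g = gcd(a, n) > 1, the remainder
# r = a^(n-1) mod n is divisible by g, and therefore r != 1.
for n in range(4, 200):
    for a in range(2, n):
        g = gcd(a, n)
        if g > 1:
            r = pow(a, n - 1, n)
            assert r % g == 0       # g divides both a^(n-1) and qn, hence r
            assert r != 1           # since g > 1 divides r
print("checked all n < 200")
```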

So we now know that any number a which shares a common factor with n will definitely reveal the fact that n is composite. How many (or how few) such a could there be? And what about other values of a, which don’t share a factor with n—how many of them might also reveal the fact that n is composite? We’ll tackle these questions in my next post!


The wizard’s rational puzzle (solutions, part 2)

At long last, here is the solution I had in mind for the Wizard’s rational puzzle. Recall that the goal is to figure out the numerator and denominator of a secret rational number, if all we are allowed to do is reciprocate, add, subtract, multiply, and compare.

  • Make sure the rational is greater than 1 (reciprocating it if necessary).
  • Repeatedly subtract 1 from it until it becomes less than 1.
  • At this point if it is equal to 0 you can stop; otherwise, reciprocate it (making the result greater than 1 again) and repeat the whole process.

Given the exact sequence of operations you did to reduce the rational to zero, you can easily reverse them to reconstruct the original rational. For example, suppose you had to subtract 1 three times, then reciprocate, then subtract 1 five times, reciprocate, then subtract 1 two more times. This gives us the equation

\displaystyle\frac{1}{\frac{1}{r - 3} - 5} - 2 = 0

or, inverting everything to solve for r,

\displaystyle r = 3 + \frac{1}{5 + \frac{1}{2}} = \frac{35}{11}.

This clearly gives the right answer; the only question is whether it will stop after a finite amount of time. But it does, since every time we subtract 1 we are making the numerator smaller without changing the denominator, and reciprocating just switches the numerator and denominator without making them bigger. Also, if we had a number less than 1 and reciprocate it, the result will definitely be bigger than 1, at which point we can subtract 1 from it at least once, and hence we cannot get stuck reciprocating repeatedly without doing any subtracting. Since we have two positive integers which are either staying the same or getting smaller on each step, the process must eventually stop.
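Here is a Python sketch of the whole procedure (function names mine), using Fraction for exact rational arithmetic. `reduce_to_zero` records how many times we subtract 1 at each stage—exactly the continued-fraction terms of r—and `reconstruct` inverts the process:

```python
from fractions import Fraction

def reduce_to_zero(r):
    """Repeatedly subtract 1 until the value drops below 1, then reciprocate;
    record the subtraction counts (the continued-fraction terms of r)."""
    counts = []
    while r != 0:
        c = 0
        while r >= 1:
            r -= 1
            c += 1
        counts.append(c)
        if r != 0:
            r = 1 / r
    return counts

def reconstruct(counts):
    """Reverse the process to recover the original rational."""
    r = Fraction(0)
    for c in reversed(counts):
        if r != 0:
            r = 1 / r
        r += c
    return r

counts = reduce_to_zero(Fraction(35, 11))
print(counts)                # -> [3, 5, 2]
print(reconstruct(counts))   # -> 35/11
```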

There are several equivalent ways to think about what is going on here:

  • One way to think about this—made clear by the expression for r in the example above—is that we are building up the continued fraction expansion for r.

  • Another way to think about it is that we are running the Euclidean Algorithm on r’s numerator and denominator. Of course the Euclidean Algorithm is typically used to find the greatest common divisor of two numbers, and if we imagine r to be in lowest terms then of course the gcd of its numerator and denominator will be 1, so finding the gcd doesn’t tell us anything in and of itself. The point is, however, that the exact sequence of steps taken by the Euclidean Algorithm is unique for each relatively prime pair (p,q), and so we can use the sequence of steps to reverse-engineer what p and q must have been in the first place.

  • Yet another way, closely related to the previous, is to imagine that we are finding our way up the Calkin-Wilf tree step by step. Recall that the Calkin-Wilf tree is the infinite binary tree of positive rationals where the root is 1 and each node a/b has a/(a+b) as its left child and (a+b)/b as its right child. Each rational appears at a unique location in the tree, so the sequence of upwards steps taken from r to the root allows us to reconstruct the original r.

    More specifically, if we have a number bigger than 1, it must be of the form (a+b)/b for some a and b, i.e. it is a right child in the Calkin-Wilf tree. Subtracting one from (a+b)/b yields a/b, so it corresponds to taking one step up and to the left in the Calkin-Wilf tree. When we reach a number less than 1, it means that node is a left child, so we cannot take another step up and to the left. Reciprocating corresponds to mirroring the entire tree left-right, and then we can continue taking steps up and to the left.

So how long does this take? Moving up one level in the Calkin-Wilf tree takes at worst 4 operations: one subtraction, two comparisons, and a reciprocate. (We have to do two comparisons because when we find out that a number is less than one, we have to also check if it is greater than 0 to see whether we should reciprocate or stop.) On the other hand, the best case takes only two operations (a subtraction and a comparison, if the result is still greater than one). The worst case for a given depth in the C-W tree would be if we start with the ratio of two consecutive Fibonacci numbers, since we would have to reciprocate after doing only one subtraction at every single step (try it!).

There’s also another fun thing we can do to speed things up. Instead of just subtracting 1 repeatedly, we can use a sort of binary search instead. That is, we can first create cubes containing powers of two (in practice we can just create them on the fly as needed). Then given a number r > 1, we first find the smallest power of two such that r - 2^k is less than 1 (by computing r - 1, then r - 2, then r - 4, then r - 8, … and checking each time until we find the first 2^k such that r - 2^k < 1), and then we do a binary search on the interval [r - 2^k, r - 2^{k-1}] to find the smallest m such that r - m < 1 (in practice this just means adding or subtracting the next smaller power of two at each step, depending on whether the previous step was less than or greater than 1). Not counting the operations needed to create the cubes with the powers of 2 in the first place (since we can reuse them many times, and in any case it takes only one operation per power of two), this would take about 2 \log_2 r addition and subtraction operations. One might worry that this would be slightly slower for small values of r; however, I think (but have not fully worked out the details) that this will actually never require more operations than the naive subtraction method; I will leave this as an exercise. Of course, for larger r this is clearly going to be a big win, since 2 \log_2 r is much smaller than r.
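A sketch of this doubling-plus-binary-search idea in Python (names mine; Fraction stands in for the wizard’s cubes, and only subtraction and comparison are applied to r):

```python
from fractions import Fraction

def reduce_below_one(r):
    """Given r > 1, find the smallest m with r - m < 1 using only
    subtraction and comparison; return (r - m, m)."""
    # Phase 1: doubling search for the smallest power of two p with r - p < 1.
    p = 1
    while r - p >= 1:
        p *= 2
    # Phase 2: binary search between p//2 (too small) and p (big enough).
    lo, hi = p // 2, p
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2
        if r - mid < 1:
            hi = mid
        else:
            lo = mid
    return r - hi, hi

print(reduce_below_one(Fraction(35, 11)))   # -> (Fraction(2, 11), 3)
```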

Of course, if the wizard had provided a machine that could perform a “floor” operation, we could make this even more efficient: instead of subtracting 1 until finding a result less than 1, we could just compute r - \lfloor r \rfloor. This is like being able to jump as far up and to the left as possible in the Calkin-Wilf tree using only two operations. (Unsurprisingly, the floor function plays a key role in the algorithm for generating the Calkin-Wilf sequence.) I actually had this in the original version of the puzzle, but took it out when I realized that it was not necessary, and slightly more interesting to do without!

Several commenters mentioned using the Stern-Brocot tree to search for the secret rational. It’s probably a topic for another blog post, but briefly, the idea is to keep track of four integers p_1, q_1, p_2, and q_2 representing the rational numbers p_1/q_1 and p_2/q_2. We start with p_1 = 0 and q_1 = 1 (representing 0/1 = 0) and p_2 = 1, q_2 = 0 (1/0, representing “infinity”). We maintain the invariant that p_1/q_1 < r < p_2/q_2, that is, we maintain p_1/q_1 and p_2/q_2 as the endpoints of an interval that contains the secret rational r. At each step we compute the mediant a/b = (p_1 + p_2)/(q_1 + q_2), which is guaranteed to lie in between p_1/q_1 and p_2/q_2 (exercise: prove this!), and check whether it is equal to the secret rational. If not, we either set p_1/q_1 = a/b or p_2/q_2 = a/b, depending on whether the secret rational is greater or less than a/b, respectively. Unlike a simple binary search (which can only find rationals with a denominator that is a power of two in finite time), this is guaranteed to terminate in a finite amount of time; every rational can be obtained after a finite number of successive steps of taking the mediant.
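A sketch of the mediant-based search in Python (names mine):

```python
from fractions import Fraction

def stern_brocot_search(secret):
    """Find a positive rational by repeatedly taking mediants, narrowing
    the interval (p1/q1, p2/q2) that contains it."""
    p1, q1 = 0, 1            # 0/1 = 0
    p2, q2 = 1, 0            # 1/0, representing "infinity"
    while True:
        a, b = p1 + p2, q1 + q2          # the mediant
        m = Fraction(a, b)
        if m == secret:
            return a, b
        elif m < secret:                 # secret lies in (m, p2/q2)
            p1, q1 = a, b
        else:                            # secret lies in (p1/q1, m)
            p2, q2 = a, b

print(stern_brocot_search(Fraction(35, 11)))   # -> (35, 11)
```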

So how long does it take? It turns out that the Stern-Brocot tree and the Calkin-Wilf tree have all the same rationals on each level, but in a different order¹, so the two methods are easy to compare. The proposed Stern-Brocot method moves down the tree one level each step, and it needs five operations for each step (two additions to form the mediant, one division to turn the pair of integers representing the mediant into a rational number, and then two comparisons to find out whether the mediant is less than, equal to, or greater than the target number). Unless there is a more clever way to do this that I missed, it seems my method is a clear win: for a rational on level k of the Stern-Brocot or Calkin-Wilf trees, the iterated mediant algorithm always needs exactly 5k operations, whereas for my algorithm 4k is only a worst case (for a ratio of consecutive Fibonacci numbers), but in many cases it does better (much better for rationals with large entries in their continued fraction expansion, which we can skip past in logarithmic time).

  1. Each is a bit-reversal permutation of the other.


Post without words #23


The wizard’s rational puzzle (solutions, part 1)

About two and a half months ago I posted a challenge involving a sadistic math wizard, metal cubes containing rational numbers, and a room full of strange machines. I’ve been remiss in following up with some solutions. (Go read the first post and think about it before reading on if you haven’t already!)

I had fun creating an elaborate setup to frame the puzzle, but as you probably figured out, really the puzzle comes down to this: is it possible to figure out the numerator and denominator of an unknown positive rational number, if you are only allowed to take reciprocals, add, subtract, multiply, and test whether one number is less than another?

There are many ways to solve this. First, a few preliminary ideas which many commenters picked up on:

  • The reciprocal of the reciprocal of x is x again—that is, 1/(1/x) = x—so we can use the reciprocator machine as a copying machine, in order to create multiple cubes with the same value.
  • We can also multiply any x by its reciprocal to create x \cdot 1/x = 1, that is, we can make a cube which we know contains the number 1. We can also make a cube containing 0 by making a copy of any number x and then subtracting it from itself.
  • We can also use the reciprocator machine, in conjunction with multiplication, to implement division: to compute x/y we reciprocate y and then multiply the result by x, since x/y = x \cdot 1/y.
  • We can use copies of the number 1, along with operations like addition and multiplication, to create a cube containing any positive integer we want. (We can also use subtraction to create any negative integer.) We can then divide such integers to create any rational number we want.
  • We can test whether any two cubes contain equal numbers by putting them into the less-than comparator twice, once in each order. If the machine says “true” one way and “false” the other way, then one number is really less than the other. If it says “false” both ways, then the numbers must be equal, since this is the only situation in which both x < y and y < x are false.¹
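These preliminaries are easy to express in code. In the following Python sketch (all names mine), the five lambdas stand in for the wizard’s machines, and everything else is built strictly from them:

```python
from fractions import Fraction

# The wizard's machines (hypothetical stand-ins): the only allowed primitives.
recip = lambda x: 1 / x
add   = lambda x, y: x + y
sub   = lambda x, y: x - y
mul   = lambda x, y: x * y
lt    = lambda x, y: x < y

def copy(x):             # 1/(1/x) == x, so reciprocating twice copies a cube
    return recip(recip(x))

def one(x):              # x * (1/x) == 1
    return mul(x, recip(x))

def zero(x):             # x - x == 0
    return sub(copy(x), x)

def div(x, y):           # x / y == x * (1/y)
    return mul(x, recip(y))

def eq(x, y):            # equal iff neither is less than the other
    return not lt(x, y) and not lt(y, x)

# Build 2/3 out of copies of 1 and check it, using only the primitives:
r = Fraction(3, 7)
print(eq(div(one(r) + one(r), one(r) + one(r) + one(r)), Fraction(2, 3)))  # -> True
```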

Given these preliminaries, the simplest method, put forth originally by Eric Burgess, is to methodically list all the positive rational numbers (creating each one by building up its numerator and denominator out of copies of the number 1) and test each one to see whether it is equal to the wizard’s number. Since the rational numbers are countable, we can list them in such a way that every rational number appears at a finite position. This means that no matter what the wizard’s number is, after some finite amount of time we will encounter it in our list. (One very nice such list we might consider is the Calkin-Wilf order 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, 4/3, …)

However, this could of course take a long time. For example, suppose the wizard has chosen 1397/5331. It turns out that this is the 4,285,192nd rational in the Calkin-Wilf order. Even if we can construct and test one rational number every 20 minutes (which actually seems overly optimistic, more on this later), it would still take us over 160 years (with no breaks for sleeping or eating, and did you notice the wizard didn’t seem to provide a bathroom for us to use?). So we have to ask: is there a faster way?
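That position can be verified by walking up the Calkin-Wilf tree: a rational a/b with a > b is the right child of (a-b)/b, and one with a < b is the left child of a/(b-a), so the path back to the root determines the level-order index (root = 1, left child of node i = 2i, right child = 2i+1). A Python sketch (names mine):

```python
from fractions import Fraction

def calkin_wilf_index(q):
    """1-based level-order index of the positive rational q in the
    Calkin-Wilf tree, where node a/b has children a/(a+b) and (a+b)/b."""
    a, b = q.numerator, q.denominator
    bits = []
    while (a, b) != (1, 1):
        if a > b:            # right child (a+b)/b: parent is (a-b)/b
            bits.append(1)
            a -= b
        else:                # left child a/(a+b): parent is a/(b-a)
            bits.append(0)
            b -= a
    index = 1
    for bit in reversed(bits):   # replay the path from root to leaf
        index = 2 * index + bit
    return index

print(calkin_wilf_index(Fraction(1397, 5331)))   # -> 4285192
```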

  1. If the machine says “true” both ways, then you should probably start thinking about deconstructing the machines and using the parts to build some kind of weapon or clever escape device.
