In my previous post we proved that if $a$ shares a nontrivial common factor with $n$, then $a^{n-1} \not\equiv 1 \pmod n$, and this in turn proves that $n$ is not prime (by Fermat’s Little Theorem).
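Here’s a quick sanity check of that fact in Python (the small composite $n = 15$ is just an arbitrary example of mine):

```python
from math import gcd

n = 15  # a small composite: 3 * 5
for a in range(2, n):
    if gcd(a, n) > 1:
        # a shares a factor with n, so a^(n-1) mod n can never equal 1
        assert pow(a, n - 1, n) != 1
print("every a sharing a factor with", n, "fails the Fermat test")
```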
But wait a minute, this is silly: if $a$ shares a common factor with $n$, then we don’t need anything as complex as Fermat’s Little Theorem to figure out that $n$ is not prime! All we have to do is compute $\gcd(a,n)$ using the Euclidean Algorithm, and when we find out that the result isn’t $1$, we can immediately conclude that $n$ isn’t prime since it has a nontrivial divisor.
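The Euclidean Algorithm really is that simple; here’s a minimal sketch in Python (equivalent to the built-in math.gcd):

```python
def euclid_gcd(a, n):
    """Compute gcd(a, n) via the Euclidean Algorithm."""
    while n != 0:
        a, n = n, a % n
    return a

print(euclid_gcd(12, 57))  # 3, so 57 has the nontrivial divisor 3: not prime
```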
So for comparison, let’s consider this (terrible!) primality test, call it the GCD test: given an $n$ we want to test, pick values of $a$ such that $0 < a < n$ and compute $\gcd(a,n)$ for each one. If we find an $a$ for which $\gcd(a,n) \neq 1$, then $n$ is not prime.
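In code, the GCD test might look something like this (a sketch of my own; the function name and the default number of trials are arbitrary):

```python
import random
from math import gcd

def gcd_test(n, trials=20):
    """The (terrible!) GCD test: 'composite' or 'probably prime'."""
    for _ in range(trials):
        a = random.randrange(2, n)   # pick a random a with 1 < a < n
        if gcd(a, n) != 1:
            return "composite"       # gcd(a, n) is a nontrivial divisor of n
    return "probably prime"

print(gcd_test(91))  # 91 = 7 * 13, but this may still say "probably prime"
```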
This is terrible for several reasons. First, if we want this test to tell us for certain when $n$ is prime, we essentially have to try all the possible values of $a$. We can optimize a bit by only trying $a \leq \sqrt n$ (if $n$ has any nontrivial divisors, we will be sure to find some that are at most $\sqrt n$), but this doesn’t help all that much in the grand scheme of things. In fact, this GCD test is actually almost the same thing as trial division, where we try a bunch of different numbers to see whether they evenly divide $n$. Both are essentially trying to find divisors of $n$ by brute-force search.
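For comparison, here is trial division in the same style (again just a sketch, and deterministic, unlike the randomized GCD test above):

```python
def trial_division(n):
    """Deterministic primality check: try all divisors up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a nontrivial divisor of n
        d += 1
    return True

print(trial_division(91))  # False: 91 = 7 * 13
```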
So suppose instead that we are willing to live with some uncertainty, and we just try some fixed number of values for $a$, and either conclude with certainty that $n$ is composite (if we happen to find an $a$ that shares a nontrivial factor with $n$), or report that it is probably prime. How bad can this be? Put another way, how many values of $a$ do we have to try so we can be “reasonably certain” that $n$ really is prime when the test says it is?
The answer is that it can be very bad indeed. Euler’s totient function $\varphi(n)$ counts the number of integers $1 \leq a \leq n$ which are relatively prime to $n$. So if $n$ is composite and we pick $a$ uniformly at random, there are $\varphi(n)$ choices which won’t reveal the fact that $n$ is composite, and approximately $n - \varphi(n)$ choices which do share a common factor with $n$ and hence do reveal the fact that it is composite.
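We can see this split concretely with a naive computation of $\varphi$ (fine for small $n$; a serious implementation would factor $n$ instead):

```python
from math import gcd

def phi(n):
    """Euler's totient, computed naively by counting coprime residues."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

n = 15
print(phi(n))      # 8 choices of a that reveal nothing
print(n - phi(n))  # 7 choices that expose a common factor with n
```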
So the question is, how big can $\varphi(n)$ be, relative to $n$? We would like it to be small (which would leave us with many opportunities to learn that $n$ is composite), but in fact it can be quite big. For example, if $n = pq$ is a product of two distinct primes, then $\varphi(n) = (p-1)(q-1)$; taking $n = 101 \cdot 103 = 10403$ gives $\varphi(n) = 100 \cdot 102 = 10200$, which is not much smaller than $n$ itself. This means that about $98\%$ of the numbers less than $n$ share no factors in common with $n$; only about $2\%$ of them share a factor. In fact, $\varphi(n)$ is big precisely when $n$ has just a few large prime factors, which intuitively is exactly when $n$ is most difficult to factor. For such $n$ there really is no acceptable number of $a$’s we can test with the GCD test in order to be reasonably sure that $n$ is prime: in the case of $n = 10403$, for example, each $a$ we randomly pick has only about a $2\%$ chance of sharing a common factor with $n$, and this can get arbitrarily bad as $n$ gets larger.
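A quick random experiment (my own illustration) confirms just how rarely a random $a$ helps in this case:

```python
import random
from math import gcd

n = 101 * 103  # 10403, with phi(n) = 10200
trials = 100_000
hits = sum(1 for _ in range(trials)
           if gcd(random.randrange(1, n), n) != 1)
print(hits / trials)  # roughly 0.02: only ~2% of random a's reveal a factor
```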
So remember, we started down this path by showing that if we use the Fermat primality test, values of $a$ which share a common factor with $n$ will definitely reveal the compositeness of $n$. But now we know that if we rely on only such values of $a$, not only do we have a very small chance of discovering composite $n$ (no better than just using trial division), but more than that, actually using the Fermat test itself would be silly, since we should just use the simpler GCD test instead!
So if the Fermat primality test is worthwhile at all, it must be because there are other values of $a$, which don’t share any common factors with $n$, but nonetheless still reveal the fact that $n$ is composite since $a^{n-1} \not\equiv 1 \pmod n$. So how many of those values of $a$ are there? Stay tuned!
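As a teaser, nothing stops us from going hunting for such values computationally; here is a little experiment of mine for the small composite $n = 15$:

```python
from math import gcd

n = 15  # composite
# values of a coprime to n that nonetheless fail the Fermat test
witnesses = [a for a in range(2, n)
             if gcd(a, n) == 1 and pow(a, n - 1, n) != 1]
print(witnesses)  # [2, 7, 8, 13]
```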