Möbius inversion

In my last post we saw that \mu \ast \mathbf{1} = \varepsilon, that is, the Möbius function \mu is the inverse of \mathbf{1} with respect to Dirichlet convolution. This directly leads to an interesting principle called Möbius inversion.

Möbius inversion. Suppose g(n) is defined for n \geq 1 as the sum of another function f over all the divisors of n:

\displaystyle g(n) = \sum_{d \mid n} f(d).

Then we can “invert” this relationship to express f as a sum over g:

\displaystyle f(n) = \sum_{d \mid n} \mu(d) g(n/d).

The above is the “traditional” way of stating the principle, but I like to write it in this equivalent, more symmetric way:

\displaystyle f(n) = \sum_{ab = n} \mu(a) g(b)

where, as usual, the sum is over all factorizations of n into a product ab. This is the same as the previous equation, because ab = n if and only if a \mid n, in which case b = n/a. But now it becomes more clear that we are dealing with a Dirichlet convolution. Can you see how to prove this?
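In fact, we can check the inversion principle numerically. Here is a minimal sketch in Python (the helper names mu, divisors, and f_inverted are my own, and f is just an arbitrary test function; mu is implemented by simple trial division):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    # Mobius function: 0 if n has a repeated prime factor,
    # otherwise (-1)^k where k is the number of distinct prime factors
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # d^2 divided the original n
                return 0
            k += 1
        d += 1
    if n > 1:                # one prime factor bigger than sqrt(n) remains
        k += 1
    return (-1) ** k

def f(n):
    return n * n             # any arithmetic function works here

def g(n):
    # g(n) = sum of f over the divisors of n
    return sum(f(d) for d in divisors(n))

def f_inverted(n):
    # Mobius inversion: f(n) = sum over d | n of mu(d) * g(n/d)
    return sum(mu(d) * g(n // d) for d in divisors(n))

assert all(f_inverted(n) == f(n) for n in range(1, 300))
```

Swapping in any other f (it need not be multiplicative, or even integer-valued) leaves the check passing, which is the whole point: inversion recovers f from its divisor sums.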

Proof. If we think in terms of Dirichlet convolution, the proof is almost trivial. Note first that saying g(n) = \sum_{d \mid n} f(d) is the same as saying g = \mathbf{1} \ast f (recall that taking the Dirichlet convolution with \mathbf{1} is the same as summing over all divisors). We can then take the Dirichlet convolution of both sides with \mu:

\displaystyle \begin{array}{rcl} \mu \ast g &=& \mu \ast (\mathbf{1} \ast f) \\ &=& (\mu \ast \mathbf{1}) \ast f \\ &=& \varepsilon \ast f \\ &=& f.\end{array}

In other words, Möbius inversion is really just saying that if g = \mathbf{1} \ast f, then \mu \ast g = f, because \mu is the inverse of \mathbf{1}.

Let’s try an example! Recall that \varphi denotes the Euler totient function: \varphi(n) counts how many positive integers less than or equal to n are relatively prime to n. For example, \varphi(5) = 4 since all the positive integers less than or equal to 5 (other than 5 itself) are relatively prime to it, and more generally \varphi(p) = p-1 for any prime p. As another example, \varphi(12) = 4 since the only positive integers between 1 and 11 which are relatively prime to 12 are 1, 5, 7, and 11. In a previous post we considered a visual proof that

\displaystyle \sum_{d \mid n} \varphi(d) = n:

For example, as seen in the picture above, 12 = \varphi(1) + \varphi(2) + \varphi(3) + \varphi(4) + \varphi(6) + \varphi(12) = 1 + 1 + 2 + 2 + 2 + 4.

Now, this is in the perfect format to apply Möbius inversion: the sum over all divisors of the function f(n) = \varphi(n) is equal to the function g(n) = n. So we conclude that

\displaystyle \varphi(n) = \sum_{d \mid n} \mu(d) (n/d)

or, factoring out n,

\displaystyle \varphi(n) = n \sum_{d \mid n} \frac{\mu(d)}{d}

which is interesting since it expresses \varphi(n) as a fraction of n.
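As a sanity check, here is a short Python sketch verifying this formula against a direct count (phi_direct, phi_by_inversion, and mu are my own helper names; Fraction keeps the sum of \mu(d)/d exact):

```python
from math import gcd
from fractions import Fraction

def mu(n):
    # Mobius function via trial division: 0 for a repeated prime
    # factor, otherwise (-1)^(number of distinct prime factors)
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    if n > 1:
        k += 1
    return (-1) ** k

def phi_direct(n):
    # count the integers in 1..n relatively prime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def phi_by_inversion(n):
    # phi(n) = n * sum over d | n of mu(d)/d
    total = sum(Fraction(mu(d), d) for d in range(1, n + 1) if n % d == 0)
    return int(n * total)

assert all(phi_by_inversion(n) == phi_direct(n) for n in range(1, 200))
assert phi_by_inversion(360) == 96
```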

Let’s use this to compute \varphi(360). We have

\displaystyle \varphi(360) = \sum_{d \mid 360} \mu(d) (360/d).

The prime factorization of 360 is 2^3 \cdot 3^2 \cdot 5. We know that \mu(d) will be 0 for any divisor with a repeated prime factor, so those terms contribute nothing to the sum; we only need to consider divisors which correspond to subsets of \{2,3,5\}. (n = 360 is the same example we used in our proof of the key property of the Möbius function.) So

\displaystyle \begin{array}{rcl} \varphi(360) &=& \mu(1) \cdot 360 \\ &+& \mu(2) \cdot 180 \\ &+& \mu(3) \cdot 120 \\ &+& \mu(5) \cdot 72 \\ &+& \mu(2\cdot 3) \cdot 60 \\ &+& \mu(2 \cdot 5) \cdot 36 \\ &+& \mu(3 \cdot 5) \cdot 24 \\ &+& \mu(2 \cdot 3 \cdot 5) \cdot 12 \end{array}

Remembering that \mu yields 1 for an even number of distinct prime factors and -1 for an odd number, we can evaluate this as

\displaystyle \varphi(360) = 360 - 180 - 120 - 72 + 60 + 36 + 24 - 12 = 96.

(and we can use e.g. a computer to verify that this is correct). Note how this works: we start with 360 and then subtract out all the numbers divisible by 2 (all 180 of them), as well as those divisible by 3 (120) and 5 (72). But now we have subtracted some numbers multiple times, e.g. those which are divisible by both 2 and 3. So we add back in the numbers which are divisible by 2 \cdot 3 (60 of them), by 2 \cdot 5 (36), and by 3 \cdot 5 (24). Finally, we’ve now overcounted numbers which are divisible by all three, so we subtract off the 12 numbers which are divisible by 2 \cdot 3 \cdot 5. Egad! This seems really similar to PIE! Indeed, this is no accident: it turns out that the Principle of Inclusion-Exclusion is really just another kind of Möbius inversion. But that’s a subject for another post!


Dirichlet convolution and the Möbius function

Recall from last time that the Dirichlet convolution of two functions f and g is written f \ast g and defined by:

\displaystyle (f \ast g)(n) = \sum_{ab = n} f(a) g(b)

where the sum is taken over all possible factorizations of n into a product ab of positive integers. Last time we saw that \ast is commutative and associative, and has \varepsilon(n) as an identity, where \varepsilon is the function which produces 1 for an input of 1 and 0 for all other inputs.

So what does the Möbius function have to do with this? Let’s start by considering a different function:

\displaystyle \mathbf{1}(n) = 1

is the function which ignores its input and always returns 1. Of course this is not the same thing as \varepsilon, and despite its name, \mathbf{1} is not an identity for Dirichlet convolution. That is, if we take some function f and find its Dirichlet convolution with \mathbf{1}, we don’t get f again—but we do get something interesting:

\displaystyle (\mathbf{1} \ast f)(n) = \sum_{ab=n} \mathbf{1}(a) f(b) = \sum_{ab=n} f(b) = \sum_{d \mid n} f(d)

The first step is the definition of \ast; the second step is the definition of \mathbf{1}; and the last step just notes that the sum of all f(b) where ab = n is the same as taking the sum of f(d) for all divisors of n.

That is, \mathbf{1} \ast f is the function which for a given n outputs not just f(n) but the sum of f over all divisors of n.
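We can watch this happen in code. Here is a small Python sketch (dirichlet and the test function f are my own names) checking that convolving with \mathbf{1} is the same as summing over divisors:

```python
def dirichlet(f, g):
    # (f * g)(n) = sum over factorizations n = a*b of f(a) g(b)
    def h(n):
        return sum(f(a) * g(n // a) for a in range(1, n + 1) if n % a == 0)
    return h

one = lambda n: 1        # the constant-1 function
f = lambda n: n * n      # an arbitrary test function

one_star_f = dirichlet(one, f)
divisor_sum = lambda n: sum(f(d) for d in range(1, n + 1) if n % d == 0)

assert all(one_star_f(n) == divisor_sum(n) for n in range(1, 200))
```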

We’re now ready to see how \mu enters the picture!

Claim: \mathbf{1} \ast \mu = \varepsilon.

That is, \mu is the inverse of \mathbf{1} with respect to Dirichlet convolution. In my next post we’ll see some cool implications of this; for now, let’s just prove it.

Proof. Here’s the proof. Ready?

\displaystyle (\mathbf{1} \ast \mu)(n) = \sum_{d \mid n} \mu(d) = \varepsilon(n).

Actually, there was hardly anything left to prove! The first equality is because of what we just showed about taking the Dirichlet convolution of \mathbf{1} with something else. And the second equality is a property of \mu we just finished proving in some previous posts.
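For the skeptical, the claim is also easy to check numerically. A quick Python sketch (mu implemented from scratch by trial division; one_star_mu and eps are made-up helper names):

```python
def mu(n):
    # 0 if n has a repeated prime factor, else (-1)^(distinct prime factors)
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    if n > 1:
        k += 1
    return (-1) ** k

def eps(n):
    return 1 if n == 1 else 0

def one_star_mu(n):
    # (1 * mu)(n) = sum of mu(d) over the divisors d of n
    return sum(mu(d) for d in range(1, n + 1) if n % d == 0)

assert all(one_star_mu(n) == eps(n) for n in range(1, 500))
```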


Dirichlet convolution

Let f and g be two functions defined on the positive integers. Then the Dirichlet convolution of f and g, written f \ast g, is another function on the positive integers, defined as follows:

\displaystyle (f \ast g)(n) = \sum_{ab = n} f(a) g(b)

The sum is taken over all possible factorizations of n into a product ab of positive integers. For example, suppose f(n) = n and g(n) = n^2. Then

\displaystyle \begin{array}{rcl}(f \ast g)(6) &=& \displaystyle \sum_{ab = 6} f(a) g(b) \\[1.5em] &=& f(1)g(6) + f(2)g(3) + f(3)g(2) + f(6)g(1) \\[1em] &=& 1 \cdot 6^2 + 2 \cdot 3^2 + 3 \cdot 2^2 + 6 \cdot 1^2 \\[1em] &=& 36 + 18 + 12 + 6 = 72.\end{array}
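If you want to experiment, here is a minimal Python sketch of the definition (the name dirichlet is mine, not a standard library function); it reproduces the computation above:

```python
def dirichlet(f, g):
    # (f * g)(n) = sum over all factorizations n = a*b of f(a) g(b);
    # each divisor a of n pairs with b = n // a
    def h(n):
        return sum(f(a) * g(n // a) for a in range(1, n + 1) if n % a == 0)
    return h

f = lambda n: n
g = lambda n: n * n

assert dirichlet(f, g)(6) == 72   # 36 + 18 + 12 + 6, as computed above
```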

At this point this may seem somewhat arbitrary, but over the next few posts we’ll see that it’s not arbitrary at all—this operation is deeply connected with the Möbius function \mu and hence with primes and factorization.

So what properties does \ast have?

  • It’s commutative, that is, f \ast g = g \ast f. This follows directly from the commutativity of multiplication:

    \displaystyle (f \ast g)(n) = \sum_{ab = n} f(a) g(b) = \sum_{ba = n} g(b) f(a) = (g \ast f)(n).

  • It’s also associative, that is, f \ast (g \ast h) = (f \ast g) \ast h. This also follows from the associativity of multiplication, and distributivity of multiplication over addition. The proof is kind of fiddly but not really difficult:

    \displaystyle \begin{array}{rcl} (f \ast (g \ast h))(n) &=& \displaystyle \sum_{ab = n} f(a) (g \ast h)(b) \\[1.5em] &=& \displaystyle \sum_{ab = n} f(a) \left( \sum_{cd = b} g(c) h(d) \right) \\[1.5em] &=& \displaystyle \sum_{ab = n} \sum_{cd = b} f(a) \left( g(c) h(d) \right) \\[1.5em] &=& \displaystyle \sum_{a(cd) = n} f(a) (g(c) h(d)) \\[1.5em] &=& \displaystyle \sum_{(ac)d = n} (f(a) g(c)) h(d) \\[1.5em] &=& \displaystyle \sum_{ed = n} \sum_{ac=e} (f(a) g(c)) h(d) \\[1.5em] &=& \displaystyle \sum_{ed = n} \left(\sum_{ac=e} f(a)g(c)\right) h(d) \\[1.5em] &=& \displaystyle \sum_{ed = n} (f \ast g)(e) h(d) \\[1.5em] &=& ((f \ast g) \ast h)(n) \end{array}

  • Define the function \varepsilon as follows:

    \displaystyle \varepsilon(n) = \begin{cases} 1 & \text{if } n=1 \\ 0 & \text{otherwise.} \end{cases}

    Then I claim that \varepsilon is an identity element for \ast, that is, for any function f we have f \ast \varepsilon = \varepsilon \ast f = f. By definition,

    \displaystyle (f \ast \varepsilon)(n) = \sum_{ab = n}f(a) \varepsilon(b)

    But \varepsilon(b) = 0 for every b other than 1, which cancels out most of the terms of the sum. The only term that does not become zero is f(n) \varepsilon(1) = f(n). So the right-hand sum reduces to just f(n), that is, f \ast \varepsilon = f. Since we already know \ast is commutative this proves that \varepsilon \ast f = f too.

So, to sum up[1], we have defined the Dirichlet convolution \ast, and shown that it is associative, commutative, and has \varepsilon as an identity element. That is, to use some fancy math words, we have shown that (\ast, \varepsilon) is a commutative monoid over the set of functions defined on the positive integers.[2] Next time we will see what this has to do with the Möbius function \mu.

  1. Haha.

  2. Technically we should also specify the codomain of the functions, but it doesn’t matter much: typically it is taken to be the complex numbers, but really anything with a commutative, associative multiplication that distributes over addition will do.


The Möbius function proof, part 2 (the subset parity lemma)

Continuing from my previous post, we are in the middle of proving that \mu satisfies the same equation as S, that is,

\displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1,\end{cases}

and that therefore \mu(n) = S(n) for all n \geq 1, that is, \mu(n) is the sum of all the nth primitive roots of unity.

We had gotten as far as the following lemma:

Subset parity lemma: any nonempty finite set has an equal number of even-sized and odd-sized subsets.

In fact, this lemma is the only remaining piece of the proof: if we can prove this then we will have a complete proof that S(n) = \mu(n). Just for fun, we’re actually going to consider two different proofs of this lemma today.

The first proof

The first proof will be by a combinatorial or counting argument, and was suggested in a comment by Andrey Mokhov. The idea is to match up the subsets in such a way that:

  • Each subset is matched with exactly one other subset, that is, the matching is 1-1.
  • Each even subset is matched with an odd subset, and vice versa.

Since the subsets are in pairs and each pair has one even and one odd subset, the inescapable conclusion will be that there are the same number of even and odd subsets. For example, suppose there are a bunch of people in a room and you tell them to get into pairs where each pair has one person with a hat and one person without a hat. If they are able to do this exactly, with no unpaired people left over, then you know (without actually having to count people or hats!) that there must be an equal number of hatted and hatless people.

Call the entire set T. Let’s first consider the case where T has an odd number of elements. How will we match up its subsets? That is, given some subset U \subseteq T, which other subset should U be matched with? A natural idea—which turns out to be the right one in this case—is to match U with its complement T - U, that is, the subset of elements from T which are not in U. Since the complement of U’s complement is U again (and also because U cannot be its own complement), this does indeed put the subsets of T into pairs. Also, if |U| is even, then |T - U| = |T| - |U| = \text{odd} - \text{even} = \text{odd} (and vice versa if |U| is odd). So each pair has one even and one odd subset.

A picture will probably help. Here’s an example where |T| = 5. As you can check, each row in the picture is a pair of subsets which are each other’s complement, with one even subset and one odd subset:

Now, what about if |T| is even? Matching U with T - U doesn’t work anymore, since they have the same parity—both are even or both are odd. But we can use a cute trick to fix things. Pick some particular element t \in T (note, this is the place in the proof where we depend on the assumption that T is nonempty!)—for example, we might pick the smallest element if they can be ordered, but it really doesn’t matter which one we pick. Previously, when complementing a subset U, we “flipped” the membership of each element of T: if x is an element of U, then x is not an element of T - U, and vice versa. Now we do the same, except that we don’t flip t: if t \in U we keep it; if t \not\in U we leave it out. But we still flip all the other elements. Here’s an example when |T| = 4:

Note how we match all the sets that don’t have the smallest (blue) element with each other, and likewise all the sets that do have the smallest element are matched with each other. But within each pair, all the elements other than the smallest one are flipped.

You can convince yourself that this does indeed match up all the subsets into pairs. Moreover, each pair has one even and one odd subset, because the sizes of the paired sets add up to an odd number: n-1 if they don’t contain t, or n + 1 if they do. QED.

And that’s it for the first proof! We’ve shown that any nonempty finite set has the same number of even and odd subsets, because we can always match them up perfectly.
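Both matchings can even be expressed uniformly in code: complementing (the odd case) and flipping everything except t (the even case) are each just a symmetric difference with a fixed, odd-sized “flip set”. Here is a Python sketch (check_pairing is a made-up helper name) that verifies the pairing really is an involution with one even and one odd subset per pair:

```python
from itertools import combinations

def check_pairing(T):
    T = frozenset(T)
    t = min(T)                                 # distinguished element (even case)
    F = T if len(T) % 2 == 1 else T - {t}      # flip set; |F| is odd either way
    subsets = [frozenset(s) for r in range(len(T) + 1)
                            for s in combinations(T, r)]
    for U in subsets:
        V = U ^ F                              # U's partner: flip membership in F
        assert V ^ F == U and V != U           # an involution with no fixed points
        assert (len(U) + len(V)) % 2 == 1      # one even and one odd per pair

check_pairing(range(5))   # odd-sized T: the partner is the complement
check_pairing(range(4))   # even-sized T: t keeps its membership
```

Note that the parity assertion works because |U| + |V| = |U| + |F| - 2|U \cap F| always has the parity of |U| + |F|, and |F| is odd.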

Binomial coefficient sums

And now for a small tangent. Remember that a set of size n has \binom{n}{k} subsets of size k, where \binom n k is a binomial coefficient, or an entry in Pascal’s Triangle. Since we now know there are the same number of even and odd subsets, if we add \binom{n}{k} when k is even and subtract when k is odd, we must get zero:

\displaystyle \sum_{0 \leq k \leq n} (-1)^k \binom n k = 0

That is, the alternating sum of each row of Pascal’s Triangle (besides the zeroth row) is 0:

\displaystyle \begin{array}{ccccccccccc} &&&1&-&1&&&&=&0 \\ &&1&-&2&+&1&&&=&0 \\ &1&-&3&+&3&-&1&&=&0 \\ 1&-&4&+&6&-&4&+&1&=&0 \end{array}

and so on. As you can see, this is already really obvious for odd rows, for example, 1-3+3-1, since equal numbers cancel out. This is why in some sense the proof was so easy for odd n: we just took the complement of each U. However, for even rows, such as 1-4+6-4+1, it is not as obvious: why should we have 1+6+1 = 4+4? So the proof for even n required a bit more cleverness.
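These alternating sums are also easy to verify numerically with the standard library’s comb; a tiny Python check:

```python
from math import comb

# each row of Pascal's Triangle (n >= 1) has alternating sum zero
for n in range(1, 20):
    assert sum((-1) ** k * comb(n, k) for k in range(n + 1)) == 0

# equivalently, the even- and odd-indexed entries have equal sums;
# each side is half of the row total 2^n
assert sum(comb(8, k) for k in range(0, 9, 2)) \
    == sum(comb(8, k) for k in range(1, 9, 2)) == 2 ** 7
```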

In any case, this discussion of binomial coefficients brings us to:

The second proof

The second proof was hinted at in a comment by decourse. Recall the Binomial Theorem:

\displaystyle (x+y)^n = \sum_{0 \leq k \leq n} \binom n k x^{n-k} y^k

Set x = 1 and y = -1. Then the Binomial Theorem says that

\displaystyle 0 = 0^n = (1-1)^n = \sum_{0 \leq k \leq n} \binom n k (-1)^k.

And voila! The fact that this sum is zero is the same as saying that there are the same number of even and odd subsets. So we can see that the subset parity lemma is really just a special case of the Binomial Theorem. QED.


Which proof is better? I like them both. The second proof is really short, and would be your best bet to quickly communicate the idea of the proof to someone who already knows the Binomial Theorem. On the other hand, although it’s longer, the first proof is really cute and feels simpler somehow—it doesn’t depend on the Binomial Theorem and doesn’t even use any actual numbers or equations.

So, where are we? We’ve proved the subset parity lemma. But this means that

\displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1,\end{cases}

since we saw that this amounts to looking at subsets of the set of unique prime factors of n, and adding 1 for even subsets and -1 for odd subsets.

This proves that \mu(n) = S(n) (the sum of primitive nth roots of unity) since S(n) satisfies the same equation. But it’s also a useful fact in its own right, and it will play an important role in our continuing exploration of the Möbius function \mu.


The Möbius function proof, part 1

In my last post, I introduced the Möbius function \mu(n), which is defined in terms of the prime factorization of n:

  • \mu(n) = 0 if n has any repeated prime factors, that is, if n is divisible by a perfect square.
  • Otherwise, if n has k distinct prime factors, \mu(n) = (-1)^k. In other words, \mu(n) = 1 if n has an even number of distinct prime factors, and -1 if the number of prime factors is odd.
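This definition translates almost directly into code; here is a minimal (and deliberately unoptimized) trial-division version in Python:

```python
def mu(n):
    # 0 if n is divisible by a perfect square; otherwise (-1)^k,
    # where k is the number of distinct prime factors of n
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # d^2 divided the original n
                return 0
            k += 1
        d += 1
    if n > 1:                # one prime factor above sqrt(n) remains
        k += 1
    return (-1) ** k

assert [mu(n) for n in range(1, 11)] == [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```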

Previously, we also considered the sum S(n) of all the primitive nth roots of unity. Today, we will begin proving they are the same: S(n) = \mu(n) for all n \geq 1! This is surprising since it is not immediately obvious what the two functions have to do with each other: one is about sums of complex numbers, and the other is about prime factorization. On the other hand, perhaps it is not all that surprising after all: we’ve already seen that primitive roots of unity have a lot to do with divisibility and prime factorization. In any case, this connection is the key that cracks open a lot of really cool mathematics.

So, how will we do it? Recall that we proved S(n) satisfies this equation:

\displaystyle \sum_{d \mid n} S(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1.\end{cases}

In the case of S(n) this was fairly “obvious”: we proved it essentially by staring at some pictures. But we will show that \mu(n) also satisfies this equation, that is,

\displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1.\end{cases}

(In contrast to S(n), this is not so obvious! But it’s true, and we will prove it.) Given a starting value for n = 1, we have seen that this equation uniquely determines all values of a function for n > 1: for example, given S(1), we can use the equation to uniquely determine S(2); given S(1) and S(2), we can then uniquely determine S(3); and so on. So, if S(n) and \mu(n) both satisfy this equation, they must be the same for all n as long as they have the same starting value—and they do, since S(1) = \mu(1) = 1.

I have to say that this is one of my favorite proofs—it rests on a beautiful little combinatorial argument, and, as we’ll see, it gives some insight into the Principle of Inclusion-Exclusion as well. We’ll start the proof today and then see the really beautiful part in my next post. So, without further ado:

Theorem. \displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1.\end{cases}

Corollary. \mu(n) = S(n) for all n \geq 1.

Proof. Since \mu(1) = 1, we can already see that \sum_{d \mid 1} \mu(d) = \mu(1) = 1 as desired. So now consider \sum_{d \mid n} \mu(d) for some n > 1; we wish to show this sum is zero.

Since \mu(d) = 0 for values of d which are divisible by a perfect square, those values of d contribute nothing to the sum. We can therefore concentrate on only those d which are not divisible by a perfect square (we call such numbers squarefree). Each squarefree d must be a product of distinct prime factors, since if any prime factor was repeated, then d would be divisible by a perfect square. Since d \mid n, the prime factorization of each squarefree d is therefore a subset of the set of distinct prime factors of n, and conversely, each subset of the distinct prime factors of n yields a distinct squarefree divisor d.

For example, consider n = 360 = 2^3 \cdot 3^2 \cdot 5. Each squarefree divisor of n can have at most one copy of each of 2, 3, and 5. There are therefore 2^3 = 8 squarefree divisors of n, one corresponding to each subset of \{2, 3, 5\}:

\displaystyle \begin{array}{rcr} d & \text{factorization} & \mu(d) \\ 1 & & 1 \\ 2 & 2 & -1 \\ 3 & 3 & -1 \\ 5 & 5 & -1 \\ 6 & 2 \cdot 3 & 1 \\ 10 & 2 \cdot 5 & 1 \\ 15 & 3 \cdot 5 & 1 \\ 30 & 2 \cdot 3 \cdot 5 & -1 \end{array}

Adding up all the values of \mu(d) listed in the right column, we can see that there are four 1’s and four -1’s, which cancel to yield 0 as claimed.

As another example, for n = 21 = 3 \cdot 7, all of the divisors of n are squarefree: we have

\displaystyle \mu(1) + \mu(3) + \mu(7) + \mu(3 \cdot 7) = 1 - 1 - 1 + 1 = 0.
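Before pressing on with the proof, we can spot-check the claim numerically. A short Python sketch (mu and mu_divisor_sum are my own helper names):

```python
def mu(n):
    # 0 if n has a repeated prime factor, else (-1)^(distinct prime factors)
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    if n > 1:
        k += 1
    return (-1) ** k

def mu_divisor_sum(n):
    return sum(mu(d) for d in range(1, n + 1) if n % d == 0)

assert mu_divisor_sum(1) == 1
assert mu_divisor_sum(21) == 0    # the n = 21 example above
assert mu_divisor_sum(360) == 0   # the n = 360 example above
assert all(mu_divisor_sum(n) == 0 for n in range(2, 300))
```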

Our goal is to show that this sum is always zero. Since each squarefree divisor contributes either 1 or -1, the only way for the sum to be zero is for there to be an equal number of 1’s and -1’s. Given the definition of \mu, we know that the 1’s come from even-sized subsets of the distinct prime factors of n, and the -1’s come from odd-sized subsets. So in other words, we need to show that there are the same number of even-sized subsets as there are odd-sized subsets.

The fact that these are sets of distinct prime factors really makes no difference, since we are just counting them. What we really need, in essence, is the following lemma:

Subset parity lemma: any nonempty finite set has an equal number of even-sized and odd-sized subsets.

Can you see how to prove this? (And as an aside, do you see why it is false if we exclude the word “nonempty”? Do you see what this has to do with the n=1 versus n > 1 cases?) My next post will explain a nice proof of the lemma and then conclude the proof that \mu(n) = S(n)!


The Möbius function

Time to pull back the curtain a bit! My recent series of posts on complex roots of unity may seem somewhat random and unmotivated so far, but the fact is that I definitely have a destination in mind—we are slowly and inexorably heading towards some really deep and beautiful mathematics. It has just taken a while to build up all the requisite concepts!

Last time, we defined S(n) as the sum of all the primitive nth roots of unity. We noted that

\displaystyle \sum_{d \mid n} S(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1\end{cases}

and used this to deduce values of S(n). In principle we could carry this as far as we want to compute S(n) for any n, but this doesn’t necessarily give us any insight into the nature of S. For example, is S(n) always either -1, 0, or 1? Is there a nicer way to characterize S(n), and/or a quicker way to compute it?

It turns out that S(n) is more than just an idle curiosity. In fact, it is famous enough that it has a name: it is known as the Möbius function, and it is usually written \mu(n). There are indeed nicer ways to characterize and compute \mu(n). Here’s how it is most commonly defined:

  • \mu(n) = 0 if n has any repeated prime factors, that is, if n is divisible by a perfect square.
  • Otherwise, if n has k distinct prime factors, \mu(n) = (-1)^k. In other words, \mu(n) = 1 if n has an even number of distinct prime factors, and -1 if the number of prime factors is odd.

For example:

  • \mu(8) = 0 since 8 is divisible by a perfect square (namely, 4).
  • \mu(5) = -1 since 5 has an odd number of prime factors (namely, one).
  • \mu(10) = 1 since 10 has two distinct prime factors.
  • \mu(30) = -1 since 30 has three distinct prime factors.
  • \mu(1) = 1 since 1 has zero prime factors, and zero is even.

So far, this seems to match up with what we had already deduced about S(n), but it is not at all obvious whether S(n) and \mu(n) are the same. Why should adding up a particular set of complex numbers have anything to do with the number of prime factors of n? In my next post, we’ll prove that they are in fact the same. The proof is a nice exercise in combinatorics and relates to Pascal’s Triangle. (Michael Paul Goldenberg hinted at such a proof in a comment on my previous post.)
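In the meantime, we can at least check the match numerically: here is a Python sketch (the helper names are made up) that computes S(n) from the recurrence of the previous post and compares it against \mu(n):

```python
def mu(n):
    # Mobius function via trial division
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    if n > 1:
        k += 1
    return (-1) ** k

def S_values(N):
    # S(1) = 1, and for n > 1 the divisor sums force
    # S(n) = -(sum of S(d) over proper divisors d of n)
    S = {1: 1}
    for n in range(2, N + 1):
        S[n] = -sum(S[d] for d in range(1, n) if n % d == 0)
    return S

S = S_values(100)
assert all(S[n] == mu(n) for n in range(1, 101))
```

Of course a finite check is not a proof, but it is reassuring!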

As you might guess, \mu is deeply related to prime numbers and the Riemann zeta function. It turns out it is also related to the Principle of Inclusion-Exclusion. All this and more to come!


Computing sums of primitive roots

Remember this picture?

It, and other pictures like it, express the fact that for a given n, if we take the primitive roots for each of the divisors of n, together they make up exactly the set of all nth roots of unity. The above picture is for the specific case of n=12: the 12th roots of unity (the dots on the bottom circle) are composed of the primitive roots for n=1, 2, 3, 4, 6, and 12 (the dots on each of the top circles). I proved this in another post.

Of course, if two sets of complex numbers are the same, then their sums must also be the same. Let’s write S(n) for the sum of all the primitive nth roots: in my previous post we worked out S(n) for certain n but weren’t sure how to compute it in other cases. Well, today that’s going to change!

We already know by symmetry that the sum of all the nth roots of unity is zero, except when n=1 in which case the sum is 1. Putting all of this together,

\displaystyle \sum_{0 \leq k < n} \omega_n^k = \sum_{d \mid n} S(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1\end{cases}

That is, the sum of all nth roots of unity is the same as summing the primitive roots, S(d), for each divisor d of n. (The notation d \mid n means d evenly divides n, so the summation symbol with d \mid n underneath means we are summing over all divisors of n.)

So what have we gained? Well, we can use this equation “backwards” to compute values for S(d)!

  • We already know S(1) = 1.
  • When n = 2, the equation tells us that S(1) + S(2) = 0, so it must be that S(2) = -1. (Of course we already knew that too.)
  • When n = 3, we have S(1) + S(3) = 0 (note that S(2) is not included since 2 is not a divisor of 3), so S(3) = -1 as well.
  • S(1) + S(2) + S(4) = 1 + (-1) + S(4) = 0, so S(4) = 0, which also checks out with our previous knowledge.
  • Recall that S(10) was the first value we were unsure about. Well, S(1) + S(2) + S(5) + S(10) = 1 + (-1) + (-1) + S(10) = 0, so S(10) must be 1.

And so on. Since each value of S(n) can be computed in this way once we know S(d) for all the divisors of n (which are smaller than n), we can continue filling in a table of values of S(n) like this forever.

\displaystyle \begin{array}{c|rrrrrrrrrrrrrrrrrrrr} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline S(n) & 1 & -1 & -1 & 0 & -1 & 1 & -1 & 0 & 0 & 1 & -1 & 0 & -1 & 1 & 1 & 0 & -1 & 0 & -1 & 0 \end{array}

For example, to fill in S(20) we compute S(1) + S(2) + S(4) + S(5) + S(10) + S(20) = 1 - 1 + 0 - 1 + 1 + S(20) = 0 and hence S(20) = 0.
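This “backwards” computation is easy to mechanize. Here is a Python sketch (S_values is a made-up name) that fills in the table exactly as above:

```python
def S_values(N):
    # S(1) = 1; for n > 1 the divisor sums must vanish, so
    # S(n) = -(sum of S(d) over proper divisors d of n)
    S = {1: 1}
    for n in range(2, N + 1):
        S[n] = -sum(S[d] for d in range(1, n) if n % d == 0)
    return S

S = S_values(20)
assert [S[n] for n in range(1, 21)] == [
    1, -1, -1, 0, -1, 1, -1, 0, 0, 1,
    -1, 0, -1, 1, 1, 0, -1, 0, -1, 0]
```

Note that each S(n) is computed from values S(d) with d < n, so the whole table is determined once S(1) is fixed, just as described above.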

Do you notice any patterns? It is not too hard to see that S(n) must always be an integer (why?), but so far it has always been either -1, 0, or 1; will it always be one of those three values? Before my next post you might like to try extending the table of values for S(n) further and exploring these questions yourself!
