More fun with Dirichlet convolution

I’m back after a bit of a hiatus for the holidays! Last time we saw how the principle of Möbius inversion arises from considering the \mu function from the point of view of Dirichlet convolution. Put simply, the Möbius function \mu is the inverse of \mathbf{1} with respect to Dirichlet convolution. As an example, we noted that \mathbf{1} \ast \varphi = \mathit{id} (that is, the sum of \varphi(d) over all divisors d of n is equal to n), and hence by Möbius inversion \varphi = \mu \ast \mathit{id}.

This is far from earth-shattering, but it’s fun to see how a number-theoretic function like \varphi arises in a simple formula involving Dirichlet convolution, and how Möbius inversion allows us to quickly derive a related but non-obvious fact. Let’s do a few more!

First, let’s consider \mathbf{1} \ast \mathbf{1}. We have

\displaystyle (\mathbf{1} \ast \mathbf{1})(n) = \sum_{d \mid n} \mathbf{1}(d) \mathbf{1}(n/d) = \sum_{d \mid n} 1,

which is just counting the number of divisors of n. This function is often denoted \tau. For example, 12 has six divisors (namely, 1, 2, 3, 4, 6, and 12), so \tau(12) = 6. Likewise \tau(7) = 2 (1 and 7), \tau(4) = 3 (1, 2, and 4), and so on.

The above equation can be restated as \mathbf{1} \ast \mathbf{1} = \tau, so by Möbius inversion, we immediately conclude that \mathbf{1} = \mu \ast \tau, that is,

\displaystyle 1 = \sum_{ab = n} \mu(a) \tau(b).

For example, we can check this for n = 12:

\displaystyle \begin{array}{l} \mu(1) \tau(12) + \mu(2) \tau(6) + \mu(3) \tau(4)\\[1em] + \mu(4) \tau(3) + \mu(6) \tau(2) + \mu(12) \tau(1) \\[1em] = 6 - 4 - 3 + 0 + 2 + 0 \\[1em] = 1 \end{array}

indeed.
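If you want to play with this yourself, here’s a quick Python sketch (my own code, not part of the original post) that checks both \mathbf{1} \ast \mathbf{1} = \tau and \mu \ast \tau = \mathbf{1} for a range of n; the helper names divisors, mu, and conv are just made up for illustration.

```python
# My own quick check (not from the post): verify 1*1 = tau and mu*tau = 1
# directly from the definitions, for n up to 200.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    # Moebius function via trial division
    k = 0          # number of distinct prime factors found
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # repeated prime factor
                return 0
            k += 1
        d += 1
    if n > 1:                # leftover prime factor
        k += 1
    return (-1) ** k

def conv(f, g, n):           # Dirichlet convolution, evaluated at n
    return sum(f(d) * g(n // d) for d in divisors(n))

one = lambda n: 1
tau = lambda n: len(divisors(n))

print(tau(12), tau(7), tau(4))                              # 6 2 3
assert all(conv(one, one, n) == tau(n) for n in range(1, 201))
assert all(conv(mu, tau, n) == 1 for n in range(1, 201))
```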

As another example, \mathbf{1} \ast \mathit{id} gives us \sum_{d \mid n} d, the sum of the divisors of n. This is usually denoted \sigma. For example, \sigma(12) = 1+2+3+4+6+12 = 28, \sigma(6) = 1+2+3+6 = 12, \sigma(7) = 1+7 = 8, and so on. Often we also define s(n) = \sigma(n) - n to be the sum of all the divisors of n other than n itself, i.e. the proper divisors. Perfect numbers like 6 and 28 are those for which n = s(n).

Again, since \mathbf{1} \ast \mathit{id} = \sigma, by Möbius inversion we immediately conclude \mathit{id} = \mu \ast \sigma; for example, again when n = 12, we have

\displaystyle \begin{array}{l} \mu(1) \sigma(12) + \mu(2) \sigma(6) + \mu(3) \sigma(4)\\[1em] + \mu(4) \sigma(3) + \mu(6) \sigma(2) + \mu(12) \sigma(1) \\[1em] = 28 - 12 - 7 + 0 + 3 + 0 \\[1em] = 12. \end{array}
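Here’s a similar quick check (same caveats: my own code and illustrative names) of \sigma = \mathbf{1} \ast \mathit{id}, the perfect numbers mentioned above, and \mathit{id} = \mu \ast \sigma.

```python
# Sketch only: check sigma = 1*id, spot-check some perfect numbers, and
# verify Moebius inversion in the form id = mu*sigma.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    return (-1) ** (k + (n > 1))   # a leftover prime factor counts once

sigma = lambda n: sum(divisors(n))
s = lambda n: sigma(n) - n         # sum of proper divisors

print(sigma(12), sigma(6), sigma(7))            # 28 12 8
print([n for n in range(2, 500) if s(n) == n])  # perfect numbers: [6, 28, 496]
assert all(sum(mu(d) * sigma(n // d) for d in divisors(n)) == n
           for n in range(1, 300))
```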


Paper torus with Villarceau circles

I made another thing!

This is a torus, made from 24 crescent-shaped pieces of paper with slots cut into them so they interlock with each other. I followed these instructions on cutoutfoldup.com. There is also a template with some ideas for nice variations here.

The idea of this model is to highlight Villarceau circles. Everyone knows how to find circular cross-sections of a torus, right? You can cut it horizontally (like cutting a bagel in half) in which case you get two concentric circles:

Or you can cut it vertically (like cutting a donut in half, if you wanted to share it with someone else), in which case you get two separate circles:

But it turns out there is yet another way to cut a torus to get circular cross-sections, by cutting it with a diagonal plane:

Note that the plane is tangent to the torus at the two places where the circles intersect. These circles are called Villarceau circles, named after French mathematician Yvon Villarceau, who published a paper about them in 1848. The paper model highlights these circles: each circle is composed of edges from two differently-colored crescents; the points of the crescents come together at the tangent points where two Villarceau circles intersect.

If you want to try making this model yourself, be warned: it is quite difficult! The five-star difficulty rating is no joke. The time from when I first printed out the templates for cutting until the time I finally finished it was several months. Partly that was because I only worked off and on and often went weeks without touching it. But partly it was also because I gave up in frustration a few times. The first time I attempted to assemble all the pieces it ended up completely falling apart. But the second time went much better (using what I had learned from my first failed attempt).


Möbius inversion

In my last post we saw that \mu \ast \mathbf{1} = \varepsilon, that is, the Möbius function \mu is the inverse of \mathbf{1} with respect to Dirichlet convolution. This directly leads to an interesting principle called Möbius inversion.

Möbius inversion. Suppose g(n) is defined for n \geq 1 as the sum of another function f over all the divisors of n:

\displaystyle g(n) = \sum_{d \mid n} f(d).

Then we can “invert” this relationship to express f as a sum over g:

\displaystyle f(n) = \sum_{d \mid n} \mu(d) g(n/d).

The above is the “traditional” way of stating the principle, but I like to write it in this equivalent, more symmetric way:

\displaystyle f(n) = \sum_{ab = n} \mu(a) g(b)

where, as usual, the sum is over all factorizations of n into a product ab. This is the same as the previous equation, because ab = n if and only if a \mid n, in which case b = n/a. But now it becomes more clear that we are dealing with a Dirichlet convolution. Can you see how to prove this?

Proof. If we think in terms of Dirichlet convolution, the proof is almost trivial. Note first that saying g(n) = \sum_{d \mid n} f(d) is the same as saying g = \mathbf{1} \ast f (recall that taking the Dirichlet convolution with \mathbf{1} is the same as summing over all divisors). We can then take the Dirichlet convolution of both sides with \mu:

\displaystyle \begin{array}{rcl} \mu \ast g &=& \mu \ast (\mathbf{1} \ast f) \\ &=& (\mu \ast \mathbf{1}) \ast f \\ &=& \varepsilon \ast f \\ &=& f.\end{array}

In other words, Möbius inversion is really just saying that if g = \mathbf{1} \ast f, then f = \mu \ast g, because \mu is the inverse of \mathbf{1}.
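To see the inversion in action numerically, here’s a little Python sketch (mine, with made-up names): start from an arbitrary f, form g as the divisor sum of f, and check that convolving g with \mu recovers f.

```python
# A numerical illustration of the inversion (my own sketch): take an
# arbitrary f, build g as the sum of f over divisors, then recover f.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    return (-1) ** (k + (n > 1))

def f(n):
    return n * n + 3                      # any test function will do

def g(n):                                 # g = 1 * f
    return sum(f(d) for d in divisors(n))

recovered = lambda n: sum(mu(d) * g(n // d) for d in divisors(n))
assert all(recovered(n) == f(n) for n in range(1, 300))
print("recovered f from g via Moebius inversion")
```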

Let’s try an example! Recall that \varphi denotes the Euler totient function, which counts how many positive integers less than or equal to n are relatively prime to n. For example, \varphi(5) = 4 since all the positive integers less than or equal to 5 (other than 5 itself) are relatively prime to it, and more generally \varphi(p) = p-1 for any prime p. As another example, \varphi(12) = 4 since the only positive integers between 1 and 11 which are relatively prime to 12 are 1, 5, 7, and 11. In a previous post we considered a visual proof that

\displaystyle \sum_{d \mid n} \varphi(d) = n:

For example, as seen in the picture above, 12 = \varphi(1) + \varphi(2) + \varphi(3) + \varphi(4) + \varphi(6) + \varphi(12) = 1 + 1 + 2 + 2 + 2 + 4.

Now, this is in the perfect format to apply Möbius inversion: the sum of the function f(n) = \varphi(n) over all divisors of n is equal to the function g(n) = n. So we conclude that

\displaystyle \varphi(n) = \sum_{d \mid n} \mu(d) (n/d)

or, factoring out n,

\displaystyle \varphi(n) = n \sum_{d \mid n} \frac{\mu(d)}{d}

which is interesting since it expresses \varphi(n) as a fraction of n.
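Here’s a quick way to check that last formula with exact arithmetic (a sketch of my own, using Python’s Fraction; phi_count and phi_formula are made-up names):

```python
# Exact-arithmetic check of phi(n) = n * sum(mu(d)/d over divisors d of n).
from fractions import Fraction
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    return (-1) ** (k + (n > 1))

def phi_count(n):                 # straight from the definition of phi
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def phi_formula(n):               # n times the sum of mu(d)/d
    return n * sum(Fraction(mu(d), d) for d in divisors(n))

assert all(phi_formula(n) == phi_count(n) for n in range(1, 200))
print(phi_formula(12), phi_count(12))   # 4 4
```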

Let’s use this to compute \varphi(360). We have

\displaystyle \varphi(360) = \sum_{d \mid 360} \mu(d) (360/d).

The prime factorization of 360 is 2^3 \cdot 3^2 \cdot 5. We know that \mu(d) will be 0 for any divisor with any repeated factors, so those all cancel out; we only need to consider divisors which correspond to subsets of \{2,3,5\}. (n = 360 is the same example we used in our proof of the key property of the Möbius function.) So

\displaystyle \begin{array}{rcl} \varphi(360) &=& \mu(1) \cdot 360 \\ &+& \mu(2) \cdot 180 \\ &+& \mu(3) \cdot 120 \\ &+& \mu(5) \cdot 72 \\ &+& \mu(2\cdot 3) \cdot 60 \\ &+& \mu(2 \cdot 5) \cdot 36 \\ &+& \mu(3 \cdot 5) \cdot 24 \\ &+& \mu(2 \cdot 3 \cdot 5) \cdot 12 \end{array}

Remembering that \mu yields 1 for an even number of prime factors and -1 for an odd number, we can evaluate this as

\displaystyle \varphi(360) = 360 - 180 - 120 - 72 + 60 + 36 + 24 - 12 = 96.

(and we can use e.g. a computer to verify that this is correct). Note how this works: we start with 360 and then subtract out all the numbers divisible by 2 (all 180 of them), as well as those divisible by 3 (120) and 5 (72). But now we have subtracted some numbers multiple times, e.g. those which are divisible by both 2 and 3. So we add back in the numbers which are divisible by 2 \cdot 3 (60 of them), by 2 \cdot 5 (36), and by 3 \cdot 5 (24). Finally, we’ve now overcounted numbers which are divisible by all three, so we subtract off the 12 numbers which are divisible by 2 \cdot 3 \cdot 5. Egad! This seems really similar to PIE! Indeed, this is no accident: it turns out that the Principle of Inclusion-Exclusion is really just another kind of Möbius inversion. But that’s a subject for another post!
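As for verifying with a computer, here’s about the shortest check I can think of (my own snippet, nothing fancy): count the integers up to 360 that are coprime to it, and compare with the signed divisor sum above.

```python
# Count integers coprime to 360 directly, and compare with the signed sum.
from math import gcd

print(sum(1 for k in range(1, 361) if gcd(k, 360) == 1))   # 96
print(360 - 180 - 120 - 72 + 60 + 36 + 24 - 12)            # 96
```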


Dirichlet convolution and the Möbius function

Recall from last time that the Dirichlet convolution of two functions f and g is written f \ast g and defined by:

\displaystyle (f \ast g)(n) = \sum_{ab = n} f(a) g(b)

where the sum is taken over all possible factorizations of n into a product ab of positive integers. Last time we saw that \ast is commutative and associative, and has \varepsilon(n) as an identity, where \varepsilon is the function which produces 1 for an input of 1 and 0 for all other inputs.

So what does the Möbius function have to do with this? Let’s start by considering a different function:

\displaystyle \mathbf{1}(n) = 1

is the function which ignores its input and always returns 1. Of course this is not the same thing as \varepsilon, and despite its name \mathbf{1} is indeed not an identity for Dirichlet convolution. That is, if we take some function f and find its Dirichlet convolution with \mathbf{1}, we don’t get f again—but we do get something interesting:

\displaystyle (\mathbf{1} \ast f)(n) = \sum_{ab=n} \mathbf{1}(a) f(b) = \sum_{ab=n} f(b) = \sum_{d \mid n} f(d)

The first step is the definition of \ast; the second step is the definition of \mathbf{1}; and the last step just notes that the sum of all f(b) where ab = n is the same as taking the sum of f(d) for all divisors of n.

That is, \mathbf{1} \ast f is the function which for a given n outputs not just f(n) but the sum of f over all divisors of n.

We’re now ready to see how \mu enters the picture!

Claim: \mathbf{1} \ast \mu = \varepsilon.

That is, \mu is the inverse of \mathbf{1} with respect to Dirichlet convolution. In my next post we’ll see some cool implications of this; for now, let’s just prove it.

Proof. Here’s the proof. Ready?

\displaystyle (\mathbf{1} \ast \mu)(n) = \sum_{d \mid n} \mu(d) = \varepsilon(n).

Actually, there was hardly anything left to prove! The first equality is because of what we just showed about taking the Dirichlet convolution of \mathbf{1} with something else. And the second equality is a property of \mu we just finished proving in some previous posts.
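For the skeptical, here’s a tiny numerical check of the claim (my own code; divisors and mu are just illustrative helper names):

```python
# Check that (1 * mu)(n), i.e. the sum of mu over divisors of n, is epsilon(n).

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            k += 1
        d += 1
    return (-1) ** (k + (n > 1))

eps = lambda n: 1 if n == 1 else 0

assert all(sum(mu(d) for d in divisors(n)) == eps(n) for n in range(1, 1000))
print("1 * mu = epsilon checked for n = 1..999")
```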


Dirichlet convolution

Let f and g be two functions defined on the positive integers. Then the Dirichlet convolution of f and g, written f \ast g, is another function on the positive integers, defined as follows:

\displaystyle (f \ast g)(n) = \sum_{ab = n} f(a) g(b)

The sum is taken over all possible factorizations of n into a product ab of positive integers. For example, suppose f(n) = n and g(n) = n^2. Then

\displaystyle \begin{array}{rcl}(f \ast g)(6) &=& \displaystyle \sum_{ab = 6} f(a) g(b) \\[1.5em] &=& f(1)g(6) + f(2)g(3) + f(3)g(2) + f(6)g(1) \\[1em] &=& 1 \cdot 6^2 + 2 \cdot 3^2 + 3 \cdot 2^2 + 6 \cdot 1^2 \\[1em] &=& 36 + 18 + 12 + 6 = 72.\end{array}
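If you’d like to experiment, here is a small Python sketch (my own code, not from the post) that implements \ast directly from the definition and reproduces the computation above; dirichlet is just an illustrative name.

```python
# Dirichlet convolution straight from the definition: sum f(a)*g(b) over
# all factorizations n = a*b.

def dirichlet(f, g):
    def fg(n):
        return sum(f(a) * g(n // a) for a in range(1, n + 1) if n % a == 0)
    return fg

f = lambda n: n          # f(n) = n
g = lambda n: n * n      # g(n) = n^2

print(dirichlet(f, g)(6))    # 36 + 18 + 12 + 6 = 72
```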

At this point this may seem somewhat arbitrary, but over the next few posts we’ll see that it’s not arbitrary at all—this operation is deeply connected with the Möbius function \mu and hence with primes and factorization.

So what properties does \ast have?

  • It’s commutative, that is, f \ast g = g \ast f. This follows directly from the commutativity of multiplication:

    \displaystyle (f \ast g)(n) = \sum_{ab = n} f(a) g(b) = \sum_{ba = n} g(b) f(a) = (g \ast f)(n).

  • It’s also associative, that is, f \ast (g \ast h) = (f \ast g) \ast  h. This also follows from the associativity of multiplication, and distributivity of multiplication over addition. The proof is kind of fiddly but not really difficult:

    \displaystyle \begin{array}{rcl} (f \ast (g \ast h))(n) &=& \displaystyle \sum_{ab = n} f(a) (g \ast h)(b) \\[1.5em] &=& \displaystyle \sum_{ab = n} f(a) \left( \sum_{cd = b} g(c) h(d) \right) \\[1.5em] &=& \displaystyle \sum_{ab = n} \sum_{cd = b} f(a) \left( g(c) h(d) \right) \\[1.5em] &=& \displaystyle \sum_{a(cd) = n} f(a) (g(c) h(d)) \\[1.5em] &=& \displaystyle \sum_{(ac)d = n} (f(a) g(c)) h(d) \\[1.5em] &=& \displaystyle \sum_{ed = n} \sum_{ac=e} (f(a) g(c)) h(d) \\[1.5em] &=& \displaystyle \sum_{ed = n} \left(\sum_{ac=e} f(a)g(c)\right) h(d) \\[1.5em] &=& \displaystyle \sum_{ed = n} (f \ast g)(e) h(d) \\[1.5em] &=& ((f \ast g) \ast h)(n) \end{array}

  • Define the function \varepsilon as follows:

    \displaystyle \varepsilon(n) = \begin{cases} 1 & \text{if } n=1 \\ 0 & \text{otherwise.} \end{cases}

    Then I claim that \varepsilon is an identity element for \ast, that is, for any function f we have f \ast \varepsilon = \varepsilon \ast f = f. By definition,

    \displaystyle \displaystyle (f \ast \varepsilon)(n) = \sum_{ab = n}f(a) \varepsilon(b)

    But \varepsilon(b) = 0 for every b other than 1, which cancels out most of the terms of the sum. The only term that does not become zero is f(n) \varepsilon(1) = f(n). So the right-hand sum reduces to just f(n), that is, f \ast \varepsilon = f. Since we already know \ast is commutative this proves that \varepsilon \ast f = f too.

So, to sum up1, we have defined the Dirichlet convolution \ast, and shown that it is associative, commutative, and has \varepsilon as an identity element. That is, to use some fancy math words, we have shown that (\ast, \varepsilon) is a commutative monoid over the set of functions defined on the positive integers.2 Next time we will see what this has to do with the Möbius function \mu.
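The proofs above are the real content, but if you like empirical sanity checks, here’s a quick numerical spot-check of all three laws (my own sketch; the test functions are arbitrary, and of course this is not a proof).

```python
# Spot-check commutativity, associativity, and the identity law for *.

def conv(f, g):
    return lambda n: sum(f(a) * g(n // a) for a in range(1, n + 1) if n % a == 0)

eps = lambda n: 1 if n == 1 else 0
f = lambda n: n
g = lambda n: n * n
h = lambda n: 2 * n + 1

for n in range(1, 100):
    assert conv(f, g)(n) == conv(g, f)(n)                    # commutativity
    assert conv(f, conv(g, h))(n) == conv(conv(f, g), h)(n)  # associativity
    assert conv(f, eps)(n) == f(n) == conv(eps, f)(n)        # identity
print("commutativity, associativity, and identity all check out for n = 1..99")
```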


  1. Haha.

  2. Technically we should also specify the codomain of the functions, but it doesn’t matter much: typically it is taken to be the complex numbers, but really anything with a commutative, associative multiplication that distributes over addition will do.


The Möbius function proof, part 2 (the subset parity lemma)

Continuing from my previous post, we are in the middle of proving that \mu satisfies the same equation as S, that is,

\displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1,\end{cases}

and that therefore \mu(n) = S(n) for all n \geq 1, that is, \mu(n) is the sum of all the primitive nth roots of unity.

We had gotten as far as the following lemma:

Subset parity lemma: any nonempty finite set has an equal number of even-sized and odd-sized subsets.

In fact, this lemma is the only remaining piece of the proof: if we can prove this then we will have a complete proof that S(n) = \mu(n). Just for fun, we’re actually going to consider two different proofs of this lemma today.

The first proof

The first proof will be by a combinatorial or counting argument, and was suggested in a comment by Andrey Mokhov. The idea is to match up the subsets in such a way that:

  • Each subset is matched with exactly one other subset, that is, the matching is 1-1.
  • Each even subset is matched with an odd subset, and vice versa.

Since the subsets are in pairs and each pair has one even and one odd subset, the inescapable conclusion will be that there are the same number of even and odd subsets. For example, suppose there are a bunch of people in a room and you tell them to get into pairs where each pair has one person with a hat and one person without a hat. If they are able to do this exactly, with no unpaired people left over, then you know (without actually having to count people or hats!) that there must be an equal number of hatted and hatless people.

Call the entire set T. Let’s first consider the case where T has an odd number of elements. How will we match up its subsets? That is, given some subset U \subseteq T, which other subset should U be matched with? A natural idea—which turns out to be the right one in this case—is to match U with its complement T - U, that is, the subset of elements from T which are not in U. Since the complement of U’s complement is U again (and also because U cannot be its own complement), this does indeed put the subsets of T into pairs. Also, if |U| is even, then |T - U| = |T| - |U| = \text{odd} - \text{even} = \text{odd} (and vice versa if |U| is odd). So each pair has one even and one odd subset.

A picture will probably help. Here’s an example where |T| = 5. As you can check, each row in the picture is a pair of subsets which are each other’s complement, with one even subset and one odd subset:

Now, what about if |T| is even? Matching U with T - U doesn’t work anymore, since they have the same parity—both are even or both are odd. But we can use a cute trick to fix things. Pick some particular element t \in T (note, this is the place in the proof where we depend on the assumption that T is nonempty!)—for example, we might pick the smallest element if they can be ordered, but it really doesn’t matter which one we pick. Previously, when complementing a subset U, we “flipped” the membership of each element of T: if x is element of U, then x is not an element of T - U, and vice versa. Now we do the same, except that we don’t flip t: if t \in U we keep it; if t \not\in U we leave it out. But we still flip all the other elements. Here’s an example when |T| = 4:

Note how we match all the sets that don’t have the smallest (blue) element with each other, and likewise all the sets that do have the smallest element are matched with each other. But within each pair, all the elements other than the smallest one are flipped.

You can convince yourself that this does indeed match up all the subsets into pairs. Moreover, each pair has one even and one odd subset, because the sizes of the paired sets add up to an odd number: n-1 if they don’t contain t, or n + 1 if they do. QED.

And that’s it for the first proof! We’ve shown that any nonempty finite set has the same number of even and odd subsets, because we can always match them up perfectly.
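If you’d like to see the matching spelled out concretely, here’s a small Python sketch of it (my own encoding, with T = \{0, \dots, n-1\} standing in for an arbitrary set): it checks that the map really is an involution and that each pair has one even and one odd subset.

```python
# Encode the matching from the first proof and check its two properties.
from itertools import combinations

def subsets(T):
    return [frozenset(c) for r in range(len(T) + 1) for c in combinations(T, r)]

def match(U, T, t):
    if len(T) % 2 == 1:
        return T - U                    # odd |T|: plain complement
    flipped = (T - U) - {t}             # even |T|: flip everything except t,
    return flipped | ({t} & U)          # keeping t's membership unchanged

for n in range(1, 8):
    T = frozenset(range(n))
    t = 0                                # the distinguished element
    for U in subsets(T):
        V = match(U, T, t)
        assert match(V, T, t) == U       # pairs up: the matching is an involution
        assert len(U) % 2 != len(V) % 2  # one even, one odd in each pair
print("even and odd subsets match up perfectly for |T| = 1..7")
```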

Binomial coefficient sums

And now for a small tangent. Remember that a set of size n has \binom{n}{k} subsets of size k, where \binom n k is a binomial coefficient, or an entry in Pascal’s Triangle. Since we now know there are the same number of even and odd subsets, if we add \binom{n}{k} when k is even and subtract when k is odd, we must get zero:

\displaystyle \sum_{0 \leq k \leq n} (-1)^k \binom n k = 0

That is, the alternating sum of each row of Pascal’s Triangle (besides the zeroth row) is 0:

\displaystyle \begin{array}{ccccccccccc} &&&1&-&1&&&&=&0 \\ &&1&-&2&+&1&&&=&0 \\ &1&-&3&+&3&-&1&&=&0 \\ 1&-&4&+&6&-&4&+&1&=&0 \end{array}

and so on. As you can see, this is already really obvious for odd rows, for example, 1-3+3-1, since equal numbers cancel out. This is why in some sense the proof was so easy for odd n: we just took the complement of each U. However, for even rows, such as 1-4+6-4+1, it is not as obvious: why should we have 1+6+1 = 4+4? So the proof for even n required a bit more cleverness.
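Here’s a quick standard-library check of those alternating row sums (my own snippet):

```python
# The alternating sum of row n of Pascal's triangle vanishes for n >= 1.
from math import comb

for n in range(1, 11):
    assert sum((-1) ** k * comb(n, k) for k in range(n + 1)) == 0
print("alternating sums of rows 1..10 of Pascal's triangle are all 0")
```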

In any case, this discussion of binomial coefficients brings us to:

The second proof

The second proof was hinted at in a comment by decourse. Recall the Binomial Theorem:

\displaystyle (x+y)^n = \sum_{0 \leq k \leq n} \binom n k x^{n-k} y^k

Set x = 1 and y = -1. Then, for n \geq 1, the Binomial Theorem says that

\displaystyle 0 = 0^n = (1-1)^n = \sum_{0 \leq k \leq n} \binom n k (-1)^k.

And voila! The fact that this sum is zero is the same as saying that there are the same number of even and odd subsets. So we can see that the subset parity lemma is really just a special case of the Binomial Theorem. QED.

Onward

Which proof is better? I like them both. The second proof is really short, and would be your best bet to quickly communicate the idea of the proof to someone who already knows the Binomial Theorem. On the other hand, although it’s longer, the first proof is really cute and feels simpler somehow—it doesn’t depend on the Binomial Theorem and doesn’t even use any actual numbers or equations.

So, where are we? We’ve proved the subset parity lemma. But this means that

\displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1,\end{cases}

since we saw that this amounts to looking at subsets of the set of unique prime factors of n, and adding 1 for even subsets and -1 for odd subsets.

This proves that \mu(n) = S(n) (the sum of primitive nth roots of unity) since S(n) satisfies the same equation. But it’s also a useful fact in its own right, and it will play an important role in our continuing exploration of the Möbius function \mu.


The Möbius function proof, part 1

In my last post, I introduced the Möbius function \mu(n), which is defined in terms of the prime factorization of n:

  • \mu(n) = 0 if n has any repeated prime factors, that is, if n is divisible by a perfect square.
  • Otherwise, if n has k distinct prime factors, \mu(n) = (-1)^k. In other words, \mu(n) = 1 if n has an even number of distinct prime factors, and -1 if the number of prime factors is odd.

Previously, we also considered the sum S(n) of all the primitive nth roots of unity. Today, we will begin proving they are the same: S(n) = \mu(n) for all n \geq 1! This is surprising since it is not immediately obvious what the two functions have to do with each other: one is about sums of complex numbers, and the other is about prime factorization. On the other hand, perhaps it is not all that surprising after all: we’ve already seen that primitive roots of unity have a lot to do with divisibility and prime factorization. In any case, this connection is the key that cracks open a lot of really cool mathematics.

So, how will we do it? Recall that we proved S(n) satisfies this equation:

\displaystyle \sum_{d \mid n} S(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1.\end{cases}

In the case of S(n) this was fairly “obvious”: we proved it essentially by staring at some pictures. But we will show that \mu(n) also satisfies this equation, that is,

\displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1.\end{cases}

(In contrast to S(n), this is not so obvious! But it’s true, and we will prove it.) Given a starting value for n = 1, we have seen that this equation uniquely determines all values of a function for n > 1: for example, given S(1), we can use the equation to uniquely determine S(2); given S(1) and S(2), we can then uniquely determine S(3); and so on. So, if S(n) and \mu(n) both satisfy this equation, they must be the same for all n as long as they have the same starting value—and they do, since S(1) = \mu(1) = 1.

I have to say that this is one of my favorite proofs—it rests on a beautiful little combinatorial argument, and, as we’ll see, it gives some insight into the Principle of Inclusion-Exclusion as well. We’ll start the proof today and then see the really beautiful part in my next post. So, without further ado:

Theorem. \displaystyle \sum_{d \mid n} \mu(d) = \begin{cases}1 \qquad n=1 \\ 0 \qquad n>1.\end{cases}

Corollary. \mu(n) = S(n) for all n \geq 1.

Proof. Since \mu(1) = 1, we can already see that \sum_{d \mid 1} \mu(d) = \mu(1) = 1 as desired. So now consider \sum_{d \mid n} \mu(d) for some n > 1; we wish to show this sum is zero.

Since \mu(d) = 0 for values of d which are divisible by a perfect square, those values of d contribute nothing to the sum. We can therefore concentrate on only those d which are not divisible by a perfect square (we call such numbers squarefree). Each squarefree d must be a product of distinct prime factors, since if any prime factor were repeated, then d would be divisible by a perfect square. Since d \mid n, the prime factorization of each squarefree d is therefore a subset of the set of distinct prime factors of n, and conversely, each subset of the distinct prime factors of n yields a distinct squarefree divisor d.

For example, consider n = 360 = 2^3 \cdot 3^2 \cdot 5. Each squarefree divisor of n can have at most one copy of each of 2, 3, and 5. There are therefore 2^3 = 8 squarefree divisors of n, one corresponding to each subset of \{2, 3, 5\}:

\displaystyle \begin{array}{rcr} d & \text{factorization} & \mu(d) \\ 1 & & 1 \\ 2 & 2 & -1 \\ 3 & 3 & -1 \\ 5 & 5 & -1 \\ 6 & 2 \cdot 3 & 1 \\ 10 & 2 \cdot 5 & 1 \\ 15 & 3 \cdot 5 & 1 \\ 30 & 2 \cdot 3 \cdot 5 & -1 \end{array}

Adding up all the values of \mu(d) listed in the right column, we can see that there are four 1’s and four -1’s, which cancel to yield 0 as claimed.

As another example, for n = 21 = 3 \cdot 7, all of the divisors of n are squarefree: we have

\displaystyle \mu(1) + \mu(3) + \mu(7) + \mu(3 \cdot 7) = 1 - 1 - 1 + 1 = 0.
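Here’s a little Python sketch of this reduction (my own encoding): generate the squarefree divisors of n from subsets of its prime factors, attach (-1)^{\text{subset size}} to each, and add them up.

```python
# Squarefree divisors as subsets of the prime factors, with their mu values.
from itertools import combinations
from math import prod

def squarefree_divisor_table(primes):
    return [(prod(s), (-1) ** len(s))
            for r in range(len(primes) + 1)
            for s in combinations(primes, r)]

table = squarefree_divisor_table((2, 3, 5))    # n = 360
print(table)  # [(1, 1), (2, -1), (3, -1), (5, -1), (6, 1), (10, 1), (15, 1), (30, -1)]
print(sum(m for _, m in table))                             # 0
print(sum(m for _, m in squarefree_divisor_table((3, 7))))  # n = 21: 0
print(sum(m for _, m in squarefree_divisor_table(())))      # n = 1:  1
```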

Our goal is to show that this sum is always zero, and the only way for it to be zero is if there are always an equal number of 1’s and -1’s. Given the definition of \mu, we know that the 1’s come from even-sized subsets of the distinct prime factors of n, and the -1’s come from odd-sized subsets. So in other words, we need to show that there are the same number of even-sized subsets as there are odd-sized subsets.

The fact that these are sets of distinct prime factors really makes no difference, since we are just counting them. What we really need, in essence, is the following lemma:

Subset parity lemma: any nonempty finite set has an equal number of even-sized and odd-sized subsets.

Can you see how to prove this? (And as an aside, do you see why it is false if we exclude the word “nonempty”? Do you see what this has to do with the n=1 versus n > 1 cases?) My next post will explain a nice proof of the lemma and then conclude the proof that \mu(n) = S(n)!
