Sigma notation ninja tricks 2: splitting sums

[Previous posts in this series: jumping constants]

Trick 2: splitting sums

I’ve written about this before, but it’s worth spelling out for completeness’ sake. If you have a sum of something which is itself a sum, like this:

\displaystyle \sum (A + B)

you can split it up into two separate sums:

\displaystyle \sum (A + B) = \left( \sum A \right) + \left( \sum B \right)

(You can also sort of think of this as the sigma “distributing” over the sum.) For example,

\displaystyle \sum_{k=1}^n (k + 2) = \left( \sum_{k=1}^n k \right) + \left( \sum_{k=1}^n 2 \right).

Why is this? Last time, the fact that we can pull constants in and out of a sigma came down to a property of addition, namely, that multiplication distributes over it. This, too, turns out to come down to some other properties of addition. As before, let’s think about writing out these sums explicitly, without sigma notation.

First, on the left-hand side, we have something like

\displaystyle \sum_i (A_i + B_i) = (A_1 + B_1) + (A_2 + B_2) + (A_3 + B_3) + \dots

And on the right-hand side, we have

\displaystyle \left(\sum_i A_i\right) + \left(\sum_i B_i \right) = (A_1 + A_2 + A_3 + \dots) + (B_1 + B_2 + B_3 + \dots)

We can see that we get all the same terms, but in a different order. But as we all know, the order in which you add things up doesn’t matter: addition is both associative and commutative, so we can freely reorder terms and still get the same sum.1
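If you’d like to see this with concrete numbers, here’s a quick sanity check in Python (just a sketch; the particular lists are arbitrary values I made up):

```python
# Check that summing (A_i + B_i) termwise equals summing A and B separately.
# The particular values are arbitrary, just for illustration.
A = [3, 1, 4, 1, 5]
B = [9, 2, 6, 5, 3]

lhs = sum(a + b for a, b in zip(A, B))  # sum of (A_i + B_i)
rhs = sum(A) + sum(B)                   # (sum of A) + (sum of B)

assert lhs == rhs
print(lhs, rhs)  # 39 39
```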

So \sum “distributes over” sums! Let’s use the example from above to see how this can be useful. Suppose we want to figure out a closed-form expression for

\displaystyle \sum_{k=1}^n (k + 2).

If we didn’t otherwise know how to proceed we could certainly start by trying some examples and looking for a pattern. Or we could even be a bit more sophisticated and notice that this sum will be 3 + 4 + 5 + \dots + (n+2), so it must be 3 less than the triangular number 1 + 2 + 3 + \dots + (n+2). But we don’t even need to be this clever. If we just distribute the sigma over the addition, we transform the expression into two simpler sums which are easier to deal with on their own:

\displaystyle \sum_{k=1}^n (k + 2) = \left( \sum_{k=1}^n k \right) + \left( \sum_{k=1}^n 2 \right)

The first sum is 1 + 2 + 3 + \dots + n, that is, the nth triangular number, which is equal to n(n+1)/2. The second sum is just 2 + 2 + \dots + 2 (n times), so it is equal to 2n. Thus, an expression for the entire sum is

\displaystyle \frac{n(n+1)}{2} + 2n = \frac{n(n+1) + 4n}{2} = \frac{n(n+5)}{2}.

As a double check, is this indeed three less than the (n+2)nd triangular number?

\displaystyle \frac{(n+2)(n+3)}{2} - 3 = \frac{n^2 + 5n + 6}{2} - \frac{6}{2} = \frac{n(n+5)}{2}

Sure enough! Math works!
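And if you want a computer to double-check too, here’s a tiny Python sketch verifying the closed form for a range of values of n:

```python
# Verify that sum_{k=1}^n (k + 2) = n(n+5)/2 for n = 1..100.
for n in range(1, 101):
    direct = sum(k + 2 for k in range(1, n + 1))
    closed = n * (n + 5) // 2  # n(n+5) is always even, so // is exact
    assert direct == closed
print("closed form n(n+5)/2 agrees for n = 1..100")
```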


  1. At least, as long as the sum is finite! This still works for some infinite sums too, but we have to be careful about convergence.


Book reviews: The Joy of SET and Elements of Mathematics

I have a couple of book reviews for you today! I finished both of these books recently and really enjoyed them. Though they are quite different, both gave me new ways to think about some topics I already knew, and in particular helped me make new connections between elementary and advanced concepts.

[Disclosure of Material Connection: Princeton Press kindly provided me with free review copies of these books. I was not required to write a positive review. The opinions expressed are my own.]

The Joy of SET

Liz McMahon, Gary Gordon, Hannah Gordon, and Rebecca Gordon
Princeton University Press, 2016

Most people are probably familiar with the card game SET: each card has four attributes (number, color, shading, shape), each of which can have one of three values, for a total of 3^4 = 81 cards. The goal is to find “sets”, which consist of three cards where each attribute is either identical on all three cards, or distinct on all three cards. It’s a fun game, and because it has to do with combinations of things and pattern recognition, many people probably have the intuitive sense that it’s a “mathy” sort of game, or the sort of game that people who enjoy math would also enjoy.

Well, it turns out, as the authors convincingly demonstrate, that the mathematics behind SET actually goes very deep. For example, did you know that there are exactly 3^{n-1}(3^n - 1)/2 distinct SETs in an n-dimensional version of the game? (The normal game that everyone plays has n = 4.) How about the fact that the SET deck is a concrete model of the four-dimensional affine geometry AG(4,3)? Did you know that the most cards you can have without a SET is 20, and that this is intimately connected to structures called maximal caps in affine geometries—and that no one knows how many cards you could have without a SET in a 7-dimensional (or higher) version of the game?
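(If you’re skeptical of that formula, it’s easy to verify by brute force for small n. Here’s a quick Python sketch, using the standard encoding of cards as vectors mod 3: three values from {0, 1, 2} are all the same or all different exactly when they sum to 0 mod 3.)

```python
from itertools import combinations, product

def count_sets(n):
    """Brute-force count of SETs among the 3^n cards with n attributes."""
    cards = list(product(range(3), repeat=n))
    return sum(
        all((a + b + c) % 3 == 0 for a, b, c in zip(*triple))
        for triple in combinations(cards, 3)
    )

for n in range(1, 5):
    formula = 3 ** (n - 1) * (3 ** n - 1) // 2
    print(n, count_sets(n), formula)  # the two counts match
```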

The authors explain all this, and much more (with a lot of humor1 along the way!), ranging through probability, modular arithmetic, combinatorics, geometry, linear algebra, and a bunch of other topics. The book begins gently, but by the end it gets into some fairly deep mathematics, and there are lots of exercises and projects at the end of each chapter. This book would make a fantastic resource for a middle school, high school, or undergraduate math club. I could even see using it as the textbook for some sort of extra/special topics class with some motivated students.

Elements of Mathematics

John Stillwell
Princeton University Press, 2016

I am a huge fan of Stillwell’s writing (almost six years ago I wrote a short review of another of his books, Roads to Infinity), and this book didn’t disappoint. It is definitely aimed at a more sophisticated audience than the SET book, but due to Stillwell’s lucid explanations it still manages to start out rather gently and holds many treasures even for the intrepid high school reader.

The book has two basic goals. The first is simply to lay out an overview of “elementary” mathematics, accessible in theory to anyone with a high school level mathematical background. “Elementary” mathematics refers not just to the sort of mathematics learned in grade school (arithmetic, fractions, and so on) but to the mathematics that would nowadays be viewed as “basic” by professional mathematicians—the sort of stuff that every professional mathematician is familiar with regardless of their specialty. In this respect the book is quite a tour de force. It is organized by areas of mathematics—arithmetic, computation, algebra, geometry, calculus, and so on—and in each area Stillwell manages to distill out the big ideas. He is a master expositor, and the text manages to be engaging and accessible without watering down the mathematics. I definitely learned new things from the book! One thing Stillwell does particularly well is to explain not just the big ideas themselves but the connections between them.

The other basic goal of the book is to explore the boundary between “elementary” and “advanced” mathematics. This sounds like it would be rather vague and amorphous—after all, aren’t the notions of “elementary” and “advanced” quite relative? Doesn’t it depend on how much background you have? Can’t math that is “elementary” to one person be “advanced” to someone else? This is all true, but Stillwell isn’t really talking about which areas of math are hard and which are easy. Professional mathematicians often talk about certain proofs being “elementary”, and it is often celebrated when someone finds an “elementary” proof of a theorem, even if that theorem had already been proved by “non-elementary” means, and even if the non-elementary proof was shorter. Stillwell is trying to pin down a precise meaning of this sense of “elementary”, and makes a well-reasoned case that it all comes down to infinity: something is non-elementary precisely when infinity enters into its proof in a fundamental way. This may seem rather arbitrary at first blush, but through a number of examples and surprising connections between different areas of mathematics, Stillwell makes it clear that this is an extremely “natural” place to draw a line in the sand. Not that having such a dividing line is in and of itself of any value—it’s simply fascinating to note that there is such a natural line at all, and by exploring it in depth we shed new light on the mathematics to either side of it.


  1. They are extremely fond of footnotes. Reminds me of someone I know.


Sigma notation ninja tricks 1: jumping constants

Almost exactly ten years ago, I wrote a page on this blog explaining big-sigma notation. Since then it’s consistently been one of the highest-traffic posts on my blog, and still gets occasional comments and questions. A few days ago, a commenter named Kevin asked,

Could you explain how to take a constant outside of a summation and bring it inside the summation?

This made me realize there’s a lot more still to be explained! In particular, understanding what sigma notation means is one thing, but becoming fluent in its use requires learning a number of “tricks”. Of course, as always, they’re not really “tricks” at all: understanding what the notation means is the necessary foundation for understanding why the tricks work!

Trick 1: jumping constants

For today, we’ll start by considering what Kevin asked about. Consider what is meant by this sigma notation:

\displaystyle \sum_{i=1}^{4} c X_i

It doesn’t really matter what the X’s are; the point is just that each X_i might be different, whereas c is a constant that doesn’t change. So this can be expanded as

\displaystyle \sum_{i=1}^{4} c X_i = c X_1 + c X_2 + c X_3 + c X_4

Since multiplication distributes over addition, we can factor out the c:

c X_1 + c X_2 + c X_3 + c X_4 = c (X_1 + X_2 + X_3 + X_4)

The right-hand side can now be written as

\displaystyle c \left( \sum_{i=1}^4 X_i \right),

so overall we have shown that

\displaystyle \sum_{i=1}^4 c X_i = c \left(\sum_{i=1}^4 X_i\right).

We usually omit the parentheses and just write

\displaystyle c \sum_{i=1}^4 X_i.

Our argument didn’t really depend on any of the specifics (like the fact that i goes from 1 to 4). The general principle is that constants can “jump” back and forth across the sigma, which corresponds to multiplication distributing across addition.

The one remaining question is—what counts as a “constant”? The answer is, anything that doesn’t depend on the index variable. So the “constant” can even involve some variables, as long as they are other variables! For example,

\displaystyle \sum_{i = 1}^k (n^2 + k) g(i) = (n^2 + k) \sum_{i=1}^k g(i)

In the context of this sum, n^2 + k is a “constant”, because it does not have i in it. Since it doesn’t contain i, it is going to be exactly the same for each term of the sum, which means it can be factored out.
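Here’s a quick numeric check of this principle in Python (a sketch; the function g and the values of n and k are arbitrary choices for illustration):

```python
# Check that a "constant" (anything not involving i) jumps across the sigma.
def g(i):                  # an arbitrary function of the index, for testing
    return i * i + 1

n, k = 7, 5
c = n ** 2 + k             # constant with respect to i, despite the variables

inside  = sum(c * g(i) for i in range(1, k + 1))
outside = c * sum(g(i) for i in range(1, k + 1))

assert inside == outside
print(inside, outside)     # 3240 3240
```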


The Riemann zeta function and prime numbers

In a previous post I defined the famous Riemann zeta function,

\displaystyle \zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}.

Today I want to give you a glimpse of what it has to do with prime numbers—which is a big part of why it is so famous.

Consider the infinite product

\displaystyle \left(\frac{1}{1-2^{-s}}\right)\left(\frac{1}{1-3^{-s}}\right)\left(\frac{1}{1-5^{-s}}\right)\left(\frac{1}{1-7^{-s}}\right) \dots \left(\frac{1}{1-p^{-s}}\right) \dots

where each sequential factor has the next prime number raised to the -s power. Using big-Pi notation, we can write this infinite product more concisely as

\displaystyle \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}

(The big \Pi means a \Piroduct just like a big \Sigma means a \Sigmaum.)

Now let’s do a bit of algebra. First, recall that the infinite geometric series 1 + r + r^2 + r^3 + \dots has the sum

\displaystyle 1 + r + r^2 + r^3 + \dots = \frac{1}{1-r},

as long as |r| < 1. (For some hints on how to derive this formula if you haven’t seen it before, see this blog post or this one.) Of course, 1/(1-p^{-s}) is of this form, with r = p^{-s}. Note that p^{-s} = 1/p^s, which is less than 1 as long as s > 0, so the geometric series formula applies, and we have

\displaystyle \prod_p \frac{1}{1 - p^{-s}} = \prod_p (1 + p^{-s} + p^{-2s} + p^{-3s} + \dots)

(From now on I’ll just write \prod_p instead of \prod_{p\text{ prime}}.) That is,

\displaystyle (1 + 2^{-s} + 2^{-2s} + \dots)(1 + 3^{-s} + 3^{-2s} + \dots)(1 + 5^{-s} + 5^{-2s} + \dots) \dots

So this is an infinite product of infinite sums! But you’re not scared of a little infinity, are you? Good, I thought not. Now, what would happen if we “multiplied out” this infinite product of infinite sums? Note that every term in the result would come from picking one term of the form p^{-ks} from each of the factors, one for each prime p, and multiplying them. (Though infinitely many of the choices have to be 1 = p^{-0s} if we are to get a finite term as a result.) For example, one way to choose terms would be

\displaystyle 1 \cdot 3^{-2s} \cdot 5^{-s} \cdot 1 \cdot 13^{-3s} \cdot 1 \cdot 1 \cdot \dots

which would give us (3^2 \cdot 5 \cdot 13)^{-s} = 585^{-s}. In fact, because of the Fundamental Theorem of Arithmetic (every positive integer has a unique prime factorization), each choice gives us the prime factorization of a different positive integer, and conversely, every positive integer shows up exactly once. That is, after multiplying everything out, we get one term of the form n^{-s} for each positive integer n:

\displaystyle \prod_p \frac{1}{1 - p^{-s}} = \sum_{n \geq 1} n^{-s} = \sum_{n \geq 1} \frac{1}{n^s}

But that’s just our old friend \zeta(s)! So in fact,

\displaystyle \zeta(s) = \prod_p \frac{1}{1 - p^{-s}}

turns out to be an equivalent way to write the Riemann zeta function.
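We can watch this identity emerge numerically, too. Here’s a little Python sketch comparing a truncated version of the sum with a truncated version of the product at s = 2 (it assumes you have sympy available, just for its primerange):

```python
from sympy import primerange  # assumes sympy is installed

s = 2
zeta_sum = sum(1 / n ** s for n in range(1, 100001))  # truncated zeta sum

euler_prod = 1.0
for p in primerange(2, 100000):        # truncated Euler product
    euler_prod *= 1 / (1 - p ** (-s))

print(zeta_sum, euler_prod)  # both close to pi^2/6 ~ 1.6449...
```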

We can now use this in a really cute proof that there are infinitely many primes. Consider \zeta(1), where we substitute 1 for s. In our original definition of \zeta, we get

\displaystyle \zeta(1) = \sum_{n \geq 1} \frac{1}{n} = 1 + 1/2 + 1/3 + 1/4 + 1/5 + \dots

This is known as the harmonic series, and it is a well-known fact that it diverges, that is, as you keep adding up more and more terms of the series, the sum keeps getting bigger and bigger without bound. Put another way, pick any number you like—a hundred, a million, a trillion—and eventually, if you keep adding long enough, the sum 1 + 1/2 + 1/3 + 1/4 + 1/5 + \dots will become bigger than your chosen number. (Though you may have to wait a very long time—the harmonic series diverges rather slowly indeed!) One way to prove this is to note that the series 1 + 1/2 + 1/3 + 1/4 + 1/5 + \dots is greater than

\displaystyle 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + (1/16 + \dots) + \dots

(the original series is greater than this because we only made some of its terms smaller—I changed the 1/3 into 1/4, and then changed 1/5 through 1/7 into 1/8, and then 1/9 through 1/15 into 1/16, and so on). But this new, smaller series is equal to 1 + 1/2 + 1/2 + 1/2 + 1/2 + \dots, which will clearly get arbitrarily large. So the harmonic series, which is larger, must diverge as well.
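If you want to see just how slow this divergence is, here’s a small Python sketch that watches the partial sums crawl past successive whole numbers:

```python
# Watch partial sums of the harmonic series pass 1, 2, 3, ... (slowly!).
# Reaching 15 already takes well over a million terms.
target, total, n = 1, 0.0, 0
while target <= 15:
    n += 1
    total += 1 / n
    if total >= target:
        print(f"partial sum first exceeds {target} after {n} terms")
        target += 1
```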

So, \zeta(1) diverges. But what happens if we plug s = 1 into the other expression for \zeta(s)? We get

\displaystyle \prod_p \frac{1}{1 - p^{-1}} = \prod_p \frac{1}{1 - 1/p} = \prod_p \frac{p}{p-1}.

If there were only finitely many primes, this would be a finite product of some fractions and would thus have some definite, finite value—but we know it has to diverge! Thus there must be infinitely many primes.

I will note one other thing—when I was writing up some notes for this post I was initially confused by the fact that if we set, say, s = 2, we already know that \zeta(2) = \pi^2/6; but now we also know that

\displaystyle \zeta(2) = \prod_p \frac{p^2}{p^2 - 1} = \frac{4}{3} \cdot \frac{9}{8} \cdot \frac{25}{24} \cdot \frac{49}{48} \cdot \frac{121}{120} \dots

But this is an infinite product of fractions which are all bigger than 1! How could it converge? …well, my intuition was just playing tricks with me. Although I have lots of practice thinking about infinite sums that converge, I am just not used to thinking about infinite products that converge. But in the end it is not really any more surprising than the fact that an infinite sum can converge even though all its terms are positive: as long as the fractions are getting smaller quickly enough, such an infinite product certainly can converge, and in fact it does. Using a computer confirms that the more terms of this product we include, the closer the product gets to \pi^2 / 6 \approx 1.64493\dots
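In case you want to try the same experiment, here’s the sort of quick check I mean (again leaning on sympy’s primerange, which is an assumption about your setup):

```python
from math import pi
from sympy import primerange  # assumes sympy is installed

# Multiply p^2/(p^2 - 1) over more and more primes; the product creeps
# up toward pi^2/6 even though every factor is bigger than 1.
prod = 1.0
for p in primerange(2, 100000):
    prod *= p * p / (p * p - 1)

print(prod)        # approximately 1.64493...
print(pi ** 2 / 6) # approximately 1.64493...
```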


Games with factorization diagram cards

Since I published a deck of factorization diagram cards last September, a few teachers have picked up copies of the cards and started using them with their students. I’ve started collecting ideas for games you can play using the cards, and want to share here a few game ideas from Alex Ford who teaches middle school in St. Paul, Minnesota.

If you want to get your own set you can buy one here! Also, if you have any other ideas for games or activities using the cards, please send them my way.

War

First, you can play a classic game of War. The twist is that while playing you should only look at the diagram side of the cards, not the side with the written-out number. So part of the game is figuring out which factorization diagram represents a bigger number. One could of course just work out what each number is and then compare, but I imagine students may also find tricks they can use to decide which is bigger without fully working out what the numbers are.

Variant 1: primes are wild, that is, primes always beat composite numbers. (If you have two primes or two composite numbers, then the higher one beats the lower one as usual.) This may actually make the game a bit easier, since when a prime is played you don’t actually need to work out the value of any composite number played in opposition to it.

Variant 2: like variant 1, except that primes only beat those composite numbers which don’t have them as a factor. For example, 5 beats 24, but 5 loses to 30: since 30 has 5 as a prime factor it is “immune” to 5.

As a fun follow-on activity to variant 2, try listing the cards in order according to which beats which!1

Set

Alex and his students came up with a fun variant on SET. Start by dealing out twelve factorization cards, diagram-side-up. Like the usual SET game, the aim is to find and claim sets of three cards. The difference is in how sets are defined. A “set” of factorization cards is any set of three cards that either

  1. Share no prime factors in common (that is, any given prime occurs on at most one of the cards), or
  2. Share all their prime factors in common (each prime that appears on any of the cards must appear on all three).

Here are a few examples of valid sets:

And here are a few invalid sets:

In order to claim a set you have to state the number on each card and explain why they form a set. If you are correct, remove the cards and deal three new cards. If you are incorrect, keep looking!

Alex and his students found that, just as with the classic SET game, it is possible to have a layout of twelve cards containing no set. For example, here’s the layout they found:

Just to double-check, I confirmed with a computer program that the above layout indeed contains no valid sets. As with the usual SET, if you find yourself in a situation where everyone agrees there are no sets, you can just deal out three more cards.
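If you’d like to replicate that check, here is a sketch of the sort of program I used. (The helper names and the twelve-card layout below are made up for illustration—the actual no-set layout is the one shown in the picture above. With this particular made-up layout the program does find some sets.)

```python
from itertools import combinations

def prime_factors(n):
    """Return the set of distinct prime factors of n."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def is_valid_set(a, b, c):
    fa, fb, fc = prime_factors(a), prime_factors(b), prime_factors(c)
    disjoint  = not (fa & fb or fa & fc or fb & fc)  # no prime shared
    identical = fa == fb == fc                       # same primes on all three
    return disjoint or identical

layout = [4, 6, 9, 10, 15, 21, 22, 25, 26, 27, 28, 30]  # hypothetical layout
print([t for t in combinations(layout, 3) if is_valid_set(*t)])
# e.g. (4, 9, 25) is a valid set here: no prime appears on two of the cards.
```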

The natural follow-up question is: what’s the largest possible layout with no sets? So far, this is an open question!


  1. 😉


The MacLaurin series for sin(x)

In my previous post I said “recall the MacLaurin series for \sin x:”

\displaystyle \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

Since someone asked in a comment, I thought it was worth mentioning where this comes from. It would typically be covered in a second-semester calculus class, but it’s possible to understand the idea with only a very basic knowledge of derivatives.

First, recall the derivatives \sin'(x) = \cos(x) and \cos'(x) = -\sin(x). Continuing, this means that the third derivative of \sin(x) is -\cos(x), and the derivative of that is \sin(x) again. So the derivatives of \sin(x) repeat in a cycle of length 4.

Now, suppose that an infinite series representation for \sin(x) exists (it’s not at all clear, a priori, that it should, but we’ll come back to that). That is, something of the form

\displaystyle \sin(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + \dots

What could this possibly look like? We can use what we know about \sin(x) and its derivatives to figure out that there is only one possible infinite series that could work.

First of all, we know that \sin(0) = 0. When we plug x=0 into the above infinite series, all the terms with x in them become zero, leaving only a_0, so a_0 must be 0.

Now if we take the first derivative of the supposed infinite series for \sin(x), we get

\displaystyle a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \dots

We know the derivative of \sin(x) is \cos(x), and \cos(0) = 1: hence, using similar reasoning as before, we must have a_1 = 1. So far, we have

\displaystyle \sin(x) = x + a_2x^2 + a_3x^3 + \dots

Now, the second derivative of \sin(x) is -\sin(x). If we take the second derivative of this supposed series for \sin(x), we get

\displaystyle 2a_2 + (3 \cdot 2)a_3 x + (4 \cdot 3)a_4 x^2 + \dots

Again, since this should be -\sin(x), if we substitute x = 0 we ought to get zero, so a_2 must be zero.

Taking the derivative a third time yields

\displaystyle (3 \cdot 2) a_3 + (4 \cdot 3 \cdot 2)a_4 x + (5 \cdot 4 \cdot 3) a_5 x^2 + \dots

and this is supposed to be -\cos(x), so substituting x = 0 ought to give us -1: in order for that to happen we need (3 \cdot 2)a_3 = -1, and hence a_3 = -1/6.

To sum up, so far we have discovered that

\displaystyle \sin(x) = x - \frac{x^3}{6} + a_4x^4 + a_5x^5 + \dots

Do you see the pattern? When we take the nth derivative, the constant term is going to end up being n! \cdot a_n (because it started out as a_n x^n and then went through n successive derivative operations before the x term disappeared: a_n x^n \to n a_n x^{n-1} \to (n \cdot (n-1)) a_n x^{n-2} \to \dots \to n! \cdot a_n). If n is even, the nth derivative will be \pm \sin(x), and so the constant term should be zero; hence all the even coefficients will be zero. If n is odd, the nth derivative will be \pm \cos(x), and so the constant term should be \pm 1: hence n! \cdot a_n = \pm 1, so a_n = \pm 1/n!, with the signs alternating back and forth. And this produces exactly what I claimed to be the expansion for \sin x:

\displaystyle \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

Using some other techniques from calculus, we can prove that this infinite series does in fact converge to \sin x, so even though we started with the potentially bogus assumption that such a series exists, once we have found it we can prove that it is in fact a valid representation of \sin x. It turns out that this same process can be performed to turn almost any function into an infinite series, which is called the Taylor series for the function (a MacLaurin series is a special case of a Taylor series). For example, you might like to try figuring out the Taylor series for \cos x, or for e^x (using the fact that e^x is its own derivative).
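If you’d like to see the series in action, here’s a small Python sketch comparing partial sums of the series with the built-in sine function:

```python
from math import factorial, sin

def sin_series(x, terms=10):
    """Partial sum of x - x^3/3! + x^5/5! - ... with the given number of terms."""
    return sum(
        (-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
        for k in range(terms)
    )

for x in [0.5, 1.0, 2.0, 3.0]:
    print(x, sin_series(x), sin(x))  # the partial sums track sin(x) closely
```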


The Basel problem

I wanted to follow up on something I mentioned in my previous post: I claimed that

\displaystyle \zeta(2) = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \dots = \frac{\pi^2}{6}.

At the time I didn’t know how to prove this, but I did some quick research and today I’m going to explain it! It turns out that determining the value of this infinite sum was a famous open question from the mid-1600s until it was solved by Leonhard Euler in 1734. It is now known as the Basel problem (it’s not clear to me whether it was called that when Euler solved it). Since then, there have been many different proofs using all sorts of techniques, but I think Euler’s original proof is still the easiest to follow (though it turns out to implicitly rely on some not-so-obvious assumptions, so a completely formal proof is still quite tricky). I learned about this proof from some slides by Brendan Sullivan and an accompanying document.

First, recall the MacLaurin series for \sin x:

\displaystyle \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

This infinite sum continues forever with successive odd powers of x, alternating between positive and negative. (If you’ve never seen this before, you can take my word for it I suppose; if anyone asks in a comment I would be happy to write another post explaining where this comes from.)

If we substitute \pi x for x we get

\displaystyle \sin(\pi x) = \pi x - \frac{(\pi x)^3}{3!} + \frac{(\pi x)^5}{5!} - \frac{(\pi x)^7}{7!} + \dots

Note that the coefficient of x^3 is -\pi^3 / 3! = -\pi^3/6. Remember that—it will return later!

Now, recall that for finite polynomials, the Fundamental Theorem of Algebra tells us that we can always factor them into a product of linear factors, one for each root (technically, this is only true if we allow for complex roots, though we won’t need that fact here). For example, consider the polynomial

\displaystyle 2x^3 - 3x^2 - 11x + 6.

It turns out that this has zeros at x = 3, -2, and 1/2, as you can verify by plugging in those values for x. By the Fundamental Theorem, this means it must be possible to factor this polynomial as

\displaystyle 2(x-3)(x+2)(x-1/2).

Note how each factor corresponds to one of the roots: when x = 3, the factor (x-3) is zero, making the whole product zero; when x = -2, the factor (x+2) becomes zero; and so on. We also had to put in a constant multiple of 2, to make sure the coefficient of x^3 is correct.
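(You can multiply the factors back out to check this, or let a computer algebra system do it. A quick sketch, assuming you have sympy:)

```python
from sympy import symbols, expand, Rational

# Expand the factored form and confirm it matches the original polynomial.
x = symbols('x')
factored = 2 * (x - 3) * (x + 2) * (x - Rational(1, 2))
print(expand(factored))  # 2*x**3 - 3*x**2 - 11*x + 6
```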

So, we can always factorize finite polynomials in this way. Can we do something similar for infinite polynomials, like the MacLaurin series for \sin(\pi x)? Euler guessed so. It turns out the answer is “yes, under certain conditions”, but this is not at all obvious. This is known as the Weierstrass factorization theorem, but I won’t get into the details. You can just take it on faith that it works in this case, so we can “factorize” the MacLaurin series for \sin(\pi x), getting one linear factor for each root, that is, for each integer value of x:

\displaystyle \sin(\pi x) = \pi x (1 - x)(1 + x)\left(1 - \frac{x}{2} \right) \left(1 + \frac{x}{2} \right) \left(1 - \frac{x}{3}\right) \left(1 + \frac{x}{3}\right) \dots

For example, x = 3 makes the (1 - x/3) term zero, and in general x = n will make the (1 - x/n) term zero. Note how we also included a factor of x, corresponding to the root at x = 0. We also have to include a constant factor of \pi: this means that the coefficient of x^1 in the resulting sum (obtained by multiplying the leading \pi x by all the copies of 1) will be \pi, as it should be.

Now, since (a-b)(a+b) = a^2 - b^2 we can simplify this as

\displaystyle \sin(\pi x) = \pi x (1 - x^2) \left(1 - \frac{x^2}{4} \right) \left( 1 - \frac{x^2}{9} \right) \dots

Let’s think about what the coefficient of x^3 will be once this infinite product is completely distributed out and like powers of x are collected. The only way to get an x^3 term is by multiplying the initial \pi x by a single term of the form -x^2/n^2, and then a whole bunch of 1’s. There is one way to do this for each possible n \geq 1. All told, then, we are going to have

\displaystyle \sin(\pi x) = \pi x - \pi x^3 \left(1 + \frac{1}{4} + \frac{1}{9} + \dots \right) + \dots

And now we’re almost done: recall that previously, by considering the MacLaurin series, we concluded that the coefficient of x^3 in \sin(\pi x) is -\pi^3 / 6. But looking at it a different way, we have now concluded that the coefficient is -\pi(1 + 1/4 + 1/9 + \dots). Setting these equal to each other, and dividing both sides by -\pi, we conclude that

\displaystyle \zeta(2) = 1 + \frac 1 4 + \frac 1 9 + \dots = -\frac{\pi^3}{6} \cdot \frac{1}{-\pi} = \frac{\pi^2}{6}.

Magic!
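And as a final sanity check, the partial sums of 1 + 1/4 + 1/9 + \dots do indeed creep up on \pi^2/6 (a quick numeric sketch):

```python
from math import pi

# Partial sum of 1/n^2 for n up to a million.
total = 0.0
for n in range(1, 1000001):
    total += 1 / (n * n)

print(total)       # approximately 1.6449331
print(pi ** 2 / 6) # approximately 1.6449341
```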
