I wanted to follow up on something I mentioned in my previous post: I claimed that

$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.$$
At the time I didn’t know how to prove this, but I did some quick research and today I’m going to explain it! It turns out that determining the value of this infinite sum was a famous open question from the mid-1600s until it was solved by Leonhard Euler in 1734. It is now known as the Basel problem (it’s not clear to me whether it was called that when Euler solved it). Since then, there have been many different proofs using all sorts of techniques, but I think Euler’s original proof is still the easiest to follow (though it turns out to implicitly rely on some not-so-obvious assumptions, so a completely formal proof is still quite tricky). I learned about this proof from some slides by Brendan Sullivan and an accompanying document.
First, recall the Maclaurin series for $\sin x$:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots$$
This infinite sum continues forever with successive odd powers of $x$, alternating between positive and negative. (If you’ve never seen this before, you can take my word for it, I suppose; if anyone asks in a comment I would be happy to write another post explaining where this comes from.)
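We can sanity-check this series numerically. Here is a quick Python sketch (the function name `maclaurin_sin` is mine, just for illustration):

```python
import math

def maclaurin_sin(x, terms):
    """Sum the first `terms` terms of the Maclaurin series
    x - x^3/3! + x^5/5! - ...  (the k-th term has degree 2k+1)."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# With only ten terms the series already matches math.sin
# to well beyond ordinary floating-point accuracy:
print(maclaurin_sin(1.0, 10))
print(math.sin(1.0))
```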
If we substitute $\pi x$ for $x$, we get

$$\sin \pi x = \pi x - \frac{(\pi x)^3}{3!} + \frac{(\pi x)^5}{5!} - \dots$$
Note that the coefficient of $x^3$ is $-\pi^3/3! = -\pi^3/6$. Remember that—it will return later!
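This coefficient can also be checked numerically: for small $x$, the higher-order terms become negligible, so $(\sin \pi x - \pi x)/x^3$ should approach $-\pi^3/6 \approx -5.1677$. A quick sketch:

```python
import math

# As x -> 0, the series says sin(pi x) ≈ pi x - (pi^3/6) x^3, so
# (sin(pi x) - pi x) / x^3 should approach -pi^3/6.
for x in [0.1, 0.01, 0.001]:
    print(x, (math.sin(math.pi * x) - math.pi * x) / x**3)
print("target:", -math.pi**3 / 6)
```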
Now, recall that for finite polynomials, the Fundamental Theorem of Algebra tells us that we can always factor them into a product of linear factors, one for each root (technically, this is only true if we allow for complex roots, though we won’t need that fact here). For example, consider the polynomial

$$2x^3 - 4x^2 - 2x + 4.$$

It turns out that this has zeros at $x = -1$, $x = 1$, and $x = 2$, as you can verify by plugging in those values for $x$. By the Fundamental Theorem, this means it must be possible to factor this polynomial as

$$2x^3 - 4x^2 - 2x + 4 = 2(x + 1)(x - 1)(x - 2).$$

Note how each factor corresponds to one of the roots: when $x = -1$, then $(x + 1)$ is zero, making the whole product zero; when $x = 1$, the $(x - 1)$ becomes zero, and so on. We also had to put in a constant multiple of $2$, to make sure the coefficient of $x^3$ is correct.
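We can spot-check a factorization like this numerically. Here is a quick Python sketch using the example cubic $2x^3 - 4x^2 - 2x + 4 = 2(x+1)(x-1)(x-2)$:

```python
def expanded(x):
    # The cubic in its expanded form
    return 2 * x**3 - 4 * x**2 - 2 * x + 4

def factored(x):
    # The same cubic as a product of linear factors
    return 2 * (x + 1) * (x - 1) * (x - 2)

for x in [-1, 1, 2]:
    # Each root sends one factor (and hence the whole product) to zero
    assert expanded(x) == 0 and factored(x) == 0

for x in [-2.5, 0.0, 0.5, 3.7]:
    # And the two forms agree everywhere else, not just at the roots
    assert abs(expanded(x) - factored(x)) < 1e-9

print("ok")
```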
So, we can always factorize finite polynomials in this way. Can we do something similar for infinite polynomials, like the Maclaurin series for $\sin \pi x$? Euler guessed so. It turns out the answer is “yes, under certain conditions”, but this is not at all obvious. This is known as the Weierstrass factorization theorem, but I won’t get into the details. You can just take it on faith that it works in this case, so we can “factorize” the Maclaurin series for $\sin \pi x$, getting one linear factor for each root, that is, for each integer value of $x$:

$$\sin \pi x = \pi x (1 - x)(1 + x)\left(1 - \frac{x}{2}\right)\left(1 + \frac{x}{2}\right)\left(1 - \frac{x}{3}\right)\left(1 + \frac{x}{3}\right) \dots$$
For example, $x = 1$ makes the $(1 - x)$ term zero, and in general $x = \pm n$ will make the $(1 \mp x/n)$ term zero. Note how we also included a factor of $x$, corresponding to the root at $x = 0$. We also have to include a constant factor of $\pi$: this means that the coefficient of $x$ in the resulting sum (obtained by multiplying the leading $\pi x$ by all the copies of $1$) will be $\pi$, as it should be.
Now, since $(1 - x/n)(1 + x/n) = 1 - x^2/n^2$, we can simplify this as

$$\sin \pi x = \pi x \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2}\right).$$
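Although we are taking the factorization on faith, we can at least watch it happen numerically: truncating the product after $N$ factors should give better and better approximations to $\sin \pi x$. A quick sketch (convergence is slow, roughly like $x^2/N$):

```python
import math

def sin_product(x, n_factors):
    """Truncation of Euler's product: pi*x times the first n_factors
    factors of the form (1 - x^2/n^2)."""
    prod = math.pi * x
    for n in range(1, n_factors + 1):
        prod *= 1 - x**2 / n**2
    return prod

x = 0.3
for n in [10, 100, 1000, 10000]:
    print(n, sin_product(x, n))
print("target:", math.sin(math.pi * x))
```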
Let’s think about what the coefficient of $x^3$ will be once this infinite product is completely distributed out and like powers of $x$ are collected. The only way to get an $x^3$ term is by multiplying the initial $\pi x$ by a single term of the form $-x^2/n^2$, and then a whole bunch of $1$’s. There is one way to do this for each possible $n$. All told, then, we are going to have

$$-\pi \left(1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots\right) x^3 = -\pi \left(\sum_{n=1}^{\infty} \frac{1}{n^2}\right) x^3.$$
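To see this coefficient emerge concretely, we can expand a truncated version of the product by brute force and check that its $x^3$ coefficient really is $-\pi$ times the corresponding partial sum. A Python sketch (the `poly_mul` helper is a hand-rolled coefficient-list multiplier, not a library function):

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

N = 1000
poly = [0.0, math.pi]                               # start with pi*x
for n in range(1, N + 1):
    poly = poly_mul(poly, [1.0, 0.0, -1.0 / n**2])  # times (1 - x^2/n^2)
    poly = poly[:6]                                 # low-degree terms suffice

coeff_x3 = poly[3]
predicted = -math.pi * sum(1.0 / n**2 for n in range(1, N + 1))
print(coeff_x3)   # these two should agree,
print(predicted)  # and both are close to -pi^3/6
```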
And now we’re almost done: recall that previously, by considering the Maclaurin series, we concluded that the coefficient of $x^3$ in $\sin \pi x$ is $-\pi^3/6$. But looking at it a different way, we have now concluded that the coefficient is $-\pi \sum_{n=1}^{\infty} 1/n^2$. Setting these equal to each other, and dividing both sides by $-\pi$, we conclude that

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.$$
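And indeed, the partial sums do creep up toward $\pi^2/6 \approx 1.6449$, as a final numerical check confirms (the leftover error shrinks roughly like $1/N$):

```python
import math

target = math.pi**2 / 6
for N in [10, 1000, 100000]:
    s = sum(1 / n**2 for n in range(1, N + 1))
    print(N, s, target - s)
```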