In my previous post I said “recall the MacLaurin series for $\sin x$:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$”
Since someone asked in a comment, I thought it was worth mentioning where this comes from. It would typically be covered in a second-semester calculus class, but it’s possible to understand the idea with only a very basic knowledge of derivatives.
First, recall the derivatives $\frac{d}{dx} \sin x = \cos x$ and $\frac{d}{dx} \cos x = -\sin x$. Continuing, this means that the third derivative of $\sin x$ is $-\cos x$, and the derivative of that is $\sin x$ again. So the derivatives of $\sin x$ repeat in a cycle of length 4.
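This cycle of four is easy to check numerically. Here is a minimal sketch in Python (my own, not from the original post) that approximates each derivative with a central difference and confirms it matches the next function in the cycle:

```python
import math

# The derivative cycle of sin: sin -> cos -> -sin -> -cos -> sin -> ...
cycle = [math.sin, math.cos,
         lambda t: -math.sin(t), lambda t: -math.cos(t)]

x, h = 0.7, 1e-6
for k in range(4):
    f, f_prime = cycle[k], cycle[(k + 1) % 4]
    # central-difference approximation of f'(x)
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - f_prime(x)) < 1e-8
```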
Now, suppose that an infinite series representation for $\sin x$ exists (it’s not at all clear, a priori, that it should, but we’ll come back to that). That is, something of the form

$$\sin x = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \cdots$$
What could this possibly look like? We can use what we know about $\sin x$ and its derivatives to figure out that there is only one possible infinite series that could work.
First of all, we know that $\sin 0 = 0$. When we plug $x = 0$ into the above infinite series, all the terms with $x$ in them cancel out, leaving only $a_0$: so $a_0$ must be $0$.
Now if we take the first derivative of the supposed infinite series for $\sin x$, we get

$$a_1 + 2 a_2 x + 3 a_3 x^2 + 4 a_4 x^3 + \cdots$$
We know the derivative of $\sin x$ is $\cos x$, and $\cos 0 = 1$: hence, using similar reasoning as before, we must have $a_1 = 1$. So far, we have

$$\sin x = x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \cdots$$
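Already this partial information is useful: it says $\sin x \approx x$ for small $x$, the familiar small-angle approximation. A quick numerical sanity check (a sketch of mine, not from the post):

```python
import math

# With only the terms found so far, sin x ≈ x; the first
# neglected term turns out to be of order x^3.
for x in [0.1, 0.01, 0.001]:
    assert abs(math.sin(x) - x) < x**3
```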
Now, the second derivative of $\sin x$ is $-\sin x$. If we take the second derivative of this supposed series for $\sin x$, we get

$$2 a_2 + 3 \cdot 2\, a_3 x + 4 \cdot 3\, a_4 x^2 + \cdots$$
Again, since this should be $-\sin x$, if we substitute $x = 0$ we ought to get zero, so $a_2$ must be zero.
Taking the derivative a third time yields

$$3 \cdot 2 \cdot 1\, a_3 + 4 \cdot 3 \cdot 2\, a_4 x + \cdots$$
and this is supposed to be $-\cos x$, so substituting $x = 0$ ought to give us $-1$: in order for that to happen we need $3 \cdot 2 \cdot 1\, a_3 = -1$, and hence $a_3 = -1/3! = -1/6$.
To sum up, so far we have discovered that

$$\sin x = x - \frac{x^3}{3!} + a_4 x^4 + a_5 x^5 + \cdots$$
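We can already check numerically that the coefficients found so far are pulling the approximation toward $\sin x$. In this sketch (mine, not from the post), the error of the partial sum $x - x^3/3!$ is bounded by the size of the next term in the series:

```python
import math

for x in [0.5, 0.1, 0.01]:
    partial = x - x**3 / math.factorial(3)   # the series so far
    # the next nonzero term turns out to be x^5/5!,
    # so the error should be roughly that small
    assert abs(math.sin(x) - partial) < x**5
```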
Do you see the pattern? When we take the $n$th derivative, the constant term is going to end up being $n!\, a_n$ (because it started out as $a_n x^n$ and then went through $n$ successive derivative operations before the $x$ disappeared: $n \cdot (n-1) \cdots 2 \cdot 1 \cdot a_n = n!\, a_n$). If $n$ is even, the $n$th derivative will be $\pm \sin x$, and so the constant term should be zero; hence all the even coefficients will be zero. If $n$ is odd, the $n$th derivative will be $\pm \cos x$, and so the constant term should be $\pm 1$: hence $n!\, a_n = \pm 1$, so $a_n = \pm 1/n!$, with the signs alternating back and forth. And this produces exactly what I claimed to be the expansion for $\sin x$:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
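The whole pattern fits in a few lines of code. The sketch below (my own, not from the post) reads off each coefficient as the $n$th derivative of $\sin$ at $0$, which cycles through $0, 1, 0, -1$, divided by $n!$:

```python
import math

# n-th derivative of sin, evaluated at 0, cycles through 0, 1, 0, -1
DERIV_AT_ZERO = [0, 1, 0, -1]

def maclaurin_sin(x, n_terms=40):
    """Partial sum of the MacLaurin series: a_n = (n-th deriv at 0) / n!."""
    return sum(DERIV_AT_ZERO[n % 4] * x**n / math.factorial(n)
               for n in range(n_terms))

assert abs(maclaurin_sin(1.0) - math.sin(1.0)) < 1e-12
```

With 40 terms the partial sum agrees with `math.sin` to near machine precision for moderate $x$, a preview of the convergence claim below.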
Using some other techniques from calculus, we can prove that this infinite series does in fact converge to $\sin x$, so even though we started with the potentially bogus assumption that such a series exists, once we have found it we can prove that it is in fact a valid representation of $\sin x$. It turns out that this same process can be performed to turn almost any function into an infinite series, which is called the Taylor series for the function (a MacLaurin series is the special case of a Taylor series centered at $x = 0$). For example, you might like to try figuring out the Taylor series for $\cos x$, or for $e^x$ (using the fact that $e^x$ is its own derivative).
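For instance, since $e^x$ is its own derivative, every derivative at $0$ equals $e^0 = 1$, so every coefficient is simply $1/n!$. A quick sketch along the same lines as before (again mine, not from the post):

```python
import math

def maclaurin_exp(x, n_terms=30):
    # every derivative of e^x at 0 is 1, so a_n = 1/n!
    return sum(x**n / math.factorial(n) for n in range(n_terms))

assert abs(maclaurin_exp(1.0) - math.e) < 1e-12
```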