In my previous post I said “recall the MacLaurin series for $\sin x$:”

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots$$
Since someone asked in a comment, I thought it was worth mentioning where this comes from. It would typically be covered in a second-semester calculus class, but it’s possible to understand the idea with only a very basic knowledge of derivatives.

First, recall the derivatives $\frac{d}{dx} \sin x = \cos x$ and $\frac{d}{dx} \cos x = -\sin x$. Continuing, this means that the third derivative of $\sin x$ is $-\cos x$, and the derivative of that is $\sin x$ again. So the derivatives of $\sin x$ repeat in a cycle of length 4.
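If you like, you can check this cycle numerically. The sketch below relies on the identity that the $n$th derivative of $\sin x$ is $\sin(x + n\pi/2)$, and confirms that stepping through derivatives walks through $\sin$, $\cos$, $-\sin$, $-\cos$ and then repeats:

```python
import math

# The n-th derivative of sin x is sin(x + n*pi/2), so the derivatives
# should cycle through sin, cos, -sin, -cos with period 4.
cycle = [math.sin, math.cos,
         lambda x: -math.sin(x), lambda x: -math.cos(x)]

x = 0.7  # an arbitrary test point
for n in range(8):
    nth_derivative = math.sin(x + n * math.pi / 2)
    assert abs(nth_derivative - cycle[n % 4](x)) < 1e-12
```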

Now, suppose that an infinite series representation for $\sin x$ exists (it’s not at all clear, *a priori*, that it should, but we’ll come back to that). That is, something of the form

$$\sin x = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \dots$$
What could this possibly look like? We can use what we know about $\sin x$ and its derivatives to figure out that there is only one possible infinite series that could work.

First of all, we know that $\sin 0 = 0$. When we plug $x = 0$ into the above infinite series, all the terms with $x$ in them cancel out, leaving only $a_0$: so $a_0$ must be $0$.

Now if we take the first derivative of the supposed infinite series for $\sin x$, we get

$$a_1 + 2 a_2 x + 3 a_3 x^2 + 4 a_4 x^3 + \dots$$
We know the derivative of $\sin x$ is $\cos x$, and $\cos 0 = 1$: hence, using similar reasoning as before, we must have $a_1 = 1$. So far, we have

$$\sin x = x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \dots$$
Now, the second derivative of $\sin x$ is $-\sin x$. If we take the second derivative of this supposed series for $\sin x$, we get

$$2 a_2 + 3 \cdot 2 \, a_3 x + 4 \cdot 3 \, a_4 x^2 + \dots$$
Again, since this should be $-\sin x$, if we substitute $x = 0$ we ought to get zero, so $a_2$ must be zero.

Taking the derivative a third time yields

$$3 \cdot 2 \, a_3 + 4 \cdot 3 \cdot 2 \, a_4 x + \dots$$
and this is supposed to be $-\cos x$, so substituting $x = 0$ ought to give us $-1$: in order for that to happen we need $3 \cdot 2 \cdot a_3 = -1$, and hence $a_3 = -\frac{1}{3!}$.

To sum up, so far we have discovered that

$$\sin x = x - \frac{x^3}{3!} + a_4 x^4 + a_5 x^5 + \dots$$
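As a quick numeric sanity check of the coefficients derived so far, here is a sketch that computes each $a_n$ as the $n$th derivative of $\sin$ at $0$ divided by $n!$, using the identity that the $n$th derivative of $\sin x$ is $\sin(x + n\pi/2)$:

```python
import math

# Each coefficient a_n equals the n-th derivative of sin at 0, divided
# by n!.  The n-th derivative of sin x is sin(x + n*pi/2), so:
def coefficient(n):
    # round() cleans up floating-point noise like sin(pi) = 1.2e-16
    return round(math.sin(n * math.pi / 2)) / math.factorial(n)

# a_0 = 0, a_1 = 1, a_2 = 0, a_3 = -1/3! = -1/6, as derived above
assert [coefficient(n) for n in range(4)] == [0, 1, 0, -1/6]
```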
Do you see the pattern? When we take the $n$th derivative, the constant term is going to end up being $n! \, a_n$ (because it started out as $a_n x^n$ and then went through $n$ successive derivative operations before the $x$ disappeared: $n (n-1) (n-2) \cdots 2 \cdot 1 \cdot a_n = n! \, a_n$). If $n$ is even, the $n$th derivative will be $\pm \sin x$, and so the constant term should be zero; hence all the even coefficients will be zero. If $n$ is odd, the $n$th derivative will be $\pm \cos x$, and so the constant term should be $\pm 1$: hence $n! \, a_n = \pm 1$, so $a_n = \pm \frac{1}{n!}$, with the signs alternating back and forth. And this produces exactly what I claimed to be the expansion for $\sin x$:

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots$$
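It's easy to watch this series in action: a short sketch that sums the first several terms and compares against the library `sin` shows just how quickly the partial sums home in on the right value.

```python
import math

def sin_series(x, terms=10):
    """Partial sum of x - x^3/3! + x^5/5! - ... with `terms` terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Even ten terms matches math.sin very closely for modest x.
for x in [0.0, 0.5, 1.0, 2.0, 3.0]:
    assert abs(sin_series(x) - math.sin(x)) < 1e-9
```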
Using some other techniques from calculus, we can prove that this infinite series does in fact converge to $\sin x$, so even though we started with the potentially bogus assumption that such a series exists, once we have found it we can prove that it is in fact a valid representation of $\sin x$. It turns out that this same process can be performed to turn almost any function into an infinite series, which is called the *Taylor series* for the function (a *MacLaurin series* is a special case of a Taylor series). For example, you might like to try figuring out the Taylor series for $\cos x$, or for $e^x$ (using the fact that $e^x$ is its own derivative).
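(A spoiler of sorts for the second exercise: since $e^x$ is its own derivative and $e^0 = 1$, the same coefficient-matching argument gives $a_n = \frac{1}{n!}$ for every $n$. A quick numeric check:)

```python
import math

def exp_series(x, terms=20):
    # e^x = 1 + x + x^2/2! + x^3/3! + ...; every coefficient is 1/n!
    # because every derivative of e^x is e^x, which is 1 at x = 0.
    return sum(x ** n / math.factorial(n) for n in range(terms))

# Twenty terms already agree with e = e^1 to machine precision.
assert abs(exp_series(1.0) - math.e) < 1e-12
```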

> For example, you might like to try figuring out the Taylor series for $\cos x$, or for $e^x$ (using the fact that $e^x$ is its own derivative).

And if you know that $e^{ix} = \cos x + i \sin x$, you only need to do one of them, and can use this equation to find the other.

Yes! I really enjoy showing the relationship between $e^{ix} = \cos x + i \sin x$ and the Taylor series for $e^x$, $\sin x$, and $\cos x$. Maybe that will have to be another post. =)

I have to admit, that’s pretty cool. I’ve always been afraid of these infinite series expansions called “Taylor” and “MacLaurin” because they seemed too complicated to be intuitive, but reading this changed my perspective.

Thanks for your comment! I am really glad to hear it. Indeed, the basic idea of Taylor/MacLaurin series is really not too hard — though they do go rather deep. But maybe armed with this new intuition you can try reading more about them and see what you can understand!