The MacLaurin series for sin(x)

In my previous post I said “recall the MacLaurin series for \sin x:”

\displaystyle \sin x = x  - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

Since someone asked in a comment, I thought it was worth mentioning where this comes from. It would typically be covered in a second-semester calculus class, but it’s possible to understand the idea with only a very basic knowledge of derivatives.

First, recall the derivatives \sin'(x) = \cos(x) and \cos'(x) = -\sin(x). Continuing, this means that the third derivative of \sin(x) is -\cos(x), and the derivative of that is \sin(x) again. So the derivatives of \sin(x) repeat in a cycle of length 4.
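
In other words, the derivatives cycle like this, where each arrow means “take the derivative”:

\displaystyle \sin(x) \to \cos(x) \to -\sin(x) \to -\cos(x) \to \sin(x) \to \dots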

Now, suppose that an infinite series representation for \sin(x) exists (it’s not at all clear, a priori, that it should, but we’ll come back to that). That is, something of the form

\displaystyle \sin(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + \dots

What could this possibly look like? We can use what we know about \sin(x) and its derivatives to figure out that there is only one possible infinite series that could work.

First of all, we know that \sin(0) = 0. When we plug x = 0 into the above infinite series, every term with an x in it becomes zero, leaving only a_0: so a_0 must be 0.

Now if we take the first derivative of the supposed infinite series for \sin(x), we get

\displaystyle a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \dots

We know the derivative of \sin(x) is \cos(x), and \cos(0) = 1: hence, by the same reasoning as before, we must have a_1 = 1. So far, we have

\displaystyle \sin(x) = x + a_2x^2 + a_3x^3 + \dots

Now, the second derivative of \sin(x) is -\sin(x). If we take the second derivative of this supposed series for \sin(x), we get

\displaystyle 2a_2 + (3 \cdot 2)a_3 x + (4 \cdot 3)a_4 x^2 + \dots

Again, since this should be -\sin(x), if we substitute x = 0 we ought to get zero, so a_2 must be zero.

Taking the derivative a third time yields

\displaystyle (3 \cdot 2) a_3 + (4 \cdot 3 \cdot 2)a_4 x + (5 \cdot 4 \cdot 3) a_5 x^2 + \dots

and this is supposed to be -\cos(x), so substituting x = 0 ought to give us -1: in order for that to happen we need (3 \cdot 2)a_3 = -1, and hence a_3 = -1/6.

To sum up, so far we have discovered that

\displaystyle \sin(x) = x - \frac{x^3}{6} + a_4x^4 + a_5x^5 + \dots
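
Continuing in the same way: the fourth derivative of \sin(x) is \sin(x) again, and the constant term of the fourth derivative of the series is (4 \cdot 3 \cdot 2)a_4 = 4! \cdot a_4, so substituting x = 0 ought to give zero, and hence a_4 = 0. The fifth derivative of \sin(x) is \cos(x), and the constant term of the fifth derivative of the series is 5! \cdot a_5, so we need 5! \cdot a_5 = 1, that is,

\displaystyle a_5 = \frac{1}{5!} = \frac{1}{120}.

(Notice, too, that the -1/6 we found for a_3 is just -1/3!.)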

Do you see the pattern? When we take the nth derivative, the constant term is going to end up being n! \cdot a_n (because it started out as a_n x^n and then went through n successive derivative operations before the x term disappeared: a_n x^n \to n a_n x^{n-1} \to (n \cdot (n-1)) a_n x^{n-2} \to \dots \to n! \cdot a_n). If n is even, the nth derivative will be \pm \sin(x), and so the constant term should be zero; hence all the even coefficients will be zero. If n is odd, the nth derivative will be \pm \cos(x), and so the constant term should be \pm 1: hence n! \cdot a_n = \pm 1, so a_n = \pm 1/n!, with the signs alternating back and forth. And this produces exactly what I claimed to be the expansion for \sin x:

\displaystyle \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

Using some other techniques from calculus, we can prove that this infinite series really does converge to \sin x. So even though we started with the potentially bogus assumption that such a series exists, once we have found it we can prove that it is indeed a valid representation of \sin x. It turns out that this same process can be carried out for any function with enough derivatives, and the resulting infinite series is called the Taylor series for the function (a MacLaurin series is just a Taylor series centered at 0). For example, you might like to try figuring out the Taylor series for \cos x, or for e^x (using the fact that e^x is its own derivative).
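
And if you'd like to check the series for \sin x numerically, here is a small Python sketch (just an illustration, with the number of terms chosen arbitrarily) that adds up the first several terms and compares the result with math.sin:

import math

def sin_series(x, terms=10):
    # Partial sum of x - x^3/3! + x^5/5! - x^7/7! + ...
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1  # only odd powers appear
        total += (-1) ** k * x ** n / math.factorial(n)
    return total

for x in [0.5, 1.0, 2.0]:
    print(x, sin_series(x), math.sin(x))

For inputs like these, even ten terms already agree with math.sin to many decimal places, which is a nice sanity check (though of course not a substitute for the convergence proof).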

About Brent

Associate Professor of Computer Science at Hendrix College. Functional programmer, mathematician, teacher, pianist, follower of Jesus.
This entry was posted in calculus, infinity, iteration. Bookmark the permalink.

4 Responses to The MacLaurin series for sin(x)

  1. Max says:

    > For example, you might like to try figuring out the Taylor series for \cos x, or for e^x (using the fact that e^x is its own derivative).

    And if you know that e^{ix} = \cos x + i\sin x you only need to do one of them, and can use this equation to find the other.

    • Brent says:

      Yes! I really enjoy showing the relationship between e^{ix} and the Taylor series for e^x, \sin x, and \cos x. Maybe that will have to be another post. =)

  2. Ajmain Yamin Yamin says:

    I have to admit, that’s pretty cool. I’ve always been afraid of these infinite series expansions called “Taylor” and “MacLaurin” because they seemed too complicated to be intuitive, but reading this changed my perspective.

    • Brent says:

      Thanks for your comment! I am really glad to hear it. Indeed, the basic idea of Taylor/MacLaurin series is really not too hard — though they do go rather deep. But maybe armed with this new intuition you can try reading more about them and see what you can understand!
