Games with factorization diagram cards

Since I published a deck of factorization diagram cards last September, a few teachers have picked up copies of the cards and started using them with their students. I’ve started collecting ideas for games you can play using the cards, and want to share here a few game ideas from Alex Ford who teaches middle school in St. Paul, Minnesota.

If you want to get your own set you can buy one here! Also, if you have any other ideas for games or activities using the cards, please send them my way.

War

First, you can play a classic game of War. The twist is that while playing you should only look at the diagram side of the cards, not the side with the written-out number. So part of the game is figuring out which factorization diagram represents a bigger number. One could of course just work out what each number is and then compare, but I imagine students may also find tricks they can use to decide which is bigger without fully working out what the numbers are.

Variant 1: primes are wild, that is, primes always beat composite numbers. (If you have two primes or two composite numbers, then the higher one beats the lower one as usual.) This may actually make the game a bit easier, since when a prime is played you don’t actually need to work out the value of any composite number played in opposition to it.

Variant 2: like variant 1, except that primes only beat those composite numbers which don’t have them as a factor. For example, 5 beats 24, but 5 loses to 30: since 30 has 5 as a prime factor it is “immune” to 5.
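If it helps to make the variant 2 rule precise, here is one way to encode it in Python (my own formulation, not anything official from the deck; `beats` and `is_prime` are names I made up, and trial division is plenty fast for card-sized numbers):

```python
def is_prime(n):
    """Trial-division primality test; fine for small card values."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def beats(a, b):
    """True if card a beats card b under Variant 2."""
    if is_prime(a) and not is_prime(b):
        return b % a != 0      # a prime loses to composites it divides
    if is_prime(b) and not is_prime(a):
        return a % b == 0      # a composite only survives a prime if "immune"
    return a > b               # two primes or two composites: higher wins

assert beats(5, 24)        # 5 beats 24
assert not beats(5, 30)    # but 30 is immune to 5
```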

As a fun follow-on activity to variant 2, try listing the cards in order according to which beats which!1

Set

Alex and his students came up with a fun variant on SET. Start by dealing out twelve factorization cards, diagram-side-up. Like the usual SET game, the aim is to find and claim sets of three cards. The difference is in how sets are defined. A “set” of factorization cards is any set of three cards that either

  1. Share no prime factors in common (that is, any given prime occurs on at most one of the cards), or
  2. Share all their prime factors in common (each prime that appears on any of the cards must appear on all three).
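In code, the definition of a valid set might look something like this (a sketch of my own; the helper names are made up, and trial division is fine for the small numbers on the cards):

```python
def prime_factors(n):
    """Return the set of distinct prime factors of n."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_valid_set(a, b, c):
    """Valid if the cards share no primes at all, or share all their primes."""
    fa, fb, fc = prime_factors(a), prime_factors(b), prime_factors(c)
    pairwise_disjoint = not (fa & fb or fa & fc or fb & fc)
    all_identical = fa == fb == fc
    return pairwise_disjoint or all_identical
```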

Here are a few examples of valid sets:

And here are a few invalid sets:

In order to claim a set you have to state the number on each card and explain why they form a set. If you are correct, remove the cards and deal three new cards. If you are incorrect, keep looking!

Alex and his students found that, just as with the classic SET game, it is possible to have a layout of twelve cards containing no set. For example, here’s the layout they found:

Just to double-check, I confirmed with a computer program that the above layout indeed contains no valid sets. As with the usual SET, if you find yourself in a situation where everyone agrees there are no sets, you can just deal out three more cards.

The natural follow-up question is: what’s the largest possible layout with no sets? So far, this is an open question!


  1. 😉

Posted in arithmetic, counting, games, pattern, pictures, primes, teaching | Leave a comment

The MacLaurin series for sin(x)

In my previous post I said “recall the MacLaurin series for \sin x:”

\displaystyle \sin x = x  - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

Since someone asked in a comment, I thought it was worth mentioning where this comes from. It would typically be covered in a second-semester calculus class, but it’s possible to understand the idea with only a very basic knowledge of derivatives.

First, recall the derivatives \sin'(x) = \cos(x) and \cos'(x) = -\sin(x). Continuing, this means that the second derivative of \sin(x) is -\sin(x), the third derivative is -\cos(x), and the derivative of that is \sin(x) again. So the derivatives of \sin(x) repeat in a cycle of length 4.

Now, suppose that an infinite series representation for \sin(x) exists (it’s not at all clear, a priori, that it should, but we’ll come back to that). That is, something of the form

\displaystyle \sin(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + \dots

What could this possibly look like? We can use what we know about \sin(x) and its derivatives to figure out that there is only one possible infinite series that could work.

First of all, we know that \sin(0) = 0. When we plug x=0 into the above infinite series, every term with x in it becomes zero, leaving only a_0: so a_0 must be 0.

Now if we take the first derivative of the supposed infinite series for \sin(x), we get

\displaystyle a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \dots

We know the derivative of \sin(x) is \cos(x), and \cos(0) = 1: hence, using similar reasoning as before, we must have a_1 = 1. So far, we have

\displaystyle \sin(x) = x + a_2x^2 + a_3x^3 + \dots

Now, the second derivative of \sin(x) is -\sin(x). If we take the second derivative of this supposed series for \sin(x), we get

\displaystyle 2a_2 + (3 \cdot 2)a_3 x + (4 \cdot 3)a_4 x^2 + \dots

Again, since this should be -\sin(x), if we substitute x = 0 we ought to get zero, so a_2 must be zero.

Taking the derivative a third time yields

\displaystyle (3 \cdot 2) a_3 + (4 \cdot 3 \cdot 2)a_4 x + (5 \cdot 4 \cdot 3) a_5 x^2 + \dots

and this is supposed to be -\cos(x), so substituting x = 0 ought to give us -1: in order for that to happen we need (3 \cdot 2)a_3 = -1, and hence a_3 = -1/6.

To sum up, so far we have discovered that

\displaystyle \sin(x) = x - \frac{x^3}{6} + a_4x^4 + a_5x^5 + \dots

Do you see the pattern? When we take the nth derivative, the constant term is going to end up being n! \cdot a_n (because it started out as a_n x^n and then went through n successive derivative operations before the x term disappeared: a_n x^n \to n a_n x^{n-1} \to (n \cdot (n-1)) a_n x^{n-2} \to \dots \to n! \cdot a_n). If n is even, the nth derivative will be \pm \sin(x), and so the constant term should be zero; hence all the even coefficients will be zero. If n is odd, the nth derivative will be \pm \cos(x), and so the constant term should be \pm 1: hence n! \cdot a_n = \pm 1, so a_n = \pm 1/n!, with the signs alternating back and forth. And this produces exactly what I claimed to be the expansion for \sin x:

\displaystyle \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

Using some other techniques from calculus, we can prove that this infinite series does in fact converge to \sin x, so even though we started with the potentially bogus assumption that such a series exists, once we have found it we can prove that it is in fact a valid representation of \sin x. It turns out that this same process can be performed to turn almost any function into an infinite series, which is called the Taylor series for the function (a MacLaurin series is a special case of a Taylor series). For example, you might like to try figuring out the Taylor series for \cos x, or for e^x (using the fact that e^x is its own derivative).
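If you want some quick evidence without the calculus, you can sum the first few terms of the series and compare against a library implementation of sine. This is just a sanity check of my own, not part of the argument:

```python
import math

def sin_series(x, terms=10):
    """Partial sum of the MacLaurin series x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Even ten terms agree with math.sin to a dozen decimal places for
# moderate x: the first omitted term, x^21/21!, is astronomically tiny.
for x in (0.5, 1.0, 2.0):
    assert abs(sin_series(x) - math.sin(x)) < 1e-12
```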

Posted in calculus, infinity, iteration | 4 Comments

The Basel problem

I wanted to follow up on something I mentioned in my previous post: I claimed that

\displaystyle \zeta(2) = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \dots = \frac{\pi^2}{6}.

At the time I didn’t know how to prove this, but I did some quick research and today I’m going to explain it! It turns out that determining the value of this infinite sum was a famous open question from the mid-1600s until it was solved by Leonhard Euler in 1734. It is now known as the Basel problem (it’s not clear to me whether it was called that when Euler solved it). Since then, there have been many different proofs using all sorts of techniques, but I think Euler’s original proof is still the easiest to follow (though it turns out to implicitly rely on some not-so-obvious assumptions, so a completely formal proof is still quite tricky). I learned about this proof from some slides by Brendan Sullivan and an accompanying document.

First, recall the MacLaurin series for \sin x:

\displaystyle \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots

This infinite sum continues forever with successive odd powers of x, alternating between positive and negative. (If you’ve never seen this before, you can take my word for it I suppose; if anyone asks in a comment I would be happy to write another post explaining where this comes from.)

If we substitute \pi x for x we get

\displaystyle \sin(\pi x) = \pi x - \frac{(\pi x)^3}{3!} + \frac{(\pi x)^5}{5!} - \frac{(\pi x)^7}{7!} + \dots

Note that the coefficient of x^3 is -\pi^3 / 3! = -\pi^3/6. Remember that—it will return later!

Now, recall that for finite polynomials, the Fundamental Theorem of Algebra tells us that we can always factor them into a product of linear factors, one for each root (technically, this is only true if we allow for complex roots, though we won’t need that fact here). For example, consider the polynomial

\displaystyle 2x^3 - 3x^2 - 11x + 6.

It turns out that this has zeros at x = 3, -2, and 1/2, as you can verify by plugging in those values for x. By the Fundamental Theorem, this means it must be possible to factor this polynomial as

\displaystyle 2(x-3)(x+2)(x-1/2).

Note how each factor corresponds to one of the roots: when x = 3, the factor (x-3) is zero, making the whole product zero; when x = -2, the factor (x+2) becomes zero, and so on. We also had to put in a constant multiple of 2, to make sure the coefficient of x^3 is correct.

So, we can always factorize finite polynomials in this way. Can we do something similar for infinite polynomials, like the MacLaurin series for \sin(\pi x)? Euler guessed so. It turns out the answer is “yes, under certain conditions”, but this is not at all obvious. This is known as the Weierstrass factorization theorem, but I won’t get into the details. You can just take it on faith that it works in this case, so we can “factorize” the MacLaurin series for \sin(\pi x), getting one linear factor for each root, that is, for each integer value of x:

\displaystyle \sin(\pi x) = \pi x (1 - x)(1 + x)\left(1 - \frac{x}{2} \right) \left(1 + \frac{x}{2} \right) \left(1 - \frac{x}{3}\right) \left(1 + \frac{x}{3}\right) \dots

For example, x = 3 makes the (1 - x/3) term zero, and in general x = n will make the (1 - x/n) term zero. Note how we also included a factor of x, corresponding to the root at x = 0. We also have to include a constant factor of \pi: this means that the coefficient of x^1 in the resulting sum (obtained by multiplying the leading \pi x by all the copies of 1) will be \pi, as it should be.

Now, since (a-b)(a+b) = a^2 - b^2 we can simplify this as

\displaystyle \sin(\pi x) = \pi x (1 - x^2) \left(1 - \frac{x^2}{4} \right) \left( 1 - \frac{x^2}{9} \right) \dots

Let’s think about what the coefficient of x^3 will be once this infinite product is completely distributed out and like degrees of x are collected. The only way to get an x^3 term is by multiplying the initial \pi x by a single term of the form -x^2/n^2, and then a whole bunch of 1’s. There is one way to do this for each possible n \geq 1. All told, then, we are going to have

\displaystyle \sin(\pi x) = \pi x - \pi x^3 \left(1 + \frac{1}{4} + \frac{1}{9} + \dots \right) + \dots

And now we’re almost done: recall that previously, by considering the MacLaurin series, we concluded that the coefficient of x^3 in \sin(\pi x) is -\pi^3 / 6. But looking at it a different way, we have now concluded that the coefficient is -\pi(1 + 1/4 + 1/9 + \dots). Setting these equal to each other, and dividing both sides by -\pi, we conclude that

\displaystyle \zeta(2) = 1 + \frac 1 4 + \frac 1 9 + \dots = -\frac{\pi^3}{6} \cdot \frac{1}{-\pi} = \frac{\pi^2}{6}.

Magic!
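If you want to see the convergence for yourself, here is a quick numerical check (mine, certainly not Euler's):

```python
import math

# Partial sums of 1/n^2 creep up toward pi^2/6. The tail beyond N is
# roughly 1/N, so 100,000 terms lands within about 1e-5 of the limit.
partial = sum(1 / n ** 2 for n in range(1, 100_001))
assert abs(partial - math.pi ** 2 / 6) < 1e-4
```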

Posted in infinity, number theory | 8 Comments

The Riemann zeta function

Recall from my previous post that given a function f(n), we define \zeta_f, the Dirichlet generating function of f, by

\displaystyle \zeta_f(s) = \sum_{n \geq 1} \frac{f(n)}{n^s}.

We also proved that \zeta_f \zeta_g = \zeta_{f \ast g}: the product of Dirichlet generating functions is the Dirichlet generating function of the Dirichlet convolution. Now, consider taking f(n) = 1 in the above definition. We get

\displaystyle \zeta_{\mathbf{1}}(s) = \sum_{n \geq 1} \frac{1}{n^s}

which is also often written as just plain \zeta(s). This function is quite famous: it is the Riemann zeta function. The reason it is so famous is that it is the subject of a famous unproved conjecture, the Riemann hypothesis, which in turn is famous because it has been both difficult to prove—many mathematicians have been attacking it for a long time—and deeply related to many other areas of mathematics. In particular it is deeply related to prime numbers. If you want to understand the Riemann hypothesis better, I highly recommend reading the (truly wonderful) Secrets of Creation Trilogy, especially the first two volumes, which explain it from first principles. (I have previously written about the Secrets of Creation trilogy on this blog: a review of Volume 1 is here, and here is my review of Volume 2). In this post I just want to help you understand a few cool things about the zeta function.

Remember that \mu \ast \mathbf{1} = \varepsilon, so we must have \zeta_\mu \zeta_{\mathbf{1}} = \zeta_\varepsilon. Also recall that \varepsilon(n) = 1 when n = 1 but it equals 0 otherwise, so in fact

\displaystyle \zeta_\varepsilon(s) = \sum_{n \geq 1} \frac{\varepsilon(n)}{n^s} = 1

since the only nonzero term is \varepsilon(1)/1^s = 1/1 = 1. All together, then, we have

\displaystyle \zeta_\mu(s) \zeta(s) = 1

and hence

\displaystyle \frac{1}{\zeta(s)} = \zeta_\mu(s) = \sum_{n \geq 1} \frac{\mu(n)}{n^s}

For example, consider \zeta(2):

\displaystyle \zeta(2) = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \dots

This converges to something, although a priori it is not obvious what. By writing a simple computer program, or by asking Wolfram Alpha, we can add up, say, 1000 terms and find that it is approximately 1.6439\dots. Apparently, the reciprocal of this number is given by

\displaystyle \frac{1}{\zeta(2)} = \frac{1}{1^2} + \frac{-1}{2^2} + \frac{-1}{3^2} + \frac{0}{4^2} + \dots

where each numerator is \mu(n). Again, we can use a computer to check that this sum is approximately 0.6079\dots: and sure enough, this is approximately 1/1.6439\dots!
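In case you want to try it yourself, here is roughly what such a simple program might look like (this is my own quick version; the `mobius` function is hand-rolled and naive, which is fine for a thousand terms):

```python
def mobius(n):
    """mu(n): 0 if n has a squared prime factor, else (-1)^(#prime factors)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0       # n had a repeated prime factor
            result = -result
        d += 1
    if n > 1:                  # one leftover prime factor
        result = -result
    return result

N = 1000
zeta2 = sum(1 / n ** 2 for n in range(1, N + 1))              # about 1.6439
inv_zeta2 = sum(mobius(n) / n ** 2 for n in range(1, N + 1))  # about 0.6079
assert abs(zeta2 * inv_zeta2 - 1) < 0.01
```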

It turns out that \zeta(2) = \sum_{n \geq 1} 1/n^2 converges to exactly \pi^2/6 (!!!)—hopefully I can write another blog post explaining that (to be honest at the moment I don’t know how to prove it). We also know that

\displaystyle \zeta(4) = \sum_{n \geq 1} \frac{1}{n^4} = \frac{\pi^4}{90}

and there is actually a formula giving \zeta(2n) for any positive integer n. \zeta(3) \approx 1.2020\dots is called Apéry’s constant, since Roger Apéry proved in 1978 that it is irrational; but we don’t know of any nice formula for it.

I will leave you with a few things to prove that you may find amusing:

  • \zeta(s)^2 = \zeta_\tau(s)
  • \zeta(s)\zeta(s-1) = \zeta_\sigma(s)

Recall that \tau(n) is the number of divisors of n, and \sigma(n) is the sum of divisors of n; to prove these you will want to reference some facts we proved about \tau and \sigma in terms of Dirichlet convolution.

Next time, we’ll see another way to relate the zeta function to prime numbers.

Posted in number theory | 1 Comment

Dirichlet generating functions

Suppose f(n) is a function defined for positive integers n \geq 1. Then we can define an infinite series \zeta_f(s) as follows:

\displaystyle \begin{array}{rcl}\zeta_f(s) &=& \displaystyle \frac{f(1)}{1^s} + \frac{f(2)}{2^s} + \frac{f(3)}{3^s} + \dots + \frac{f(n)}{n^s} + \dots \\[1em] &=& \displaystyle \sum_{n \geq 1} \frac{f(n)}{n^s} \end{array}

(This might look a bit strange, but bear with me!) For example, suppose f(n) = n+2 for all n. Then

\displaystyle \zeta_f(3) = \frac{1+2}{1^3} + \frac{2+2}{2^3} + \frac{3+2}{3^3} + \frac{4+2}{4^3} + \dots \approx 4.0490\dots

(Note that in this case, with s = 3, the infinite sum converges; but often we just use s as a formal placeholder and don’t particularly care about convergence. That is, \zeta_f is perfectly well defined as an infinite series without worrying about the value of s.)
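If you want to double-check that 4.0490\dots value yourself, a few lines of Python will do it (a throwaway script of mine; the tail of the sum past N behaves like 1/N, so a hundred thousand terms is plenty for four decimal places):

```python
f = lambda n: n + 2

N = 100_000
zeta_f_3 = sum(f(n) / n ** 3 for n in range(1, N + 1))
assert abs(zeta_f_3 - 4.0490) < 1e-3
```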

In fact \zeta_f is called the Dirichlet generating function for f. Of course, this name should remind you of Dirichlet convolution—and it’s no coincidence; Dirichlet convolution and Dirichlet generating functions are closely related. Let’s see how.

Suppose we have two functions f(n) and g(n), and consider multiplying their Dirichlet generating functions (with plain old, regular multiplication):

\displaystyle \zeta_f(s) \zeta_g(s) = \left( \sum_{n \geq 1} \frac{f(n)}{n^s} \right) \left( \sum_{n \geq 1} \frac{g(n)}{n^s} \right)

We have a big (well, infinite) sum of things multiplied by another big sum. By distributivity, we’re going to get a big sum as a result, where each term of the resulting sum is the product of one thing chosen from the left-hand sum and one thing chosen from the right-hand sum. (This is just like FOIL, only way cooler and more infinite-r.) That is, the resulting sum is going to look something like this:

\displaystyle \zeta_f(s) \zeta_g(s) = \dots + \frac{f(a)}{a^s} \frac{g(b)}{b^s} + \dots = \sum_{a,b \geq 1} \frac{f(a)}{a^s} \frac{g(b)}{b^s}

with one term for every possible choice of a and b. The a^s b^s in the denominator is of course equal to (ab)^s, and we can collect up all the fractions with the same denominator: for each n, we will get a denominator of n^s in each case where we pick some a and b whose product is n. So we can reorganize the terms in the above sum, grouping together all the terms where the product of a and b is the same, and rewrite it like this (is this starting to look familiar…?):

\displaystyle \zeta_f(s) \zeta_g(s) = \sum_{n \geq 1} \sum_{ab = n} \frac{f(a) g(b)}{n^s}

Now we can factor out the 1/n^s (since it doesn’t depend on a or b), like so:

\displaystyle \zeta_f(s) \zeta_g(s) = \sum_{n \geq 1} \frac{1}{n^s} \sum_{ab = n} f(a) g(b)

But the inner sum is now just the definition of the Dirichlet convolution of f and g! So the whole thing becomes

\displaystyle \zeta_f(s) \zeta_g(s) = \sum_{n \geq 1} \frac{1}{n^s} \sum_{ab = n} f(a) g(b) = \sum_{n \geq 1} \frac{(f \ast g)(n)}{n^s}

And finally, we note that the thing on the right is itself the Dirichlet generating function for f \ast g. So in summary, we have shown that

\displaystyle \zeta_f(s) \zeta_g(s) = \zeta_{f \ast g}(s)

Neato! So in some sense, Dirichlet generating functions “turn Dirichlet convolution into regular multiplication”.
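If you like, you can check the key regrouping step mechanically: the coefficient of 1/n^s in the product really is \sum_{ab = n} f(a) g(b), the Dirichlet convolution. Here is a small sketch of my own, with two arbitrarily chosen example functions:

```python
def dirichlet_convolution(f, g, n):
    """(f * g)(n) = sum of f(a) g(n // a) over all divisors a of n."""
    return sum(f(a) * g(n // a) for a in range(1, n + 1) if n % a == 0)

# Two arbitrary example functions.
f = lambda n: n
g = lambda n: n * n + 1

for n in range(1, 30):
    # Gather every cross term f(a) g(b) whose denominator a^s b^s
    # collapses to n^s, i.e. every pair with a * b == n...
    coeff = sum(f(a) * g(b)
                for a in range(1, n + 1)
                for b in range(1, n + 1)
                if a * b == n)
    # ...and check that it matches the Dirichlet convolution.
    assert coeff == dirichlet_convolution(f, g, n)
```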

Posted in number theory | 3 Comments

More fun with Dirichlet convolution

I’m back after a bit of a hiatus for the holidays! Last time we saw how the principle of Möbius inversion arises from considering the \mu function from the point of view of Dirichlet convolution. Put simply, the Möbius function \mu is the inverse of \mathbf{1} with respect to Dirichlet convolution. As an example, we noted that \mathbf{1} \ast \varphi = \mathit{id} (that is, the sum of \varphi(d) over all divisors d of n is equal to n), and hence by Möbius inversion \varphi = \mu \ast \mathit{id}.

This is far from earth-shattering, but it’s fun to see how a number-theoretic function like \varphi arises in a simple formula involving Dirichlet convolution, and how Möbius inversion allows us to quickly derive a related but non-obvious fact. Let’s do a few more!

First, let’s consider \mathbf{1} \ast \mathbf{1}. We have

\displaystyle (\mathbf{1} \ast \mathbf{1})(n) = \sum_{d \mid n} \mathbf{1}(d) \mathbf{1}(n/d) = \sum_{d \mid n} 1,

which is just counting the number of divisors of n. This function is often denoted \tau. For example, 12 has six divisors (namely, 1, 2, 3, 4, 6, and 12), so \tau(12) = 6. Likewise \tau(7) = 2 (1 and 7), \tau(4) = 3 (1, 2, and 4), and so on.

The above equation can be restated as \mathbf{1} \ast \mathbf{1} = \tau, so by Möbius inversion, we immediately conclude that \mathbf{1} = \mu \ast \tau, that is,

\displaystyle 1 = \sum_{ab = n} \mu(a) \tau(b).

For example, we can check this for n = 12:

\displaystyle \begin{array}{l} \mu(1) \tau(12) + \mu(2) \tau(6) + \mu(3) \tau(4)\\[1em] + \mu(4) \tau(3) + \mu(6) \tau(2) + \mu(12) \tau(1) \\[1em] = 6 - 4 - 3 + 0 + 2 + 0 \\[1em] = 1 \end{array}

indeed.

As another example, \mathbf{1} \ast \mathit{id} gives us \sum_{d \mid n} d, the sum of the divisors of n. This is usually denoted \sigma. For example, \sigma(12) = 1+2+3+4+6+12 = 28, \sigma(6) = 1+2+3+6 = 12, \sigma(7) = 1+7 = 8, and so on. Often we also define s(n) = \sigma(n) - n to be the sum of all the divisors of n other than n itself, i.e. the proper divisors. Perfect numbers like 6 and 28 are those for which n = s(n).

Again, since \mathbf{1} \ast \mathit{id} = \sigma, by Möbius inversion we immediately conclude \mathit{id} = \mu \ast \sigma; for example, again when n = 12, we have

\displaystyle \begin{array}{l} \mu(1) \sigma(12) + \mu(2) \sigma(6) + \mu(3) \sigma(4)\\[1em] + \mu(4) \sigma(3) + \mu(6) \sigma(2) + \mu(12) \sigma(1) \\[1em] = 28 - 12 - 7 + 0 + 3 + 0 \\[1em] = 12. \end{array}
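Both of these identities are easy to spot-check for lots of values of n at once. Here is a quick script of mine that does so, computing \mu, \tau, and \sigma completely naively:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    """mu(n): 0 if n has a squared prime factor, else (-1)^(#prime factors)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

tau = lambda n: len(divisors(n))      # number of divisors
sigma = lambda n: sum(divisors(n))    # sum of divisors

for n in range(1, 100):
    # 1 = (mu * tau)(n)   and   n = (mu * sigma)(n)
    assert sum(mobius(d) * tau(n // d) for d in divisors(n)) == 1
    assert sum(mobius(d) * sigma(n // d) for d in divisors(n)) == n
```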

Posted in number theory | 1 Comment

Paper torus with Villarceau circles

I made another thing!

This is a torus, made from 24 crescent-shaped pieces of paper with slots cut into them so they interlock with each other. I followed these instructions on cutoutfoldup.com. There is also a template with some ideas for nice variations here.

The idea of this model is to highlight Villarceau circles. Everyone knows how to find circular cross-sections of a torus, right? You can cut it horizontally (like cutting a bagel in half) in which case you get two concentric circles:

Or you can cut it vertically (like cutting a donut in half, if you wanted to share it with someone else), in which case you get two separate circles:

But it turns out there is yet another way to cut a torus to get circular cross-sections, by cutting it with a diagonal plane:

Note that the plane is tangent to the torus at the two places where the circles intersect. These circles are called Villarceau circles, named after French mathematician Yvon Villarceau, who published a paper about them in 1848. The paper model highlights these circles: each circle is composed of edges from two differently-colored crescents; the points of the crescents come together at the tangent points where two Villarceau circles intersect.

If you want to try making this model yourself, be warned: it is quite difficult! The five-star difficulty rating is no joke. The time from when I first printed out the templates for cutting until the time I finally finished it was several months. Partly that was because I only worked off and on and often went weeks without touching it. But partly it was also because I gave up in frustration a few times. The first time I attempted to assemble all the pieces it ended up completely falling apart. But the second time went much better (using what I had learned from my first failed attempt).

Posted in geometry | Leave a comment