Post without words #28


Book review: Opt Art

[Disclosure of Material Connection: Princeton Press kindly provided me with a free review copy of this book. I was not required to write a positive review. The opinions expressed are my own.]

Opt Art: From Mathematical Optimization to Visual Design
Robert Bosch
Princeton University Press, 2019

I recently finished reading Robert Bosch’s new book, Opt Art. It was a quick read, both because it’s not actually that long, and also because it was fascinating and beautiful and I didn’t want to put it down!

The central theme of the book is using linear optimization (aka “linear programming”) to design and generate art. The resulting art can be simply beautiful for its own sake, or can also give us insight into underlying mathematics.

Linear optimization is something I knew about in a general sense, but after reading Bosch’s book I understand it much better—both the details of how the simplex algorithm works, and especially the various ways linear optimization can be applied. I think Bosch does a fantastic job explaining things in a way that gives real insight but doesn’t get bogged down in too much detail. (In a few places I personally wish there had been a few more details—but it’s quite possible that adding more detail would have made the book better for me but worse for a bunch of other people, i.e. it would not be a global optimum!)

A Celtic knot pattern created out of a single continuous TSP tour

Another thing the book explains really well is how the Travelling Salesman Problem (TSP) can be solved using linear optimization. I had no idea there was a connection between the two topics. I’m sure the connection is explained in great detail in William Cook’s TSP book, which I read seven years ago, but for some reason it didn’t really click at the time. But from reading Bosch’s book I feel like I now know enough to put together the details and implement a basic TSP solver myself if I wanted to (maybe I will)!

I’m definitely inspired to use some of Bosch’s techniques to make my own artwork—if I do, I will obviously post about it here!


More on Human Randomness

In a post a few months ago I asked whether there is a way for a human to reliably generate truly random numbers. I got a lot of great responses and I think it’s worth summarizing them here!

Randomness in poker strategies

Robert Anderson noted that poker players sometimes use the second hand of a watch to introduce some randomness into their strategy. I assumed this would be something like getting a random bit based on whether the number of seconds is even or odd, but Pete McAllister chimed in to say that it is usually something more like dividing a minute into chunks, and making a decision based on which chunk the current second is in. For example, if you want to make one choice 20 percent of the time and another choice 80 percent of the time, you could make the first choice if the second hand is in the first 12 seconds of the minute (12/60 = 20%), and the other choice otherwise.
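To make the chunking idea concrete, here’s a minimal Python sketch of it (my own illustration; the “raise”/“call” options are just hypothetical poker decisions):

    import datetime

    def second_hand_choice(options):
        """Make a weighted choice by dividing the minute into chunks
        proportional to each option's probability."""
        second = datetime.datetime.now().second  # 0-59
        boundary = 0.0
        for option, prob in options:
            boundary += prob * 60
            if second < boundary:
                return option
        return options[-1][0]  # fallback in case of floating-point rounding

    # make one choice 20% of the time and the other 80% of the time:
    action = second_hand_choice([("raise", 0.2), ("call", 0.8)])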

In game theory this is called a “mixed” strategy, and this kind of strategy can arise naturally as the Nash equilibrium of certain kinds of games, so it’s not surprising to me that it would show up in high-level poker. I found conflicting advice about this online: some people claimed that you should not use randomness when playing poker, but I did find one site with pretty high-level poker advice that talked about implementing exactly this kind of mixed strategy using the second hand of a watch.

In any case, if you have a phone or a watch with you, this does suggest some strategies for generating random numbers: for example, look at the last digit of the seconds to get a random number from 0–9, or at whether it is even or odd to get a random bit. Or you could just take the number of seconds directly as a random number from 0–59. Of course this only works once, and then you have to wait a while before you can do it again. Also, it turns out that my phone doesn’t show seconds by default. Taking the ones digit of the minutes as a random number from 0–9 should work too, but the tens digit of the minutes seems like it’s “not random enough”, in the sense that it might be correlated with whatever it is that I’m doing.
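In code these extractions are all one-liners; a quick sketch (using a system clock, which is admittedly an even bigger “aid” than a watch):

    import datetime

    now = datetime.datetime.now()
    d10 = now.second % 10    # last digit of the seconds: a number from 0-9
    bit = now.second % 2     # even or odd seconds: a single random bit
    d60 = now.second         # the seconds directly: a number from 0-59
    m10 = now.minute % 10    # ones digit of the minutes: also 0-9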

Of course, a phone or watch counts as an “aid”, but most people tend to carry around something like this all the time, so it’s relatively practical. On the other hand, if you’re going to use a phone anyway, you should just use an app for generating random numbers.

Bodies and clothing

  • Naren Sundar commented that hair is pretty random, but admitted that it would be hard to measure.

  • Frederik suggested spitting, or throwing your shoe in the air and seeing which way the toe points when it lands. I like the shoe idea, but on the other hand it’s somewhat obtrusive to take your shoe off and throw it in the air every time you want a random bit! And what if you’re not wearing shoes? I’m also afraid I might throw my shoe in the same way every time; I’m not sure how random it would be in practice.

Minds and memorization

Kaligule suggested taking whatever song is currently running through your head, stopping it at a random point, and getting a random bit by seeing whether the number of consonants in the next word is even or odd.

This is a cool idea, and is the only proposal that really meets my criterion of generating randomness “without aids”. I think for some people it could work quite well. “Stopping at a random point” is somewhat problematic—you might be biased to stop at certain points more than others—but it’s pretty hard to know how many consonants are in a word before you count, so I’m not sure this would really bias the results that much.
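The bit-extraction step is easy enough to code up; here is a little Python sketch (where counting “y” as a consonant is an arbitrary choice):

    def consonant_parity_bit(word):
        """Return the parity of the number of consonants in word."""
        consonants = set("bcdfghjklmnpqrstvwxyz")
        return sum(1 for c in word.lower() if c in consonants) % 2

    print(consonant_parity_bit("wonderful"))  # 6 consonants -> 0
    print(consonant_parity_bit("song"))       # 3 consonants -> 1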

Unfortunately, however, it won’t work for me because, although I do always have some kind of music running through my head, it often has no lyrics! Kaligule suggested using whether the melody goes up or down, but this is obvious (unlike the number of consonants in a word) and too easy to “cheat”, i.e. to pick a stopping point that gives me the bit I “want”.

This suggested another idea to me, however: just pre-generate some random data and put some up-front effort into memorizing it. Whenever you need some randomness, use the next part of the random sequence you memorized. When you use it up, generate another and memorize that instead. This leaves a number of questions:

  • How do you reliably keep track of where you are in the sequence? I don’t actually have a good answer to this. I think in practice I would get confused and forget whether I had already used a certain part or not. Though maybe this doesn’t really matter that much.

  • What format would be most effective, and how do you go about memorizing it? Some ideas:

    • My first idea is to generate a sequence of random bits, and then write a story where sequential words have even or odd numbers of letters corresponding to the bits in your sequence. Unfortunately, this seems like a relatively inefficient way to memorize data, but writing a story that corresponds to a given sequence of bits does sound like a fun exercise in constrained writing.

    • Alternatively, one could simply generate a random sequence of digits (or hexadecimal digits) and memorize them using whatever sort of memorization technique you like (e.g. a memory palace). This is less fun but probably more effective. Memorizing a story sounds like it would be easier, but I don’t think it is, especially since you would have to memorize it word-for-word, and you only get one bit per word memorized, as opposed to four bits per hexadecimal digit. (See the sketch just after this list.)
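Here’s a sketch of how such a digit sheet might be generated in Python. The secrets module provides cryptographic-quality randomness; the sheet size and grouping are arbitrary choices:

    import secrets

    def hex_sheet(n_digits=64, group=4):
        """Generate n_digits random hex digits (4 bits each), grouped
        to make memorization easier."""
        digits = "".join(secrets.choice("0123456789abcdef")
                         for _ in range(n_digits))
        return " ".join(digits[i:i + group]
                        for i in range(0, n_digits, group))

    print(hex_sheet())  # e.g. "7f3a 09c2 ..." (64 digits = 256 random bits)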

I have generated some random hexadecimal digits but haven’t gotten around to trying to memorize them yet. If I do I will definitely report on the experience. In the meantime, I’m also open to more ideas!


Book review: The Mathematics of Various Entertaining Subjects, Volume 3

I have a bunch of books in the queue to review—I hope to begin writing these more regularly again!

[Disclosure of Material Connection: Princeton Press kindly provided me with a free review copy of this book. I was not required to write a positive review. The opinions expressed are my own.]

The Mathematics of Various Entertaining Subjects, Volume 3: The Magic of Mathematics
Jennifer Beineke and Jason Rosenhouse, eds.
Princeton University Press, 2019

The MOVES conference takes place every two years in New York. MOVES is an acronym for “The Mathematics of Various Entertaining Subjects”, and the conference is a celebration of math that isn’t necessarily considered an Important Research Topic, and doesn’t necessarily have Important Applications—but simply math that is fun for its own sake. (Although in hindsight, math that starts out as Just For Fun often seems to end up with important applications too—for example, think of graph theory or probability theory.) The most recent conference took place just a few months ago, in August 2019; the next one will be in August 2021 (you can already register if you like to plan that far ahead!).

This book is basically the conference proceedings from 2017—a collection of papers that were presented at the conference, published all together in book form. So it’s important to state at the outset that although the topics are entertaining, this really is a collection of research papers. Overall this is definitely not a book written for a general audience! I had to work hard to understand some of the papers, and some of them lost me completely.

However, there’s some great stuff in here that rewards patient study. Some of my favorites that are more generally accessible include:

  • A chapter on “Wiggly Games and Burnside’s Lemma” that does a great job explaining Burnside’s Lemma—a classic result about counting things with symmetry, at the intersection of combinatorics and group theory—via applications to counting the number of possible tiles in several different games.

  • “Solving Puzzles Backwards” has some nice puzzles and a discussion of elegant ways to approach their solutions.

  • “Should we Call Them Flexa-Bands?” has some interesting reflections on the topology of different types of flexagons.

Some other things I particularly enjoyed but which are not so accessible without some background include a chapter on the computational complexity of losing at checkers, a chapter on “Kings, sages, hats, and codes” that I wish I understood better, and a chapter on the combinatorics of Legos.

There’s so much other stuff in there on such wildly varying topics that it’s impossible to summarize. In any case, definitely recommended if you are a professional mathematician looking for some fun yet still technically meaty reading; definitely not recommended if you’re looking for a casual read of a popular math book. And if you’re somewhere in between—that is, you’re not a professional mathematician but you aspire to read and understand things on that level—this could honestly be a great place to start!


A combinatorial proof: PIE a la mode!

Continuing from my last post in this series, we’re trying to show that S = n!, where S is defined as

\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{i} (k+(n-i))^n

which is what we get when we start with a sequence of n+1 consecutive nth powers and repeatedly take successive differences.
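(As a quick sanity check before we prove it, here’s a Python sketch verifying the claim numerically for small values; note that the answer is n! no matter what k is.)

    from math import comb, factorial

    def S(n, k):
        return sum((-1)**i * comb(n, i) * (k + (n - i))**n
                   for i in range(n + 1))

    for n in range(1, 8):
        for k in range(6):
            assert S(n, k) == factorial(n)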

Recall that we defined M_a as the set of all functions from a set of size n (visualized as n blue dots) to a set of size k + n (visualized as k yellow dots on top of n blue dots) such that the blue dot numbered a is missing. I also explained in my previous post that the functions with at least one blue dot missing from the output are exactly the “bad” functions, that is, the functions which do not correspond to a one-to-one matching between the blue dots on the left and the blue dots on the right.

As an example, the function pictured above is an element of M_1, as well as an element of M_3. (That means it’s also an element of the intersection M_1 \cap M_3—this will be important later!)

Let F be the set of all functions from n to k+n, and let P be the set of “good” functions, that is, the subset of F consisting of matchings (aka Permutations—I couldn’t use M for Matchings because M is already taken!) between the blue sets. We already know that the number of matchings between two sets of size n, that is, |P|, is equal to n!. However, let’s see if we can count them a different way.

Every function is either “good” or “bad”, so we can describe the set of good functions as what’s left over when we remove all the bad ones:

\displaystyle P = F - \bigcup_{a=1}^n M_a

(Notice how we can’t just write P = F - M_1 - M_2 - \dots - M_n, because the M_a sets overlap! But if we union all of them we’ll get each “bad” function once.)

In other words, we want to count the functions that aren’t in any of the M_a. But this is exactly what the Principle of Inclusion-Exclusion (PIE) is for! PIE tells us that the size of this set is

\displaystyle |P| = |F| - \left|\bigcup_{a=1}^n M_a \right| = \sum_{T \subseteq \{1 \dots n\}} (-1)^{|T|}\left| \bigcap_{a \in T} M_a \right|,

that is, we take all possible intersections of some of the M_a, and either add or subtract the size of each intersection depending on whether the number of sets being intersected is even or odd.
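For small n and k we can watch PIE do its job by brute force. Here’s a Python sketch for n = 3 and k = 2, where there are only (k+n)^n = 125 functions to enumerate; it represents the blue dots in the codomain as the values k through k+n-1:

    from itertools import combinations, product
    from math import factorial

    n, k = 3, 2
    blue = list(range(k, k + n))               # the n blue dots in the codomain
    F = list(product(range(k + n), repeat=n))  # all functions from n to k+n
    M = {a: {f for f in F if blue[a] not in f} for a in range(n)}

    # count the "good" functions directly: those hitting every blue dot
    good = sum(1 for f in F if set(blue) <= set(f))

    # count them via PIE: an alternating sum over all subsets T
    pie = 0
    for size in range(n + 1):
        for T in combinations(range(n), size):
            inter = set(F)
            for a in T:
                inter &= M[a]
            pie += (-1) ** size * len(inter)

    assert good == pie == factorial(n)  # both equal 6 = 3!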

We’re getting close! To simplify this more we’ll need to figure out what those intersections \bigcap_{a \in T} M_a look like.

Intersections

What does M_a \cap M_b look like? The members of M_a \cap M_b are exactly those functions which are in both M_a and M_b, so M_a \cap M_b contains all the functions that are missing both a and b (and possibly other elements). Likewise, M_a \cap M_b \cap M_c contains all the functions that are missing (at least) a, b, and c; and so on.

Last time we argued that |M_a| = (k + (n-1))^n, since functions from n to k+n that are missing a can be put into a 1-1 matching with arbitrary functions from n to k + (n-1), just by deleting or inserting element a:

So what about an intersection—how big is M_a \cap M_b (assuming a \neq b)? By a similar argument, it must be (k + (n-2))^n, since we can match up each function in M_a \cap M_b with a function from n to k+(n-2): just delete or insert both elements a and b, like this:

Generalizing, if we have a subset T \subseteq \{1, \dots, n\} and intersect all the M_a for a \in T, we get the set of functions whose output is missing all the elements of T, and we can match them up with functions from n to k + (n-|T|). In formal notation,

\displaystyle \left| \bigcap_{a \in T} M_a \right| = (k + (n-|T|))^n

Substituting this into the previous expression for the number of blue matchings |P|, we get

\displaystyle |P| = \sum_{T \subseteq \{1 \dots n\}} (-1)^{|T|}(k + (n-|T|))^n

Counting subsets

Notice that the value of (-1)^{|T|}(k + (n-|T|))^n depends only on the size of the subset T and not on its specific elements. This makes sense: the number of functions missing some particular number of elements is the same no matter which specific elements we pick to be missing.

So for each particular size |T| = i, we are adding up a bunch of copies of the same value (-1)^i (k + (n-i))^n—as many copies as there are different subsets of size i. The number of subsets T of size i is \binom n i, the number of ways of choosing exactly i things out of n. Therefore, if we add things up size by size instead of subset by subset, we get

\begin{array}{rcl} |P| &=& \displaystyle \sum_{\text{all possible sizes } i} (\text{number of subsets of size } i) \cdot \left[(-1)^{i}(k + (n-i))^n \right] \\[1em] &=& \displaystyle \sum_{i=0}^n \binom n i (-1)^{i}(k + (n-i))^n\end{array}

But this is exactly the expression for S that we came up with earlier! And since we already know |P| = n! this means that S = n! too.

And that’s essentially it for the proof! I think there’s still more to say about the big picture, though. In a future post I’ll wrap things up and offer some reflections on why I find this interesting and where else it might lead.


A few words about PWW #27

The images in my last post were particular realizations of the famous Sieve of Eratosthenes. The basic idea of the sieve is to repeatedly do the following:

  • Circle the next number bigger than 1 that is not yet crossed out; call it p.
  • Cross out every pth number after p, that is, all the multiples of p. These are not prime since they are multiples of p.
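In code, the whole procedure is just a few lines; here’s a Python sketch of it:

    def sieve(limit):
        """Sieve of Eratosthenes: return a list of all primes up to limit."""
        crossed = [False] * (limit + 1)
        primes = []
        for p in range(2, limit + 1):
            if not crossed[p]:            # p was never crossed out: circle it
                primes.append(p)
                for m in range(2 * p, limit + 1, p):  # cross out multiples of p
                    crossed[m] = True
        return primes

    print(sieve(100))  # [2, 3, 5, 7, 11, ..., 89, 97]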

This is an efficient way to find all the primes up to a given limit. Note that it doesn’t require doing any division or factoring, just adding. Here’s the image of the 10 \times 10 sieve again:

Some questions for you to ponder:

  • Why can we always cross out multiples of each prime p using parallel straight lines?
  • When will the lines be vertical? When will they be diagonal?
  • Is there only one way to cross out multiples of each p with straight lines, or could we have chosen different lines?
  • Why are the primes on the top row circled, while the rest of the primes are in boxes? What’s the difference?

And just for fun, here’s the sieve diagram for n = 29, one of my favorites.


Post without words #27
