## The wizard’s rational puzzle (solutions, part 2)

At long last, here is the solution I had in mind for the Wizard’s rational puzzle. Recall that the goal is to figure out the numerator and denominator of a secret rational number, if all we are allowed to do is reciprocate, add, subtract, multiply, and compare.

• Make sure the rational is greater than $1$ (reciprocating it if necessary).
• Repeatedly subtract $1$ from it until it becomes less than $1$.
• At this point if it is equal to $0$ you can stop; otherwise, reciprocate it (making the result greater than $1$ again) and repeat the whole process.
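As a concrete sketch, here is this procedure in Python, with `Fraction` standing in for the wizard's cubes (the function name, and the idea of recording how many subtractions happen before each reciprocation, are mine):

```python
from fractions import Fraction

def reduce_to_zero(r):
    """Run the subtract-and-reciprocate process on a positive rational,
    recording how many times we subtract 1 before each reciprocation."""
    counts = []
    if r < 1:           # step 1: make sure r is greater than 1
        r = 1 / r
    while True:
        n = 0
        while r >= 1:   # step 2: subtract 1 until the result is < 1
            r -= 1
            n += 1
        counts.append(n)
        if r == 0:      # step 3: stop at 0; otherwise reciprocate and repeat
            return counts
        r = 1 / r

print(reduce_to_zero(Fraction(35, 11)))  # → [3, 5, 2]
```

Each entry of the result records how many times we subtracted $1$ before the next reciprocation; note that a rational and its reciprocal produce the same record, since step 1 flips the latter into the former.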

Given the exact sequence of operations you did to reduce the rational to zero, you can easily reverse them to reconstruct the original rational. For example, suppose you had to subtract 1 three times, then reciprocate, then subtract 1 five times, reciprocate, then subtract 1 two more times. This gives us the equation

$\displaystyle\frac{1}{\frac{1}{r - 3} - 5} - 2 = 0$

or, inverting everything to solve for $r$,

$\displaystyle r = 3 + \frac{1}{5 + \frac{1}{2}} = \frac{35}{11}$.
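Unwinding the recorded steps mechanically looks like this (a Python sketch; `reconstruct` is my name for the reversal):

```python
from fractions import Fraction

def reconstruct(counts):
    """Rebuild r from the list of subtraction counts by working from the
    innermost term of the continued fraction outward."""
    r = Fraction(counts[-1])   # the innermost (final) group of subtractions
    for n in reversed(counts[:-1]):
        r = n + 1 / r          # undo a reciprocation, then n subtractions
    return r

print(reconstruct([3, 5, 2]))  # → 35/11
```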

This clearly gives the right answer; the only question is whether the process is guaranteed to stop after a finite amount of time. But it is: every time we subtract $1$ we make the numerator smaller without changing the denominator, and reciprocating just swaps the numerator and denominator without making either one bigger. Moreover, if we have a number less than $1$ and reciprocate it, the result is definitely bigger than $1$, at which point we can subtract $1$ from it at least once; hence we cannot get stuck reciprocating repeatedly without doing any subtracting. Since we have two positive integers which either stay the same or get smaller at each step, the process must eventually stop.

There are several equivalent ways to think about what is going on here:

• One way to think about this—made clear by the expression for $r$ in the example above—is that we are building up the continued fraction expansion for $r$.

• Another way to think about it is that we are running the Euclidean Algorithm on $r$’s numerator and denominator. Of course the Euclidean Algorithm is typically used to find the greatest common divisor of two numbers, and if we imagine $r$ to be in lowest terms then of course the gcd of its numerator and denominator will be 1, so finding the gcd doesn’t tell us anything in and of itself. The point is, however, that the exact sequence of steps taken by the Euclidean Algorithm is unique for each relatively prime pair $(p,q)$, and so we can use the sequence of steps to reverse-engineer what $p$ and $q$ must have been in the first place.
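To see the correspondence concretely, the quotients produced by the Euclidean Algorithm on $(35, 11)$ are exactly the subtraction counts from the example above (a sketch, assuming the fraction is already in lowest terms; the function name is mine):

```python
def euclid_quotients(p, q):
    """Quotients produced by the Euclidean Algorithm on (p, q); each
    quotient says how many times q fits into the current p."""
    quotients = []
    while q != 0:
        quotients.append(p // q)
        p, q = q, p % q
    return quotients

print(euclid_quotients(35, 11))  # → [3, 5, 2]
```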

• Yet another way, closely related to the previous, is to imagine that we are finding our way up the Calkin-Wilf tree step by step. Recall that the Calkin-Wilf tree is the infinite binary tree of positive rationals where the root is $1$ and each node $a/b$ has $a/(a+b)$ as its left child and $(a+b)/b$ as its right child. Each rational appears at a unique location in the tree, so the sequence of upwards steps taken from $r$ to the root allows us to reconstruct the original $r$.

More specifically, if we have a number bigger than $1$, it must be of the form $(a+b)/b$ for some $a$ and $b$, i.e. it is a right child in the Calkin-Wilf tree. Subtracting one from $(a+b)/b$ yields $a/b$, so it corresponds to taking one step up and to the left in the Calkin-Wilf tree. When we reach a number less than $1$, it means that node is a left child, so we cannot take another step up and to the left. Reciprocating corresponds to mirroring the entire tree left-right, and then we can continue taking steps up and to the left.
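That correspondence can be sketched in Python: walk up from $r$ to the root, recording whether each node is a left or right child (the function name and the `'L'`/`'R'` encoding are mine):

```python
from fractions import Fraction

def cw_path(r):
    """Path from r up to the root 1 of the Calkin-Wilf tree: 'R' means r
    was a right child (a+b)/b, 'L' a left child a/(a+b)."""
    path = []
    a, b = r.numerator, r.denominator
    while (a, b) != (1, 1):
        if a > b:            # right child: stepping up subtracts 1
            a -= b
            path.append('R')
        else:                # left child: the mirror image, via reciprocation
            b -= a
            path.append('L')
    return ''.join(path)

print(cw_path(Fraction(35, 11)))  # → RRRLLLLLR
```

Note how the path `RRRLLLLLR` mirrors the subtraction counts $[3, 5, 2]$: the very last subtraction lands on the root $1$ rather than on a ninth tree edge.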

So how long does this take? Moving up one level in the Calkin-Wilf tree takes at worst $4$ operations: one subtraction, two comparisons, and a reciprocate. (We have to do two comparisons because when we find out that a number is less than one, we have to also check if it is greater than 0 to see whether we should reciprocate or stop.) On the other hand, the best case takes only two operations (a subtraction and a comparison, if the result is still greater than one). The worst case for a given depth in the C-W tree would be if we start with the ratio of two consecutive Fibonacci numbers, since we would have to reciprocate after doing only one subtraction at every single step (try it!).
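To make these counts concrete, here is a rough counting sketch in Python (the exact accounting of comparisons is my reading of the argument above, and it assumes $r > 1$ to start):

```python
from fractions import Fraction

def op_count(r):
    """Rough operation count for the basic algorithm: each subtraction
    also costs one comparison against 1; each time we drop below 1 we
    pay one comparison against 0, plus a reciprocation if not done."""
    ops = 0
    while True:
        while r >= 1:
            r -= 1
            ops += 2   # one subtraction, one comparison with 1
        ops += 1       # comparison with 0
        if r == 0:
            return ops
        r = 1 / r
        ops += 1       # one reciprocation

print(op_count(Fraction(13, 8)))   # Fibonacci ratio: near 4 ops per level
print(op_count(Fraction(100, 1)))  # one huge quotient: mostly subtractions
```

For the Fibonacci ratio $13/8$ this counts $21$ operations over five levels, close to the $4$-per-level worst case; for $100/1$ it counts $201$, almost all of them repeated subtractions of $1$.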

There’s also another fun thing we can do to speed things up. Instead of just subtracting $1$ repeatedly, we can use a sort of binary search instead. That is, we can first create cubes containing powers of two (in practice we can just create them on the fly as needed). Then, given a number $r > 1$, we first find the smallest power of two such that $r - 2^k$ is less than $1$ (by computing $r - 1$, then $r - 2$, then $r - 4$, then $r - 8$, … and checking each time until we find the first $2^k$ such that $r - 2^k < 1$), and then we do a binary search on the interval $[r - 2^k, r - 2^{k-1}]$ to find the smallest $m$ such that $r - m < 1$ (in practice this just means adding or subtracting the next smaller power of two at each step, depending on whether the previous result was less than or greater than $1$). Not counting the operations needed to create the cubes with the powers of two in the first place (since we can reuse them many times, and in any case it takes only one operation per power of two), this takes about $2 \log_2 r$ addition and subtraction operations. One might worry that this would be slightly slower for small values of $r$; however, I think (but have not fully worked out the details) that it will actually never require more operations than the naive subtraction method; I will leave this as an exercise. Of course, for larger $r$ this is clearly going to be a big win, since $2 \log_2 r$ is much smaller than $r$.
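The gallop-then-binary-search idea can be sketched as follows (Python; it finds the smallest $m$ with $r - m < 1$, which for $r \geq 1$ is just $\lfloor r \rfloor$, using only subtractions and comparisons; the function name is mine):

```python
from fractions import Fraction

def floor_by_search(r):
    """Find the smallest integer m with r - m < 1 using only subtraction
    and comparison: gallop through powers of two, then binary search."""
    assert r >= 1
    k = 1
    while r - k >= 1:      # galloping phase: find the first 2^k with r - 2^k < 1
        k *= 2
    lo, hi = k // 2, k     # invariant: r - hi < 1, and lo is too small
    while hi - lo > 1:     # binary search phase
        mid = (lo + hi) // 2
        if r - mid < 1:
            hi = mid
        else:
            lo = mid
    return hi

print(floor_by_search(Fraction(35, 11)))  # → 3
```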

Of course, if the wizard had provided a machine that could perform a “floor” operation, we could make this even more efficient: instead of subtracting $1$ until finding a result less than $1$, we could just compute $r - \lfloor r \rfloor$. This is like being able to jump as far up and to the left as possible in the Calkin-Wilf tree using only two operations. (Unsurprisingly, the floor function plays a key role in the algorithm for generating the Calkin-Wilf sequence.) I actually had this in the original version of the puzzle, but took it out when I realized that it was not necessary, and slightly more interesting to do without!

Several commenters mentioned using the Stern-Brocot tree to search for the secret rational. It’s probably a topic for another blog post, but briefly, the idea is to keep track of four integers $p_1$, $q_1$, $p_2$, and $q_2$ representing the rational numbers $p_1/q_1$ and $p_2/q_2$. We start with $p_1 = 0$ and $q_1 = 1$ (representing $0/1 = 0$) and $p_2 = 1$, $q_2 = 0$ ($1/0$, representing “infinity”). We maintain the invariant that $p_1/q_1 < r < p_2/q_2$, that is, we maintain $p_1/q_1$ and $p_2/q_2$ as the endpoints of an interval that contains the secret rational $r$. At each step we compute the mediant $a/b = (p_1 + p_2)/(q_1 + q_2)$, which is guaranteed to lie in between $p_1/q_1$ and $p_2/q_2$ (exercise: prove this!), and check whether it is equal to the secret rational. If not, we either set $p_1/q_1 = a/b$ or $p_2/q_2 = a/b$, depending on whether the secret rational is greater or less than $a/b$, respectively. Unlike a simple binary search (which can only find rationals with a denominator that is a power of two in finite time), this is guaranteed to terminate in a finite amount of time; every rational can be obtained after a finite number of successive steps of taking the mediant.
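The mediant search looks like this in Python (a sketch; `stern_brocot_search` is my name, and it returns the number of mediants taken, each of which costs the wizard several cube operations):

```python
from fractions import Fraction

def stern_brocot_search(r):
    """Find r by repeatedly taking mediants, narrowing the interval
    (p1/q1, p2/q2) that contains it; returns the number of mediants."""
    p1, q1 = 0, 1              # left endpoint 0/1
    p2, q2 = 1, 0              # right endpoint 1/0, i.e. "infinity"
    steps = 0
    while True:
        a, b = p1 + p2, q1 + q2    # the mediant
        steps += 1
        m = Fraction(a, b)
        if m == r:
            return steps
        elif m < r:
            p1, q1 = a, b          # r lies in the right half
        else:
            p2, q2 = a, b          # r lies in the left half

print(stern_brocot_search(Fraction(35, 11)))  # → 10
```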

So how long does it take? It turns out that the Stern-Brocot tree and the Calkin-Wilf tree have all the same rationals on each level, but in a different order¹, so the two methods are easy to compare. The proposed Stern-Brocot method moves down the tree one level each step, and it needs five operations for each step (two additions to form the mediant, one division to turn the pair of integers representing the mediant into a rational number, and then two comparisons to find out whether the mediant is less than, equal to, or greater than the target number). Unless there is a more clever way to do this that I missed, it seems my method is a clear win: for a rational on level $k$ of the Stern-Brocot or Calkin-Wilf trees, the iterated mediant algorithm always needs exactly $5k$ operations, whereas for my algorithm $4k$ is only a worst case (for a ratio of consecutive Fibonacci numbers), and in many cases it does better (much better for rationals with large entries in their continued fraction expansion, which we can skip past in logarithmic time).

¹ Each is a bit-reversal permutation of the other.