My last post proposed a claim:
The binary algorithm is the most efficient way to build n using only doubling and incrementing steps. That is, any other way to build n by doubling and incrementing uses an equal or greater number of steps than the binary algorithm.
Someone posted a very nice, relatively short proof in the comments, which was quite different from the proof I had in mind. Maybe I’ll write about it in another post, but for now you can go read it for yourself.
In this post I’d like to present the proof I had in mind, which has a more constructive/computational flavor. Let’s use the digit 1 to represent an increment step, and the digit 2 to represent a doubling step. We can use a sequence of these digits to compactly represent a sequence of steps. Each sequence s of 1’s and 2’s thus corresponds to a natural number, namely, the one we get if we start from zero and execute all the steps from left to right. Let’s denote this number by N(s). So, for example, N(1221121) = 13, since starting from 0, we first increment (yielding 1), then double twice (yielding 4), increment twice (6), double (12), and finally increment (13). Also, denote the length of s by |s|. The length of s is the same as the number of steps it represents. For example, |1221121| = 7.
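To make the notation concrete, here is a minimal Python sketch of the evaluation just described; the function name N simply mirrors the notation above, and the whole thing is only meant as an illustration.

```python
def N(s: str) -> int:
    """Start from 0 and execute the steps left to right:
    '1' means increment, '2' means double."""
    n = 0
    for step in s:
        n = n + 1 if step == "1" else 2 * n
    return n

print(N("1221121"))    # 13
print(len("1221121"))  # 7, i.e. |1221121|
```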
Now, it turns out that 1221121 is not the most efficient way to generate 13. In looking at this and other examples of such non-optimal sequences, what stuck out to me is that they all seemed to contain consecutive increment operations. For example, in the case of 1221121, consecutive 1’s are used in the middle to go from 4 to 6, after doubling 2 to get 4. But if we had just incremented once first (going from 2 to 3), then doubling would get us to 6 right away; the effect of the one increment is “magnified” by the subsequent doubling. Formally, we can say that 211 (a double followed by two increments) can always be replaced by 12 (an increment followed by a double), because 2n + 1 + 1 = 2(n + 1).
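As a quick sanity check of that identity, we can run both fragments from an arbitrary starting value. The little run helper below is just for illustration:

```python
def run(n: int, steps: str) -> int:
    """Apply a sequence of steps ('1' = increment, '2' = double) to n."""
    for step in steps:
        n = n + 1 if step == "1" else 2 * n
    return n

# "211" (double, then increment twice) and "12" (increment, then double)
# always land on the same value, since 2n + 1 + 1 = 2(n + 1),
# but "12" uses one step fewer.
for n in range(20):
    assert run(n, "211") == run(n, "12")
```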
What if we keep doing this operation—replacing 211 with 12—as much as we can? It turns out that by doing this we can always transform any sequence into an equivalent one of the same length or shorter which has no consecutive 1’s. This is the idea behind the first lemma.
Lemma 1. Given any sequence s, there exists a sequence s' such that N(s') = N(s), |s'| ≤ |s|, and s' has no consecutive 1’s.
In other words, for any sequence we can always find an equivalent, shorter (or at least not longer) sequence with no consecutive 1’s.
Proof. By induction on the number of 1 symbols in s.
- Base case: if s has no consecutive 1’s, then s itself satisfies the conditions of the lemma, and certainly an s with zero 1’s has no consecutive 1’s.
- Let k ≥ 1 and suppose we know that for any sequence with exactly k - 1 1’s there exists an equivalent sequence with the stated properties. Now suppose we have a sequence s with exactly k 1’s. If none of them are adjacent then we are done, since s itself has the required properties. Otherwise, consider the leftmost pair of consecutive 1’s. There are two cases to consider:
- If s begins with two 1’s, this means that the procedure for computing N(s) starts by incrementing twice from 0 to reach 2. Let s' be the sequence obtained from s by replacing the initial 11 with 12. An increment followed by a double also yields 2, and the rest of s' is identical to the rest of s, so N(s') = N(s). But s' has one fewer 1 than s, so the induction hypothesis tells us there must be some equivalent s'' no longer than s' with no consecutive 1’s. This s'' is the one we are looking for; we need only observe that N(s'') = N(s') = N(s) and |s''| ≤ |s'| = |s|.
- Otherwise, the leftmost pair of 1’s must occur immediately following a 2. In that case, as argued before, we can replace 211 by 12, which yields an equivalent but shorter sequence and reduces the number of 1’s by one. Again, the induction hypothesis now implies that we can find an equivalent sequence with no repeated 1’s which is no longer.
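Since the proof is constructive, the case analysis above translates directly into code. Here is one possible sketch (the helper name normalize is just for illustration): find the leftmost pair of consecutive 1’s, rewrite a leading 11 to 12, otherwise rewrite the surrounding 211 to 12, and repeat until no pair remains.

```python
def N(s: str) -> int:
    n = 0
    for step in s:
        n = n + 1 if step == "1" else 2 * n
    return n

def normalize(s: str) -> str:
    """Eliminate consecutive 1's as in the proof of Lemma 1."""
    while True:
        i = s.find("11")
        if i == -1:
            return s                          # no consecutive 1's left
        if i == 0:
            s = "12" + s[2:]                  # leading 11 -> 12 (same value, same length)
        else:
            # the leftmost pair is preceded by a 2, so rewrite 211 -> 12
            s = s[:i - 1] + "12" + s[i + 2:]  # same value, one step shorter

s = "1221121"
t = normalize(s)
print(t)                                       # 121221
assert N(t) == N(s) and len(t) <= len(s) and "11" not in t
```

Each pass through the loop removes exactly one 1, which is why it terminates.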
Let’s look at an example. Suppose we start with the most inefficient way possible to represent 13, namely, 13 consecutive increment steps. Then the lemma above says we can find an equivalent, shorter sequence with no consecutive 1’s. Moreover, the proof is actually constructive, that is, it does not merely assert that such a sequence exists, but gives us a concrete way to compute it. The proof outlines a recursive rewriting process which we can visualize as follows:
The red underline shows the part that is going to be rewritten at each step. Notice how the length either stays the same or decreases at each step, and how the number of 1’s decreases at each step. In fact, either a 1 gets deleted and the number of 2’s stays the same, or a 1 is replaced by a 2 when there are two 1’s at the leftmost position. This is the only way new 2’s are generated. So 2’s are “born” at the left end from a pair of 1’s, and then they spend the rest of their life slowly “migrating” to the right.
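To watch the process in plain text, here is a small sketch (again purely illustrative) that prints every intermediate sequence, starting from thirteen increment steps:

```python
def rewrite_trace(s: str) -> None:
    """Print each intermediate sequence produced by the Lemma 1 rewriting."""
    print(s)
    while "11" in s:
        i = s.find("11")
        s = "12" + s[2:] if i == 0 else s[:i - 1] + "12" + s[i + 2:]
        print(s)

rewrite_trace("1" * 13)   # thirteen consecutive increment steps, representing 13
```

Each printed line is either the same length as the previous one (when a leading 11 becomes 12) or one step shorter (when a 211 becomes 12), and the trace ends at 121221.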
Let’s try starting with a different representation of 13:
Curiously, it seems that the process ends with the same sequence (121221) even though we started with a different sequence that did not occur anywhere during the process generated by starting with thirteen 1’s. Could this just be a coincidence?
Well, the fact that I’m even asking the question kind of gives away the answer: it’s no coincidence. In fact, for a given n, if you start with any sequence representing n and run this process, you will always end up with the same sequence. And not only will this particular process always yield the same sequence, but there is only one possible sequence it could yield:
Lemma 2. For every natural number n, there is a unique sequence s such that s has no consecutive 1’s and N(s) = n.
Proof. By (strong) induction on n.
- In the base case (n = 0), there is only one sequence which represents 0 at all, namely, the empty sequence (and it has no consecutive 1’s). (Here and throughout, we can ignore sequences that begin with doubling steps, which merely double 0 to no effect: deleting such useless leading steps never changes the number represented and only makes a sequence shorter.)
- Now pick some n > 0 and assume that consecutive-1-less representations are unique for all m < n. Suppose s and t are two sequences with no consecutive 1’s such that N(s) = N(t) = n. Our goal is to show that in fact s = t.
- If s and t both end with the same symbol, then removing it yields two consecutive-1-less sequences representing some m < n (either n - 1 or n/2, depending on the symbol removed); by the induction hypothesis they must be the same, and hence s = t as well.
- Otherwise, suppose without loss of generality that s ends with 1 and t ends with 2; we will show that this case is actually impossible. N(t) must be even, since it ends with a doubling step. If s ends with a doubling step followed by an increment step (or if it consists solely of an increment), then N(s) would be odd, but this is impossible since N(s) = N(t) and N(t) is even. Hence s must end in consecutive 1’s; but this is also impossible, since we assumed s had no consecutive 1’s.
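Both Lemma 2 and the convergence claim above can be spot-checked exhaustively for small numbers. The sketch below (reusing the illustrative N and normalize helpers, and setting aside sequences that begin by pointlessly doubling 0) verifies, for every n up to 13, that there is exactly one sequence with no consecutive 1’s representing n, and that every representation of n rewrites to it:

```python
from itertools import product

def N(s):
    n = 0
    for step in s:
        n = n + 1 if step == "1" else 2 * n
    return n

def normalize(s):
    while "11" in s:
        i = s.find("11")
        s = "12" + s[2:] if i == 0 else s[:i - 1] + "12" + s[i + 2:]
    return s

LIMIT = 13
reps = {n: set() for n in range(LIMIT + 1)}   # consecutive-1-less representations
reps[0].add("")                               # the empty sequence represents 0

# A sequence starting with an increment is worth at least its own length,
# so lengths up to LIMIT cover every such representation of n <= LIMIT.
for length in range(1, LIMIT + 1):
    for s in map("".join, product("12", repeat=length)):
        if s[0] == "1" and N(s) <= LIMIT and "11" not in s:
            reps[N(s)].add(s)

# Lemma 2: exactly one consecutive-1-less representation for each n.
assert all(len(r) == 1 for r in reps.values())

# The Lemma 1 rewriting always lands on that unique representation.
for length in range(1, LIMIT + 1):
    for s in map("".join, product("12", repeat=length)):
        if s[0] == "1" and N(s) <= LIMIT:
            assert normalize(s) in reps[N(s)]
```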
Finally, putting the pieces together: notice that the sequence generated from the binary representation of n has no consecutive 1’s, since the leading bit generates a single 1 and each subsequent bit generates a 2, which is optionally followed by a 1. Since by Lemma 2 we now know that such representations are unique, the process explained in the proof of Lemma 1 must actually result in this same unique sequence corresponding to the binary representation of n. Since this process results in a sequence which is no longer than the starting sequence, this means that every sequence of steps representing n must be at least as long as the binary sequence. Hence, the binary sequence is the most efficient!
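As a final spot check, we can build the binary algorithm’s step sequence directly from the binary representation of n (a 1 for the leading bit, then a 2, optionally followed by a 1, for each later bit) and confirm by brute force that no sequence of 1’s and 2’s reaches n in fewer steps. This is only a small illustrative test, not part of the proof:

```python
from itertools import product

def N(s):
    n = 0
    for step in s:
        n = n + 1 if step == "1" else 2 * n
    return n

def binary_sequence(n: int) -> str:
    """A 1 for the leading bit, then '2' (plus '1' if the bit is set) for each later bit."""
    bits = bin(n)[2:]
    return "1" + "".join("2" + ("1" if b == "1" else "") for b in bits[1:])

for n in range(1, 32):
    b = binary_sequence(n)
    assert N(b) == n and "11" not in b
    # the shortest sequence of 1's and 2's reaching n is exactly as long as b
    shortest = min(len(s)
                   for length in range(1, len(b) + 1)
                   for s in map("".join, product("12", repeat=length))
                   if N(s) == n)
    assert shortest == len(b)
```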