\section{Ackermann's function and its inverse}\id{ackersec}%
Ackermann's function is an~extremely quickly growing function which was
introduced by Ackermann \cite{ackermann:function} in the context of
computability theory. Its original purpose was to demonstrate that not every recursive
function is also primitive recursive. At the first sight, it does not
seem related to efficient algorithms at all. Its various inverses however occur in
analyses of various algorithms and mathematical structures surprisingly often:
We meet them in Section \ref{classalg} in the time complexity of the Disjoint Set Union
data structure and also in the best known upper bound on the decision tree
complexity of minimum spanning trees in Section \ref{optalgsect}. Another
important application is in the complexity of Davenport-Schinzel sequences (see
Klazar's survey \cite{klazar:gdss}), but as far as we know, these are not otherwise
related to the topic of our study.

Various sources differ in the exact definition of both Ackermann's
function and its inverse, but most of the differences are in factors that
are negligible in the light of the gigantic asymptotic growth of the function.
We will use the definition by double recursion given by Tarjan \cite{tarjan:setunion},
which is predominant in the literature on graph algorithms:
\defn\id{ackerdef}%
\df{Ackermann's function} $A(x,y)$ is a~function on non-negative integers defined as follows:
$$\eqalign{
A(0,y) &:= 2y, \cr
A(x,0) &:= 0, \cr
A(x,1) &:= 2 \quad \hbox{for $x\ge 1$}, \cr
A(x,y) &:= A(x-1, A(x,y-1)) \quad \hbox{for $x\ge 1$, $y\ge 2$}. \cr
}$$
The functions $A(x,\cdot)$ are called the \df{rows} of $A(x,y)$, similarly $A(\cdot,y)$ are
its \df{columns.}

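It is easy to check by induction that each row is the previous row iterated:
for every $x\ge 1$ and $y\ge 1$, the value $A(x,y)$ is the $(y-1)$-st iterate
of the function $A(x-1,\cdot)$ applied to the number~2. This is what makes
the growth of the rows so explosive.
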
Sometimes, a~single-parameter version of this function is also used. It is defined
as the diagonal of the previous function, i.e., $A(x):=A(x,x)$.

Let us evaluate $A(x,y)$ at a~few points to get an idea of how quickly it grows:
$$\eqalign{
A(x,2) &= A(x-1, A(x,1)) = A(x-1,2) = \ldots = A(0,2) = 4 \quad \hbox{for $x\ge 1$}, \cr
A(1,y) &= A(0, A(1,y-1)) = 2A(1,y-1) = 2^{y-1}A(1,1) = 2^y, \cr
A(2,y) &= A(1, A(2,y-1)) = 2^{A(2,y-1)} = 2\tower y \hbox{~~(the tower of exponentials),} \cr
A(3,y) &= \hbox{the tower function iterated $y$~times,} \cr
A(4,3) &= A(3,A(4,2)) = A(3,4) = A(2,A(3,3)) = A(2,A(2,A(3,2))) = \cr
&= A(2,A(2,4)) = 2\tower(2\tower 4) = 2\tower 65536. \cr
}$$
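In particular, the diagonal version explodes even faster. A~short calculation
based on the values above gives:
$$\eqalign{
A(2) &= A(2,2) = 4, \cr
A(3) &= A(3,3) = A(2,A(3,2)) = A(2,4) = 2\tower 4 = 65536, \cr
A(4) &= A(4,4) = A(3,A(4,3)) = A(3,2\tower 65536). \cr
}$$
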
Three functions related to the inverse of the function~$A$ are usually considered:
\defn\id{ackerinv}%
The \df{row inverse} $a(x,n)$ of Ackermann's function is defined by:
$$
a(x,n) := \min\{ y \mid A(x,y) > \log n \}.
$$
The \df{diagonal inverse} $a(n)$ is defined by:
$$
a(n) := \min\{ x \mid A(x) > \log n \}.
$$
The \df{alpha function} $\alpha(m,n)$ is defined for $m\ge n$ by:
$$
\alpha(m,n) := \min\{ x\ge 1 \mid A(x,4\lceil m/n\rceil) > \log n \}.
$$
Note that $a(n)$ is asymptotically smaller than $a(x,n)$ for any fixed~$x$,
since the diagonal of~$A$ grows far faster than any single row.
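To get a~feeling for how slowly these inverses grow, we can plug in the rows
computed above: as $A(1,y)=2^y$, the first row inverse is
$a(1,n) = \min\{ y \mid 2^y > \log n \}$, which is roughly $\log\log n$;
similarly $A(2,y)=2\tower y$ makes $a(2,n)$ equal to the iterated logarithm
$\log^* n$ up to a~small additive constant.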
\obs
It is easy to verify that all the rows are strictly increasing and so are all
columns, except the first three columns, which are constant. Therefore for a~fixed~$n$,
$\alpha(m,n)$ is maximized at $m=n$. So $\alpha(m,n) \le 3$ when $\log n < A(3,4)$,
which covers all values of~$n$ that are likely to occur in practice.
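To put this in perspective, $A(3,4) = A(2,A(3,3)) = A(2,65536) = 2\tower 65536$,
so $\alpha(m,n)$ can exceed~3 only when $\log n \ge 2\tower 65536$, a~bound no
input of any realistic size ever reaches.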
\lemma
$\alpha(m,n) \le a(n)+1$.
\proof
$A(x,4\lceil m/n\rceil) \ge A(x,4) = A(x-1,A(x,3)) \ge A(x-1,x-1)$, so $A(x,4\lceil m/n\rceil)$
rises above $\log n$ no later than $A(x-1,x-1)$ does.
\qed
\lemma\id{alphaconst}%