\n{$G/e$}{multigraph contraction \[contract]}
\n{$G.e$}{simple graph contraction \[simpcont]}
\n{$G/X$, $G.X$}{contraction by a~set $X$ of vertices or edges \[setcont]}
\n{$f[X]$}{function applied to a set: $f[X]:=\{ f(x) \mid x\in X \}$}
\n{$f[e]$}{as edges are two-element sets, $f[e]$ maps both endpoints of an edge~$e$}
\n{$\varrho({\cal C})$}{edge density of a graph class~$\cal C$ \[density]}
\n{$\deg_G(v)$}{degree of vertex~$v$ in graph~$G$; we omit $G$ if it is clear from context}
\n{$\log n$}{a binary logarithm of the number~$n$}
\n{$f^{(i)}$}{function~$f$ iterated $i$~times: $f^{(0)}(x):=x$, $f^{(i+1)}(x):=f(f^{(i)}(x))$}
\n{$2\tower n$}{the tower function (iterated exponential): $2\tower 0:=1$, $2\tower (n+1):=2^{2\tower n}$}
\n{$\log^* n$}{the iterated logarithm: $\log^*n := \min\{i \mid \log^{(i)}n \le 1\}$; the inverse of~$2\tower n$}
\n{$\beta(m,n)$}{$\beta(m,n) := \min\{i \mid \log^{(i)}n \le m/n \}$ \[itjarthm]}
\n{$W$}{word size of the RAM \[wordsize]}
\n{$\(x)$}{number~$x\in{\bb N}$ written in binary \[bitnota]}
\n{$\(x)_b$}{$\(x)$ zero-padded to exactly $b$ bits \[bitnota]}
\n{$D(m,n)$}{decision tree complexity of MSF \[decdef]}
\n{$A(x,y)$}{Ackermann's function \[ackerdef]}
\n{$A(x)$}{the diagonal of Ackermann's function \[ackerdef]}
\n{$\lambda_i(n)$}{the inverse of the $i$-th row of Ackermann's function \[ackerinv]}
\n{$\alpha(n)$}{the diagonal inverse of Ackermann's function \[ackerinv]}
\n{$\alpha(m,n)$}{$\alpha(m,n) := \min\{ x\ge 1 \mid A(x,4\lceil m/n\rceil) > \log n \}$ \[ackerinv]}
}
$$\eqalign{
V' &= (V(G) \setminus \{x,y\}) \cup \{v_e\},\quad\hbox{where $v_e$ is a new vertex,}\cr
E' &= E(G) - \{e\},\cr
M'(f) &= \{ m(v) \mid v\in M(f) \} \quad\hbox{for every $f\in E'$, and}\cr
m(v) &= \cases{v_e & \hbox{for $v=x,y$,}\cr v & \hbox{otherwise.}} \cr
}$$
produces a graph $G.e=(V',E')$ such that:
$$\eqalign{
V' &= (V(G) \setminus \{x,y\}) \cup \{v_e\},\quad\hbox{where $v_e$ is a new vertex,}\cr
E' &= \{ \{m(x),m(y)\} \mid xy\in E \land m(x)\ne m(y) \},\cr
m(v) &= \cases{v_e & \hbox{for $v=x,y$,}\cr v & \hbox{otherwise.}} \cr
}$$
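The difference between the two contractions is best seen on a~small instance; the following example follows directly from the definitions above.

\example
If $G$ is a~triangle on the vertices $x$, $y$, $z$ and $e=xy$, then both contractions
have $V' = \{v_e, z\}$, but $G/e$ contains two parallel edges $v_ez$ (the images of~$xz$
and~$yz$ under~$m$), while in $G.e$ these are suppressed to a~single edge $v_ez$.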
%--------------------------------------------------------------------------------
\section{Ackermann's function and its inverses}\id{ackersec}%
Ackermann's function is an~extremely quickly growing function which was
introduced by Ackermann \cite{ackermann:function} in the context of computability theory.
Various sources differ in the exact definition of both Ackermann's
function and its inverse, but most of the differences are in factors that
are negligible in the light of the giant asymptotic growth of the function.\foot{%
To quote Pettie \cite{pettie:onlineverify}: ``In the field of algorithms \& complexity,
Ackermann's function is rarely defined the same way twice. We would not presume to buck
such a~well-established precedent. Here is a~slight variant.''}
We will use the definition by double recursion given by Tarjan \cite{tarjan:setunion},
which is predominant in the literature on graph algorithms:
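The following recurrence is one common way to state Tarjan's double recursion, together
with the diagonal $A(x)$; the precise base cases are a~variant and may differ slightly
between sources, which (as noted above) does not affect the asymptotics:
$$\eqalign{
A(0,y) &:= 2y \quad\hbox{for $y\ge 0$,}\cr
A(x,0) &:= 0 \hbox{ and } A(x,1) := 2 \quad\hbox{for $x\ge 1$,}\cr
A(x,y) &:= A(x-1, A(x,y-1)) \quad\hbox{for $x\ge 1$, $y\ge 2$,}\cr
A(x) &:= A(x,x).\cr
}$$
Under this variant, $A(1,y)=2^y$ and $A(2,y)$ is an~exponential tower of height roughly~$y$.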
Three functions related to the inverse of the function~$A$ are usually considered:
\defn\id{ackerinv}%
The \df{$i$-th row inverse} $\lambda_i(n)$ of Ackermann's function is defined by:
$$
\lambda_i(n) := \min\{ y \mid A(i,y) > \log n \}.
$$
The \df{diagonal inverse} $\alpha(n)$ is defined by:
$$
\alpha(n) := \min\{ x \mid A(x) > \log n \}.
$$
The two-parameter \df{alpha function} $\alpha(m,n)$ is defined for $m\ge n$ by:
$$
\alpha(m,n) := \min\{ x\ge 1 \mid A(x,4\lceil m/n\rceil) > \log n \}.
$$
\example
$\lambda_1(n) = \O(\log\log n)$, $\lambda_2(n) = \O(\log^* n)$, $\lambda_3(n)$ grows even slower,
and $\alpha(n)$ is asymptotically smaller than $\lambda_i(n)$ for any fixed~$i$.
\obs
It is easy to verify that all the rows of~$A$ are strictly increasing, and so are all
columns. Consequently $\alpha(m,n)$ is non-increasing in~$m$; moreover $\alpha(m,n)\le 4$
whenever $\log n < A(4,4)$, which covers all values of~$m$ and~$n$ that are likely to
occur in practice.
\lemma
$\alpha(m,n) \le \alpha(n)+1$.
\proof
$A(x,4\lceil m/n\rceil) \ge A(x,4) = A(x-1,A(x,3)) \ge A(x-1,x-1) = A(x-1)$. Hence for
$x = \alpha(n)+1$ we get $A(x,4\lceil m/n\rceil) \ge A(\alpha(n)) > \log n$, and therefore
$\alpha(m,n) \le \alpha(n)+1$.
\qed
\lemma\id{alphaconst}%
If $i$~is a~fixed constant and $m\ge n\cdot\lambda_i(n)$, then $\alpha(m,n) \le i$.
\proof
The choice of~$m$ guarantees that $A(x,4\lceil m/n\rceil) \ge A(x,\lambda_i(n))$, which
is greater than $\log n$ for all $x \ge i$.
\qed
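This lemma is what makes $\alpha(m,n)$ useful for all but the sparsest graphs; the
following instance combines it with the bound $\lambda_2(n) = \O(\log^* n)$ stated above.

\example
For $i=2$, the lemma gives $\alpha(m,n)\le 2$ whenever $m \ge n\cdot\lambda_2(n) =
\O(n\log^* n)$, so on graphs that are even mildly dense the alpha function is bounded
by a~tiny constant.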
\endpart