Traditionally, computer scientists use a~variety of computational models
as a~formalism in which their algorithms are stated. If we were studying
NP-complete\-ness, we could safely assume that all the models are equivalent,
possibly up to a~polynomial slowdown, which is negligible. In our case, the
differences between good and not-so-good algorithms are on a~much smaller
scale. In this chapter, we will replace the usual ``tape measure'' by a~micrometer,
\:Both restrictions at once.
\endlist
Thorup \cite{thorup:aczero} discusses the usual techniques employed by RAM algorithms
and he shows that they work on both Word-RAM and ${\rm AC}^0$-RAM, but the combination
of the two restrictions is too weak. On the other hand, the intersection of~${\rm AC}^0$
with the instruction set of modern processors is already strong enough (e.g., when we
the \df{actual graphs} in~$\C$ to the generic graphs in~$\cal G$. This gives us the following
theorem:
\thmn{Topological computations, Buchsbaum et al.~\cite{buchsbaum:verify}}\id{topothm}%
Suppose that we have a~topological graph computation~$\cal T$ that can be performed in time
$T(k)$ for graphs on $k$~vertices. Then we can run~$\cal T$ on a~collection~$\C$
of labeled graphs on~$k$ vertices in time $\O(\Vert\C\Vert + (k+s)^{k(k+2)}\cdot (T(k)+k^2))$,
of the trie which is compact enough.
\lemma\id{citree}%
The compressed trie is uniquely determined by the order of the guides~$g_1,\ldots,g_{n-1}$.
\proof
We already know that the letter depths of the trie vertices are exactly