Traditionally, computer scientists use a~variety of computational models
as formalisms in which their algorithms are stated. If we were studying
-NP-completeness, we could safely assume that all the models are equivalent,
+NP-complete\-ness, we could safely assume that all the models are equivalent,
possibly up to a~polynomial slowdown, which is negligible. In our case, the
differences between good and not-so-good algorithms are on a~much smaller
scale. In this chapter, we will replace the usual ``tape measure'' by a~micrometer,
\:Both restrictions at once.
\endlist
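+\rem
+To get a~feeling for these restrictions, consider a~few concrete instructions:
+addition, comparison, and shifts by an~arbitrary number of bits can be computed
+by constant-depth polynomial-size circuits, so they are available even on the
+${\rm AC}^0$-RAM, while multiplication is a~standard Word-RAM instruction which
+provably lies outside~${\rm AC}^0$.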
-Thorup discusses the usual techniques employed by RAM algorithms in~\cite{thorup:aczero}
+Thorup \cite{thorup:aczero} discusses the usual techniques employed by RAM algorithms
and he shows that they work on both Word-RAM and ${\rm AC}^0$-RAM, but the combination
of the two restrictions is too weak. On the other hand, the intersection of~${\rm AC}^0$
with the instruction set of modern processors is already strong enough (e.g., when we
algorithms, but the quadratic bound is good enough for our purposes.}
\qed
-\FIXME{Add references, especially to the unbounded parallelism remark.}
-
\thm
Every program for the PM running in polynomial time can be translated to a~program
computing the same on the Word-RAM with only $\O(1)$ slowdown.
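+\rem
+The polynomial time bound is what makes such a~translation possible: the program
+can touch only polynomially many memory cells, so if the Word-RAM represents every
+cell of the PM by a~constant number of consecutive words in its memory, each pointer
+becomes an~index which fits in a~single machine word (under the usual assumption
+that the word size is at least logarithmic in the size of the input), and every
+PM operation then translates to $\O(1)$ RAM instructions.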
techniques of design and analysis of randomized algorithms apply (see for
example Motwani and Raghavan~\cite{motwani:randalg}).
-\FIXME{Consult sources. Does it make more sense to generate random words at once on the RAM?}
-
\rem
There is one more interesting machine: the \df{Immutable Pointer Machine} (see
the description of LISP machines in \cite{benamram:pm}). It differs from the
of graph encodings:
\defn
-A~\df{canonical encoding} of a~given labeled graph represented by adjancency lists
+A~\df{canonical encoding} of a~given labeled graph represented by adjacency lists
-is obtained by running the depth-first search on the graph and recording its traces.
+is obtained by running a~depth-first search on the graph and recording its traces.
We start with an~empty encoding. When we enter
a~vertex, we assign an~identifier to it (again using a~yardstick to represent numbers)
the \df{actual graphs} in~$\C$ to the generic graphs in~$\cal G$. This gives us the following
theorem:
-\thmn{Batched topological computations, Buchsbaum et al.~\cite{buchsbaum:verify}}\id{topothm}%
+\thmn{Topological computations, Buchsbaum et al.~\cite{buchsbaum:verify}}\id{topothm}%
Suppose that we have a~topological graph computation~$\cal T$ that can be performed in time
$T(k)$ for graphs on $k$~vertices. Then we can run~$\cal T$ on a~collection~$\C$
of labeled graphs on~$k$ vertices in time $\O(\Vert\C\Vert + (k+s)^{k(k+2)}\cdot (T(k)+k^2))$,
both in linear space.
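+\rem
+To judge the size of the additive term, rewrite it as $2^{k(k+2)\log(k+s)}\cdot(T(k)+k^2)$.
+It does not depend on the size of the collection at all, so for small~$k$ it is easily
+dominated by the $\O(\Vert\C\Vert)$ part: for example, it is at most~$\Vert\C\Vert$
+whenever $k(k+2)\log(k+s) + \log(T(k)+k^2) \le \log\Vert\C\Vert$.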
-The Fusion trees themselves have very limited use in graph algorithms, but the
+Fusion trees themselves have very limited use in graph algorithms, but the
-principles behind them are ubiquitious in many other data structures and these
+principles behind them are ubiquitous in many other data structures and these
will serve us well and often. We are going to build the theory of Q-heaps in
Section \ref{qheaps}, which will later lead to a~linear-time MST algorithm
for arbitrary integer weights in Section \ref{iteralg}. Other such structures
$x_0,\ldots,x_{d-1}$.
\para
-If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisty
+If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisfy
the condition $(b+1)d\le W$.
-By using multiple-precision arithmetics, we can encode all vectors satisfying $bd=\O(W)$.
+By using multiple-precision arithmetic, we can encode all vectors satisfying $bd=\O(W)$.
We will now describe how to translate simple vector manipulations to sequences of $\O(1)$ RAM operations
-in an~encompassing algorithm, so we usually can spend a~lot of time on the precalculation
+in an~encompassing algorithm, so we can usually spend a~lot of time on the precalculation
of constants performed once during algorithm startup.
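+\rem
+As a~concrete illustration, suppose that $W=64$ and that the fields are stored with
+$x_0$ in the lowest bits, each field padded by a~single zero bit above it. We can
+then choose $b=7$ and $d=8$, since $(7+1)\cdot 8\le 64$. The $i$-th field of
+a~vector with code~$v$ is recovered as
+$$x_i = \lfloor v/2^{(b+1)i} \rfloor \bmod 2^b,$$
+i.e., by a~right shift followed by masking with $2^b-1$. Similarly, replicating
+a~value $x<2^b$ to all $d$~fields is a~multiplication of~$x$ by the constant
+$\sum_{i=0}^{d-1} 2^{(b+1)i}$, a~typical example of a~constant which we precalculate
+at startup.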
+\rem
+The history of combining arithmetic and logical operations to obtain fast programs for various
+interesting functions is murky. Many of the ``bit tricks'' we have described have been
+discovered independently by numerous people in the early days of digital computing.
+Since then, they have become part of the computer science folklore. Probably the
+earliest documented occurrence is a~1972 memo of the MIT Artificial Intelligence
+Lab \cite{hakmem}. However, until the work of Fredman and Willard, nobody seemed to
+realize their full potential.
+
%--------------------------------------------------------------------------------
\section{Q-Heaps}\id{qheaps}%
A~\df{compressed trie} is obtained from the trie by removing the vertices of outdegree~1
except for the root and marked vertices.
-Whereever is a~directed path whose internal vertices have outdegree~1 and they carry
-no mark, we replace this path by a~single edge labeled with the contatenation
+Wherever there is a~directed path whose internal vertices have outdegree~1 and carry
+no mark, we replace this path by a~single edge labeled with the concatenation
of the original edges' labels.
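+\rem
+For a~small example, consider the strings 0000, 0010 and 1101. In the compressed
+trie, the root keeps both of its outgoing edges: one of them, labeled 1101, leads
+directly to the leaf of the third string, while the other, labeled 00, leads to
+a~branching vertex at letter depth~2, whose edges labeled 00 and~10 end in the
+leaves of 0000 and 0010, respectively.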
In both kinds of tries, we order the outgoing edges of every vertex by their labels
of the trie which is compact enough.
\lemma\id{citree}%
-The trie is uniquely determined by the order of the guides~$g_1,\ldots,g_{n-1}$.
+The compressed trie is uniquely determined by the order of the guides~$g_1,\ldots,g_{n-1}$.
\proof
We already know that the letter depths of the trie vertices are exactly