+of the program, $\O(\log N)$-bit integers suffice ($N$~is the size of the program's input).
+\qed
+
+\para
+There are also \df{randomized} versions of both machines. These are equipped
+with an~additional instruction for generating a~single random bit. The standard
+techniques of design and analysis of randomized algorithms apply (see for
+example Motwani and Raghavan~\cite{motwani:randalg}).
+
+\rem
+There is one more interesting machine: the \df{Immutable Pointer Machine} (see
+the description of LISP machines in \cite{benamram:pm}). It differs from the
+ordinary PM by the inability to modify existing memory cells. Only the contents
+of the registers are allowed to change. All cell modifications thus have to
+be performed by creating a~copy of the particular cell with some fields changed.
+This in turn requires the pointers to the cell to be updated, possibly triggering
+a~cascade of further cell copies. For example, when a~node of a~binary search tree is
+updated, all nodes on the path from that node to the root have to be copied.
+
+One of the advantages of this model is that the states of the machine are
+persistent --- it is possible to return to a~previously visited state by recalling
+the $\O(1)$ values of the registers (everything else could not have changed
+since that time) and ``fork'' the computations. This corresponds to the semantics
+of pure functional languages, e.g., Haskell~\cite{jones:haskell}.
+
+Unless we are willing to accept a~logarithmic penalty in execution time and space
+(in fact, our emulation of the Word-RAM on the PM can be easily made immutable),
+the design of efficient algorithms for the immutable PM requires very different
+techniques. Therefore, we will concentrate on the imperative models instead
+and refer the interested reader to the thorough treatment of purely functional
+data structures in Okasaki's monograph~\cite{okasaki:funcds}.
+
+%--------------------------------------------------------------------------------
+
+\section{Bucket sorting and unification}\id{bucketsort}%
+
+The Contractive Bor\o{u}vka's algorithm (\ref{contbor}) needs to contract a~given
+set of edges in the current graph and then flatten the graph, all this in time $\O(m)$.
+We have deferred the technical details to this section, in which we are going to
+explain several rather general techniques based on bucket sorting.
+
+As we have already suggested in the proof of Lemma \ref{contbor}, contractions
+can be performed in linear time by building an~auxiliary graph and finding its
+connected components. We will thus take care only of the subsequent flattening.
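The contraction step can be sketched in Python as follows. This is only an illustrative model (the helper name \<contract> and the tuple representation of edges are ours, not the thesis's implementation): we build the auxiliary graph from the contracted edges, label its connected components by graph search, and renumber the endpoints of the remaining weighted edges, dropping the loops immediately.

```python
from collections import defaultdict

def contract(n, edges, contracted):
    """Contract the edge set `contracted` in a graph on vertices 0..n-1.

    Builds the auxiliary graph containing only the contracted edges,
    labels its connected components by graph search, and renumbers the
    endpoints of the weighted edges; loops are dropped immediately."""
    adj = defaultdict(list)
    for u, v in contracted:
        adj[u].append(v)
        adj[v].append(u)
    comp = [-1] * n
    comps = 0
    for s in range(n):
        if comp[s] == -1:
            comp[s] = comps
            stack = [s]
            while stack:            # flood-fill one component
                u = stack.pop()
                for v in adj[u]:
                    if comp[v] == -1:
                        comp[v] = comps
                        stack.append(v)
            comps += 1
    new_edges = [(comp[u], comp[v], w)
                 for u, v, w in edges if comp[u] != comp[v]]
    return comps, new_edges
```

Everything runs in $\O(n+m)$ time, as each vertex and edge is touched a constant number of times.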
+
+\paran{Flattening on RAM}%
+On the RAM, we can view the edges as ordered pairs of vertex identifiers with the
+smaller of the identifiers placed first and sort them lexicographically. This brings
+parallel edges together, so that a~simple linear scan suffices to find each bunch
+of parallel edges and remove all but the lightest one.
+Lexicographic sorting of pairs can be accomplished in linear time by a~two-pass
+bucket sort with $n$~buckets corresponding to the vertex identifiers.
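A~hedged Python sketch of this flattening step (the function name and the edge representation are ours): two stable bucket-sort passes, minor key first, yield the lexicographic order, after which one scan keeps the lightest edge of each parallel bunch.

```python
def flatten(n, edges):
    """Remove loops and parallel edges in O(n+m): normalize each edge so
    the smaller endpoint comes first, sort lexicographically by a two-pass
    bucket sort, then keep only the lightest edge of each parallel bunch."""
    norm = [(min(u, v), max(u, v), w) for u, v, w in edges if u != v]
    for key in (1, 0):               # stable passes: minor key first
        buckets = [[] for _ in range(n)]
        for e in norm:
            buckets[e[key]].append(e)
        norm = [e for b in buckets for e in b]
    out = []
    for e in norm:
        if out and out[-1][:2] == e[:2]:
            if e[2] < out[-1][2]:
                out[-1] = e          # lighter parallel edge wins
        else:
            out.append(e)
    return out
```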
+
+However, there is a~catch in this. Suppose that we use the standard representation
+of graphs by adjacency lists whose heads are stored in an array indexed by vertex
+identifiers. When we contract and flatten the graph, the number of vertices decreases,
+but if we inherit the original vertex identifiers, the arrays will still have the
+same size. We could then waste a~super-linear amount of time by scanning the increasingly
+sparse arrays, most of the time skipping unused entries.
+
+To avoid this problem, we have to renumber the vertices after each contraction to component
+identifiers from the auxiliary graph and create a~new vertex array. This helps
+keep the size of the representation of the graph linear with respect to its current
+size.
+
+\paran{Flattening on PM}%
+The pointer representation of graphs does not suffer from sparsity since the vertices
+are always identified by pointers to per-vertex structures. Each such structure
+then contains all attributes associated with the vertex, including the head of its
+adjacency list. However, we have to find a~way to perform bucket sorting
+without indexing arrays.
+
+We will keep a~list of the per-vertex structures that defines the order of~vertices.
+Each such structure will be endowed with a~pointer to the head of the list of items in
+the corresponding bucket. Inserting an~edge into a~bucket can then be done in constant time
+and scanning the contents of all~$n$ buckets takes $\O(n+m)$ time.
+
+At last, we must not forget that while it was easy to \df{normalize} the pairs on the RAM
+by putting the smaller identifier first, this fails on the PM because we can directly
+compare the identifiers only for equality. We can work around this again by bucket-sorting:
+we sort the multiset $\{ (x,i) \mid \hbox{$x$~occurs in the $i$-th pair} \}$ on~$x$.
+Then we reset all pairs and re-insert the values back in their increasing order.
+This is also $\O(n+m)$.
+
+\paran{Tree isomorphism}%
+Another nice example of pointer-based radix sorting is a~Pointer Machine algorithm for
+deciding whether two rooted trees are isomorphic. Let us assume for a~moment that
+the outdegree of each vertex is at most a~fixed constant~$k$. We begin by sorting the subtrees
+of both trees by their depth. This can be accomplished by running depth-first search to calculate
+the depths and bucket-sorting them with $n$~buckets afterwards.
+
+Then we proceed from depth~0 to the maximum depth and for each of them we identify
+the isomorphism equivalence classes of subtrees of that particular depth. We will assign
+unique identifiers to all such classes; at most~$n+1$ of them are needed as there are
+$n+1$~subtrees in the tree (including the empty subtree). As the PM does not
+have numbers as a~first-class type, we create a~``\df{yardstick}'' --- a~list
+of $n+1$~distinct items --- and we use pointers to these ``ticks'' as identifiers.
+When we are done, isomorphism of the whole trees can be decided by comparing the
+identifiers assigned to their roots.
+
+Suppose that classes of depths $0,\ldots,d-1$ are already computed and we want
+to identify those of depth~$d$. We will denote their count by~$n_d$. We take
+a~root of every such tree and label it with an~ordered $k$-tuple of identifiers
+of its subtrees; when it has fewer than $k$ sons, we pad the tuple with empty
+subtrees. Tuples corresponding to isomorphic subtrees are identical up to
+reordering of elements. We therefore sort the codes inside each tuple and then
+sort the tuples, which brings the equivalent tuples together.
+
+The first sort (inside the tuples) would be easy on the RAM, but on the PM we
+have to use the normalization trick mentioned above. The second sort is
+a~straightforward $k$-pass bucket sort.
+
+If we are not careful, a~single sorting pass takes $\O(n_d + n)$ time, because
+while we have only $n_d$~items to sort, we have to scan all $n$~buckets. This can
+be easily avoided if we realize that the order of buckets does not need to be
+fixed --- in every pass, we can use a~completely different order and it still
+brings the equivalent tuples together. Thus we can keep a~list of buckets
+which are used in the current pass and look only inside these buckets. This way,
+we reduce the time spent in a~single pass to $\O(n_d)$ and the whole algorithm
+takes $\O(\sum_d n_d) = \O(n)$.
+
+Our algorithm can be easily modified for trees with unrestricted degrees.
+We replace the fixed $k$-tuples by general sequences of identifiers. The first
+sort does not need any changes. In the second sort, we proceed from the first
+position to the last one and after each bucket-sorting pass we put aside the sequences
+that have just ended. They are obviously not equivalent to any other sequences.
+The second sort is linear in the sum of the lengths of the sequences, which is
+$n_{d+1}$ for depth~$d$. We can therefore decide isomorphism of the whole trees
+in time $\O(\sum_d (n_d + n_{d+1})) = \O(n)$.
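The identifier scheme above can be condensed into a~few lines of Python. This is a~sketch only: a~hash table stands in for the yardstick and for the bucket sorting, and the bottom-up recursion replaces the explicit depth-by-depth processing. A~tree is given as a~nested list of its children.

```python
def trees_isomorphic(t1, t2):
    """Rooted-tree isomorphism via canonical class identifiers.

    Every subtree gets an identifier determined by the sorted sequence
    of its children's identifiers; two trees are isomorphic iff their
    roots receive the same identifier."""
    classes = {}                    # sorted child-id tuple -> identifier

    def ident(tree):
        key = tuple(sorted(ident(child) for child in tree))
        return classes.setdefault(key, len(classes))

    return ident(t1) == ident(t2)
```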
+
+The unification of sequences by bucket sorting will be useful in many
+other situations, so we will state it as a~separate lemma:
+
+\lemman{Sequence unification}\id{suniflemma}%
+Partitioning of a~collection of sequences $S_1,\ldots,S_n$, whose elements are
+arbitrary pointers and symbols from a~finite alphabet, to equality classes can
+be performed on the Pointer Machine in time $\O(n + \sum_i \vert S_i \vert)$.
+
+\rem
+The first linear-time algorithm that partitions all subtrees to isomorphism equivalence
+classes is probably due to Zemlyachenko \cite{zemlay:treeiso}, but it lacks many
+details. Dinitz et al.~\cite{dinitz:treeiso} have recast this algorithm in modern
+terminology and filled the gaps. Our algorithm is easier to formulate than those,
+because it replaces the need for auxiliary data structures by more elaborate bucket
+sorting.
+
+\paran{Topological graph computations}%
+Many graph algorithms are based on the idea of so-called \df{micro/macro decomposition:}
+We decompose a~graph to subgraphs on roughly~$k$ vertices and solve the problem
+separately inside these ``micrographs'' and in the ``macrograph'' obtained by
+contraction of the micrographs. If $k$~is small enough, many of the micrographs
+are isomorphic, so we can compute the result only once for each isomorphism class
+and recycle it for all micrographs in that class. On the other hand, the macrograph
+is roughly $k$~times smaller than the original graph, so we can use a~less efficient
+algorithm and it will still run in linear time with respect to the size of the original
+graph.
+
+This kind of decomposition is traditionally used for trees, especially in the
+algorithms for the Lowest Common Ancestor problem (cf.~Section \ref{verifysect}
+and the survey paper \cite{alstrup:nca}) and for online maintenance of marked ancestors
+(cf.~Alstrup et al.~\cite{alstrup:marked}). Let us take a~glimpse at what happens when
+we decompose a~tree with $k$ set to~$1/4\cdot\log n$. There are at most $2^{2k} = \sqrt n$ non-isomorphic subtrees of size~$k$,
+because each isomorphism class is uniquely determined by the sequence of $2k$~up/down steps
+performed by depth-first search. Suppose that we are able to decompose the input and identify
+the equivalence classes of microtrees in linear time, then solve the problem in time $\O(\poly(k))$ for
+each microtree and finally in $\O(n'\log n')$ for the macrotree of size $n'=n/k$. When we put these pieces
+together, we get an~algorithm for the whole problem which achieves time complexity $\O(n
++ \sqrt{n}\cdot\poly(\log n) + n/\log n\cdot\log(n/\log n)) = \O(n)$.
+
+Decompositions are usually implemented on the RAM, because subgraphs can be easily
+encoded in numbers, which can be then used to index arrays containing the precomputed
+results. As the previous algorithm for subtree isomorphism shows, indexing is not strictly
+required for identifying equivalent microtrees and it can be replaced by bucket
+sorting on the Pointer Machine. Buchsbaum et al.~\cite{buchsbaum:verify} have extended
+this technique to general graphs in the form of so-called topological graph computations.
+Let us define them.
+
+\defn
+A~\df{graph computation} is a~function that takes a~\df{labeled undirected graph} as its input. The labels of its
+vertices and edges can be arbitrary symbols drawn from a~finite alphabet. The output
+of the computation is another labeling of the same graph. This time, the vertices and
+edges can be labeled with not only symbols of the alphabet, but also with pointers to the vertices
+and edges of the input graph, and possibly also with pointers to outside objects.
+A~graph computation is called \df{topological} if it produces isomorphic
+outputs for isomorphic inputs. The isomorphism of course has to preserve not only
+the structure of the graph, but also the labels in the obvious way.
+
+\obs
+The topological graph computations cover a~great variety of graph problems, ranging
+from searching for matchings or Eulerian tours to finding Hamilton circuits.
+The MST problem itself, however, does not belong to this class, because we do not have any means
+of representing the edge weights as labels, unless of course there is only a~fixed amount
+of possible values.
+
+As in the case of tree decompositions, we would like to identify the equivalent subgraphs
+and process only a~single instance from each equivalence class. The obstacle is that
+graph isomorphism is known to be computationally hard (it is one of the few
+problems that are neither known to lie in~$\rm P$ nor to be $\rm NP$-complete;
+see Arvind and Kurur \cite{arvind:isomorph} for recent results on its complexity).
+We will therefore manage with a~weaker form of equivalence, based on some sort
+of graph encodings:
+
+\defn
+A~\df{canonical encoding} of a~given labeled graph represented by adjacency lists
+is obtained by running the depth-first search on the graph and recording its traces.
+We start with an~empty encoding. When we enter
+a~vertex, we assign an~identifier to it (again using a~yardstick to represent numbers)
+and we append the label of this vertex to the encoding. Then we scan all back edges
+going from this vertex and append the identifiers of their destinations, accompanied
+by the edges' labels. Finally we append a~special terminator to mark the boundary
+between the code of this vertex and its successor.
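A~Python rendering of this definition (a~sketch: edge labels are omitted, \lq\$\rq\ serves as the terminator, and the helper name is ours) also illustrates that the encoding obtained depends on the order of the adjacency lists:

```python
def canonical_encoding(adj, labels, root=0):
    """Encode a connected labeled graph by depth-first search.

    On entering a vertex we assign it the next identifier, append its
    label, then the identifiers of all already-entered neighbours
    (the back edges, plus the tree edge to the parent), and finally a
    terminator separating it from the next vertex's code."""
    ident, code = {}, []

    def dfs(u):
        ident[u] = len(ident)
        code.append(labels[u])
        for v in adj[u]:
            if v in ident:          # edge to an already entered vertex
                code.append(ident[v])
        code.append('$')
        for v in adj[u]:
            if v not in ident:
                dfs(v)

    dfs(root)
    return code
```

Each edge is recorded exactly once, at its later-visited endpoint, so the length bound of $2n+2m$ from the observation below the definition is respected.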
+
+\obs
+The canonical encoding is well defined in the sense that non-iso\-morphic graphs always
+receive different encodings. Obviously, encodings of isomorphic graphs can differ,
+depending on the order of vertices and also of the adjacency lists. A~graph
+on~$n$ vertices with $m$~edges is assigned an~encoding of length at most $2n+2m$ ---
+for each vertex, we record its label and a~single terminator; edges contribute
+by identifiers and labels. These encodings can be constructed in linear time and
+in the same time we can also create a~graph corresponding to any encoding.
+We will use the encodings for our unification of graphs:
+
+\defn
+For a~collection~$\C$ of graphs, we define $\vert\C\vert$ as the number of graphs in
+the collection and $\Vert\C\Vert$ as their total size, i.e., $\Vert\C\Vert = \sum_{G\in\C} n(G) + m(G)$.
+
+\lemman{Graph unification}\id{guniflemma}%
+A~collection~$\C$ of labeled graphs can be partitioned into classes which share the same
+canonical encoding in time $\O(\Vert\C\Vert)$ on the Pointer Machine.
+
+\proof
+Construct canonical encodings of all the graphs and then apply the Sequence unification lemma
+(\ref{suniflemma}) on them.
+\qed
+
+\para
+When we want to perform a~topological computation on a~collection~$\C$ of graphs
+with $k$~vertices, we first precompute its result for a~collection~$\cal G$ of \df{generic graphs}
+corresponding to all possible canonical encodings on $k$~vertices. Then we use unification to match
+the \df{actual graphs} in~$\C$ to the generic graphs in~$\cal G$. This gives us the following
+theorem:
+
+\thmn{Topological computations, Buchsbaum et al.~\cite{buchsbaum:verify}}\id{topothm}%
+Suppose that we have a~topological graph computation~$\cal T$ that can be performed in time
+$T(k)$ for graphs on $k$~vertices. Then we can run~$\cal T$ on a~collection~$\C$
+of labeled graphs on~$k$ vertices in time $\O(\Vert\C\Vert + (k+s)^{k(k+2)}\cdot (T(k)+k^2))$,
+where~$s$ is a~constant depending only on the number of symbols used as vertex/edge labels.
+
+\proof
+A~graph on~$k$ vertices has less than~$k^2/2$ edges, so the canonical encodings of
+all such graphs are shorter than $2k + 2k^2/2 = k(k+2)$. Each element of the encoding
+is either a~vertex identifier, or a~symbol, or a~separator, so it can attain at most $k+s$
+possible values for some fixed~$s$.
+We can therefore enumerate all possible encodings and convert them to a~collection $\cal G$
+of all generic graphs such that $\vert{\cal G}\vert \le (k+s)^{k(k+2)}$ and $\Vert{\cal G}\Vert
+\le \vert{\cal G}\vert \cdot k^2$.
+
+We run the computation on all generic graphs in time $\O(\vert{\cal G}\vert \cdot T(k))$
+and then we use the Unification lemma (\ref{guniflemma}) on the union of the collections
+$\C$ and~$\cal G$ to match the generic graphs with the equivalent actual graphs in~$\C$
+in time $\O(\Vert\C\Vert + \Vert{\cal G}\Vert)$.
+Finally we create a~copy of the generic result for each of the actual graphs.
+If the computation uses pointers to the input vertices in its output, we have to
+redirect them to the actual input vertices, which we can do by associating
+the output vertices that refer to an~input vertex with the corresponding places
+in the encoding of the input graph. This way, the whole output can be generated in time
+$\O(\Vert\C\Vert + \Vert{\cal G}\Vert)$.
+\qed
+
+\rem
+The topological computations and the Graph unification lemma will play important
+roles in Sections \ref{verifysect} and \ref{optalgsect}.
+
+%--------------------------------------------------------------------------------
+
+\section{Data structures on the RAM}
+\id{ramdssect}
+
+There are many data structures designed specifically for the RAM. These structures
+take advantage of both indexing and arithmetics and they often surpass the known
+lower bounds for the same problem on the~PM. In many cases, they achieve constant time
+per operation, at least when either the magnitude of the values or the size of
+the data structure are suitably bounded.
+
+A~classical result of this type is the tree of van Emde Boas~\cite{boas:vebt},
+which represents a~subset of the integers $\{0,\ldots,U-1\}$. It allows insertion,
+deletion and order operations (minimum, maximum, successor etc.) in time $\O(\log\log U)$,
+regardless of the size of the subset. If we replace the heap used in Jarn\'\i{}k's
+algorithm (\ref{jarnik}) by this structure, we immediately get an~algorithm
+for finding the MST in integer-weighted graphs in time $\O(m\log\log w_{max})$,
+where $w_{max}$ is the maximum weight.
+
+A~real breakthrough was however made by Fredman and Willard, who introduced
+the Fusion trees~\cite{fw:fusion}. These again perform membership and predecessor
+operations on a~set of $n$~integers, but with time complexity $\O(\log_W n)$
+per operation on a~Word-RAM with $W$-bit words. This of course assumes that
+each element of the set fits in a~single word. As $W$ must be at least~$\log n$,
+the operations take $\O(\log n/\log\log n)$ time and thus we are able to sort $n$~integers
+in time~$o(n\log n)$. This was the beginning of a~long sequence of faster and
+faster sorting algorithms, culminating with the work by Thorup and Han.
+They have improved the time complexity of integer sorting to $\O(n\log\log n)$ deterministically~\cite{han:detsort}
+and expected $\O(n\sqrt{\log\log n})$ for randomized algorithms~\cite{hanthor:randsort},
+both in linear space.
+
+The Fusion trees themselves have very limited use in graph algorithms, but the
+principles behind them are ubiquitous in many other data structures and these
+will serve us well and often. We are going to build the theory of Q-heaps in
+Section \ref{qheaps}, which will later lead to a~linear-time MST algorithm
+for arbitrary integer weights in Section \ref{iteralg}. Other such structures
+will help us in building linear-time RAM algorithms for computing the ranks
+of various combinatorial structures in Chapter~\ref{rankchap}.
+
+Outside our area, important consequences of these data structures include
+Thorup's $\O(m)$ algorithm for single-source shortest paths in undirected
+graphs with positive integer weights \cite{thorup:usssp} and his $\O(m\log\log
+n)$ algorithm for the same problem in directed graphs \cite{thorup:sssp}. Both
+algorithms were later significantly simplified by Hagerup
+\cite{hagerup:sssp}.
+
+Despite the progress in the recent years, the corner-stone of all RAM structures
+is still the representation of combinatorial objects by integers introduced by
+Fredman and Willard. It will also form a~basis for the rest of this chapter.
+
+%--------------------------------------------------------------------------------
+
+\section{Bits and vectors}\id{bitsect}
+
+In this rather technical section, we will show how the RAM can be used as a~vector
+computer to operate in parallel on multiple elements, as long as these elements
+fit in a~single machine word. At first sight this might seem useless, because we
+cannot require large word sizes, but surprisingly often the elements are small
+enough relative to the size of the algorithm's input and thus also relative to
+the minimum possible word size. Also, as the following lemma shows, we can
+easily emulate slightly longer words:
+
+\lemman{Multiple-precision calculations}
+Given a~RAM with $W$-bit words, we can emulate all calculation and control
+instructions of a~RAM with word size $kW$ in time depending only on~$k$.
+(This is usually called \df{multiple-precision arithmetics.})
+
+\proof
+We split each word of the ``big'' machine into $W'$-bit blocks, where $W'=W/2$, and store these
+blocks in $2k$ consecutive memory cells. Addition, subtraction, comparison and
+bitwise logical operations can be performed block-by-block. Shifts by a~multiple
+of~$W'$ are trivial, otherwise we can combine each block of the result from
+shifted versions of two original blocks.
+To multiply two numbers, we can use the elementary school algorithm using the $W'$-bit
+blocks as digits in base $2^{W'}$ --- the product of any two blocks fits
+in a~single word.
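For instance, the multiplication step can be sketched in Python, with $W'$-bit blocks as digits in base $2^{W'}$ (the function name and the choice $W=8$ are ours; Python's big integers merely check the block arithmetic, which never overflows a~$W$-bit word):

```python
def mp_mul(a, b, W=8):
    """Schoolbook multiplication with W'-bit blocks as digits in base
    2^W', W' = W//2, so the product of any two digits fits in W bits."""
    Wp = W // 2
    base = 1 << Wp

    def blocks(x):                       # little-endian digit list
        out = []
        while x:
            out.append(x % base)
            x //= base
        return out or [0]

    A, B = blocks(a), blocks(b)
    res = [0] * (len(A) + len(B))
    for i, d in enumerate(A):
        carry = 0
        for j, e in enumerate(B):
            t = res[i + j] + d * e + carry   # at most base^2 - 1: one W-bit word
            res[i + j] = t % base
            carry = t // base
        res[i + len(B)] += carry
    val = 0
    for d in reversed(res):
        val = val * base + d
    return val
```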
+
+Division is harder, but Newton-Raphson iteration (see~\cite{ito:newrap})
+converges to the quotient in a~constant number of iterations, each of them
+involving $\O(1)$ multiple-precision additions and multiplications. A~good
+starting approximation can be obtained by dividing the two most-significant
+(non-zero) blocks of both numbers.
+
+Another approach to division is using the improved elementary school algorithm as described
+by Knuth in~\cite{knuth:seminalg}. It uses $\O(k^2)$ steps, but the steps involve
+calculation of the most significant bit set in a~word. We will show below that it
+can be done in constant time, but we have to be careful to avoid division instructions.
+\qed
+
+\notan{Bit strings}\id{bitnota}%
+We will work with binary representations of natural numbers by strings over the
+alphabet $\{\0,\1\}$: we will use $\(x)$ for the number~$x$ written in binary,
+$\(x)_b$ for the same padded to exactly $b$ bits by adding leading zeroes,
+and $x[k]$ for the value of the $k$-th bit of~$x$ (with a~numbering of bits such that $2^k[k]=1$).
+The usual conventions for operations on strings will be utilized: When $s$
+and~$t$ are strings, we write $st$ for their concatenation and
+$s^k$ for the string~$s$ repeated $k$~times.
+When the meaning is clear from the context,
+we will use $x$ and $\(x)$ interchangeably to avoid an~outbreak of symbols.
+
+\defn
+The \df{bitwise encoding} of a~vector ${\bf x}=(x_0,\ldots,x_{d-1})$ of~$b$-bit numbers
+is an~integer~$x$ such that $\(x)=\(x_{d-1})_b\0\(x_{d-2})_b\0\ldots\0\(x_0)_b$, i.e.,
+$x = \sum_i 2^{(b+1)i}\cdot x_i$. (We have interspersed the elements with \df{separator bits.})
+
+\notan{Vectors}\id{vecnota}%
+We will use boldface letters for vectors and the same letters in normal type
+for their encodings. The elements of a~vector~${\bf x}$ will be written as
+$x_0,\ldots,x_{d-1}$.
+
+\para
+If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisfy
+the condition $(b+1)d\le W$.
+By using multiple-precision arithmetics, we can encode all vectors satisfying $bd=\O(W)$.
+We will now describe how to translate simple vector manipulations to sequences of $\O(1)$ RAM operations
+on their codes. As we are interested in asymptotic complexity only, we prefer clarity
+of the algorithms over saving instructions. Among other things, we freely use calculations
+on words of size $\O(bd)$, assuming that the Multiple-precision lemma comes to save us
+when necessary.
+
+\para
+First of all, let us observe that we can use $\band$ and $\bor$ with suitable constants
+to write zeroes or ones to an~arbitrary set of bit positions at once. These operations
+are usually called \df{bit masking}. Also, any element of a~vector can be extracted or
+replaced by a~different value in $\O(1)$ time by masking and shifts.
+
+\newdimen\slotwd
+\def\setslot#1{\setbox0=#1\slotwd=\wd0}
+\def\slot#1{\hbox to \slotwd{\hfil #1\hfil}}
+\def\[#1]{\slot{$#1$}}
+\def\9{\rack{\0}{\hss$\cdot$\hss}}
+
+\def\alik#1{%
+\medskip
+\halign{\hskip 0.15\hsize\hfil $ ##$&\hbox to 0.6\hsize{${}##$ \hss}\cr
+#1}
+\medskip
+}
+
+\algn{Operations on vectors with $d$~elements of $b$~bits each}\id{vecops}
+
+\itemize\ibull
+
+\:$\<Replicate>(\alpha)$ --- creates a~vector $(\alpha,\ldots,\alpha)$:
+
+\alik{\<Replicate>(\alpha)=\alpha\cdot(\0^b\1)^d. \cr}
+
+\:$\<Sum>(x)$ --- calculates the sum of the elements of~${\bf x}$, assuming that
+the result fits in $b$~bits:
+
+\alik{\<Sum>(x) = x \bmod \1^{b+1}. \cr}
+
+This is correct because when we calculate modulo~$\1^{b+1}$, the number $2^{b+1}=\1\0^{b+1}$
+is congruent to~1 and thus $x = \sum_i 2^{(b+1)i}\cdot x_i \equiv \sum_i 1^i\cdot x_i \equiv \sum_i x_i$.
+As the result should fit in $b$~bits, the modulo makes no difference.
+
+If we want to avoid division, we can use double-precision multiplication instead:
+
+\setslot{\hbox{~$\0x_{d-1}$}}
+\def\.{\[]}
+\def\z{\[\0^b\1]}
+\def\dd{\slot{$\cdots$}}
+\def\vd{\slot{$\vdots$}}
+\def\rule{\noalign{\medskip\nointerlineskip}$\hrulefill$\cr\noalign{\nointerlineskip\medskip}}
+
+\alik{
+\[\0x_{d-1}] \dd \[\0x_2] \[\0x_1] \[\0x_0] \cr
+*~~ \z \dd \z\z\z \cr
+\rule
+\[x_{d-1}] \dd \[x_2] \[x_1] \[x_0] \cr
+\[x_{d-1}] \[x_{d-2}] \dd \[x_1] \[x_0] \. \cr
+\[x_{d-1}] \[x_{d-2}] \[x_{d-3}] \dd \[x_0] \. \. \cr
+\vd\vd\vd\vd\.\.\.\cr
+\[x_{d-1}] \dd \[x_2]\[x_1]\[x_0] \. \. \. \. \cr
+\rule
+\[r_{d-1}] \dd \[r_2] \[r_1] \[s_d] \dd \[s_3] \[s_2] \[s_1] \cr
+}
+
+This way, we even get the vector of all partial sums:
+$s_k=\sum_{i=0}^{k-1}x_i$, $r_k=\sum_{i=k}^{d-1}x_i$.
+
+\:$\<Cmp>(x,y)$ --- element-wise comparison of~vectors ${\bf x}$ and~${\bf y}$,
+i.e., a~vector ${\bf z}$ such that $z_i=1$ if $x_i<y_i$ and $z_i=0$ otherwise.
+
+We replace the separator zeroes in~$x$ by ones and subtract~$y$. These ones
+change back to zeroes exactly at the positions where $x_i<y_i$ and they stop
+carries from propagating, so the fields do not interact with each other:
+
+\setslot{\vbox{\hbox{~$x_{d-1}$}\hbox{~$y_{d-1}$}}}
+\def\9{\rack{\0}{\hss ?\hss}}
+\alik{
+ \1 \[x_{d-1}] \1 \[x_{d-2}] \[\cdots] \1 \[x_1] \1 \[x_0] \cr
+-~ \0 \[y_{d-1}] \0 \[y_{d-2}] \[\cdots] \0 \[y_1] \0 \[y_0] \cr
+\rule
+ \9 \[\ldots] \9 \[\ldots] \[\cdots] \9 \[\ldots] \9 \[\ldots] \cr
+}
+
+It only remains to shift the separator bits to the right positions, negate them
+and mask out all other bits.
+
+\:$\<Rank>(x,\alpha)$ --- returns the number of elements of~${\bf x}$ which are less than~$\alpha$,
+assuming that the result fits in~$b$ bits:
+
+\alik{
+\<Rank>(x,\alpha) = \<Sum>(\<Cmp>(x,\<Replicate>(\alpha))). \cr
+}
+
+\:$\<Insert>(x,\alpha)$ --- inserts~$\alpha$ into a~sorted vector $\bf x$:
+
+We calculate the rank of~$\alpha$ in~$x$ first, then we insert~$\alpha$ as the $k$-th
+field of~$\bf x$ using masking operations and shifts.
+
+\algo
+\:$k\=\<Rank>(x,\alpha)$.
+\:$\ell\=x \band \1^{(b+1)(n-k)}\0^{(b+1)k}$. \cmt{``left'' part of the vector}
+\:$r\=x \band \1^{(b+1)k}$. \cmt{``right'' part}
+\:Return $(\ell\shl (b+1)) \bor (\alpha\shl ((b+1)k)) \bor r$.
+\endalgo
+
+\:$\<Unpack>(\alpha)$ --- creates a~vector whose elements are the bits of~$\(\alpha)_d$.
+In other words, inserts blocks~$\0^b$ between the bits of~$\alpha$. Assuming that $b\ge d$,
+we can do it as follows:
+
+\algo
+\:$x\=\<Replicate>(\alpha)$.
+\:$y\=(2^{b-1},2^{b-2},\ldots,2^0)$. \cmt{bitwise encoding of this vector}
+\:$z\=x \band y$.
+\:Return $\<Cmp>(z,y)$.
+\endalgo
+
+Let us observe that $z_i$ is either zero or equal to~$y_i$ depending on the value
+of the $i$-th bit of the number~$\alpha$. Comparing it with~$y_i$ normalizes it
+to either zero or one.
+
+\:$\<Unpack>_\pi(\alpha)$ --- like \<Unpack>, but changes the order of the
+bits according to a~fixed permutation~$\pi$: The $i$-th element of the
+resulting vector is equal to~$\alpha[\pi(i)]$.
+
+Implemented as above, but with a~mask $y=(2^{\pi(b-1)},\ldots,2^{\pi(0)})$.
+
+\:$\<Pack>(x)$ --- the inverse of \<Unpack>: given a~vector of zeroes and ones,
+it produces a~number whose bits are the elements of the vector (in other words,
+it crosses out the $\0^b$ blocks).
+
+We interpret~$x$ as an~encoding of a~vector with elements one bit shorter
+and we sum these elements. For example, when $n=4$ and~$b=4$:
+
+\setslot{\hbox{$x_3$}}
+\def\z{\[\0]}
+\def\|{\hskip1pt\vrule height 10pt depth 4pt\hskip1pt}
+\def\.{\hphantom{\|}}
+
+\alik{
+\|\z\.\z\.\z\.\z\.\[x_3]\|\z\.\z\.\z\.\z\.\[x_2]\|\z\.\z\.\z\.\z\.\[x_1]\|\z\.\z\.\z\.\z\.\[x_0]\|\cr
+\|\z\.\z\.\z\.\z\|\[x_3]\.\z\.\z\.\z\|\z\.\[x_2]\.\z\.\z\|\z\.\z\.\[x_1]\.\z\|\z\.\z\.\z\.\[x_0]\|\cr
+}
+
+However, this ``reformatting'' does not produce a~correct encoding of a~vector,
+because the separator zeroes are missing. For this reason, the implementation
+of~\<Sum> using modulo does not work correctly (it produces $\0^b$ instead of $\1^b$).
+We therefore use the technique based on multiplication instead, which does not need
+the separators. (Alternatively, we can observe that $\1^b$ is the only case
+affected, so we can handle it separately.)
+
+\endlist
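To make the operations concrete, here is a~Python model of several of them (a~sketch with $b=d=4$; Python's unbounded integers play the role of the long words, and the helpers \<encode>/\<decode> for building and reading vector codes are ours):

```python
b, d = 4, 4
F = b + 1                                   # field width incl. separator bit

def encode(elems):                          # (x_0, ..., x_{d-1}) -> code
    x = 0
    for e in reversed(elems):
        x = (x << F) | e
    return x

def decode(x):
    return [(x >> (F * i)) & ((1 << b) - 1) for i in range(d)]

def replicate(alpha):                       # alpha * (0^b 1)^d
    return alpha * int(('0' * b + '1') * d, 2)

def vsum(x):                                # x mod 1^{b+1}
    return x % ((1 << F) - 1)

def cmp_vec(x, y):                          # z_i = 1 iff x_i < y_i
    seps = int(('1' + '0' * b) * d, 2)      # separator positions set to one
    t = (x | seps) - y                      # borrows clear the separators
    return (~t & seps) >> b                 # negate, align to field bottoms

def rank(x, alpha):                         # number of elements < alpha
    return vsum(cmp_vec(x, replicate(alpha)))

def insert(x, alpha):                       # insert into a sorted vector
    k = rank(x, alpha)
    right = x & ((1 << (F * k)) - 1)        # fields 0 .. k-1
    left = x ^ right                        # fields k .. d-1
    return (left << F) | (alpha << (F * k)) | right
```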
+
+\para
+We can use the aforementioned tricks to perform interesting operations on individual
+numbers in constant time, too. Let us assume for a~while that we are
+operating on $b$-bit numbers and the word size is at least~$b^2$.
+This enables us to make use of intermediate vectors with $b$~elements
+of $b$~bits each.
+
+\algn{Integer operations in quadratic workspace}\id{lsbmsb}
+
+\itemize\ibull
+
+\:$\<Weight>(\alpha)$ --- computes the Hamming weight of~$\alpha$, i.e., the number of ones in~$\(\alpha)$.
+
+We perform \<Unpack> and then \<Sum>.
+
+\:$\<Permute>_\pi(\alpha)$ --- shuffles the bits of~$\alpha$ according
+to a~fixed permutation~$\pi$.
+
+We perform $\<Unpack>_\pi$ and \<Pack> back.
+
+\:$\<LSB>(\alpha)$ --- finds the least significant bit of~$\alpha$,
+i.e., the smallest~$i$ such that $\alpha[i]=1$.
+
+By a~combination of subtraction with $\bxor$, we create a~number
+that contains ones exactly at the position of $\<LSB>(\alpha)$ and below:
+
+\alik{
+\alpha&= \9\9\9\9\9\1\0\0\0\0\cr
+\alpha-1&= \9\9\9\9\9\0\1\1\1\1\cr
+\alpha\bxor(\alpha-1)&= \0\0\0\0\0\1\1\1\1\1\cr
+}
+
+Then we calculate the \<Weight> of the result and subtract~1.
+
+\:$\<MSB>(\alpha)$ --- finds the most significant bit of~$\alpha$ (the position
+of the highest bit set).
+
+Reverse the bits of the number~$\alpha$ first by calling \<Permute>, then apply \<LSB>
+and subtract the result from~$b-1$.
+
+\endlist
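The two bit-position operations translate to Python directly (a~sketch; the \<Weight> step is done here by a~plain popcount on the binary string rather than by \<Unpack> and \<Sum>):

```python
def lsb(alpha):
    """Least significant bit: alpha ^ (alpha - 1) has ones exactly at the
    LSB position and below, so its Hamming weight minus one is the LSB."""
    assert alpha > 0
    return bin(alpha ^ (alpha - 1)).count('1') - 1

def msb(alpha, b=16):
    """Most significant bit of a b-bit number: reverse the bits (standing
    in for Permute with the reversing permutation), take the LSB of the
    reversal, and subtract the result from b - 1."""
    assert 0 < alpha < 1 << b
    rev = int(format(alpha, '0{}b'.format(b))[::-1], 2)
    return b - 1 - lsb(rev)
```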
+
+\rem
+As noted by Brodnik~\cite{brodnik:lsb} and others, the space requirements of
+the \<LSB> operation can be reduced to linear. We split the input into $\sqrt{b}$
+blocks of $\sqrt{b}$ bits each. Then we determine which blocks are non-zero and
+identify the lowest such block (this is a~\<LSB> of a~number whose bits
+correspond to the blocks). Finally we calculate the \<LSB> of this block. In
+both calls to \<LSB>, we have a~$\sqrt{b}$-bit number in a~$b$-bit word, so we
+can use the previous algorithm. The same trick of course works for finding the
+\<MSB> as well.
+
+The following algorithm shows the details.
+
+\algn{LSB in linear workspace}
+
+\algo\id{lsb}
+\algin A~$w$-bit number~$\alpha$.
+\:$b\=\lceil\sqrt{w}\,\rceil$. \cmt{size of a~block}
+\:$\ell\=b$. \cmt{the number of blocks is the same}
+\:$x\=(\alpha \band (\0\1^b)^\ell) \bor ((\alpha \band (\1\0^b)^\ell) \shr 1)$.
+\hfill\break
+\cmt{encoding of a~vector~${\bf x}$ such that $x_i\ne 0$ iff the $i$-th block is non-zero}%
+\foot{Why is this so complicated? It is tempting to take $\alpha$ itself as a~code of this vector,
+but we unfortunately need the separator bits between elements, so we create them and
+relocate the bits we have overwritten.}
+\:$y\=\<Cmp>(0,x)$. \cmt{$y_i=1$ if the $i$-th block is non-zero, otherwise $y_i=0$}
+\:$\beta\=\<Pack>(y)$. \cmt{each block compressed to a~single bit}
+\:$p\=\<LSB>(\beta)$. \cmt{the index of the lowest non-zero block}
+\:$\gamma\=(\alpha \shr bp) \band \1^b$. \cmt{the contents of that block}
+\:$q\=\<LSB>(\gamma)$. \cmt{the lowest bit set there}
+\algout $\<LSB>(\alpha) = bp+q$.
+\endalgo
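A sketch of the same decomposition follows, assuming Python integers as $w$-bit words. For readability, the constant-time vector steps (\<Cmp> and \<Pack>) are replaced by an explicit loop over the blocks, and the inner \<LSB> calls use the subtraction trick.

```python
import math

def lsb_blocks(alpha: int, w: int) -> int:
    # LSB of a w-bit word via sqrt decomposition: locate the lowest
    # non-zero block of b = ceil(sqrt(w)) bits, then the LSB inside it.
    assert 0 < alpha < (1 << w)
    b = math.isqrt(w - 1) + 1                    # block size, ceil(sqrt(w))
    ell = -(-w // b)                             # number of blocks

    def small_lsb(x: int) -> int:                # the x ^ (x - 1) trick
        return bin(x ^ (x - 1)).count("1") - 1

    # beta: bit i is set iff the i-th block of alpha is non-zero
    # (the algorithm achieves this in O(1) with Cmp and Pack; a loop
    # stands in here for readability)
    beta = 0
    for i in range(ell):
        if (alpha >> (b * i)) & ((1 << b) - 1):
            beta |= 1 << i
    p = small_lsb(beta)                          # lowest non-zero block
    gamma = (alpha >> (b * p)) & ((1 << b) - 1)  # contents of that block
    return b * p + small_lsb(gamma)

print(lsb_blocks(1 << 37, 64))  # -> 37
```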
+
+\rem
+We have used plenty of constants that depend on the format of the vectors.
+Either we can write non-uniform programs (see \ref{nonuniform}) and use native constants,
+or we can observe that all such constants can be easily manufactured. For example,
+$(\0^b\1)^d = \1^{(b+1)d} / \1^{b+1} = (2^{(b+1)d}-1)/(2^{b+1}-1)$. The only exceptions
+are the~$w$ and~$b$ in the LSB algorithm \ref{lsb}, which we are unable to produce
+in constant time. In practice we use the ``bit tricks'' as frequently called subroutines
+in an~encompassing algorithm, so we usually can spend a~lot of time on the precalculation
+of constants performed once during algorithm startup.
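The quotient identity above is easy to check numerically. A small sketch (the function name is ours):

```python
def zeros_one_pattern(b: int, d: int) -> int:
    # (0^b 1)^d: d copies of a single 1 bit below b zero bits,
    # manufactured as 1^((b+1)d) / 1^(b+1) = (2^((b+1)d)-1) / (2^(b+1)-1)
    return (2 ** ((b + 1) * d) - 1) // (2 ** (b + 1) - 1)

print(bin(zeros_one_pattern(3, 4)))  # -> 0b1000100010001
```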
+
+\rem
+The history of combining arithmetic and logical operations to obtain fast programs for various
+interesting functions is blurred. Many of the ``bit tricks'' we have described have been
+discovered independently by numerous people in the early ages of digital computers.
+Since then, they have become a~part of the computer science folklore. Probably the
+earliest documented occurrence is in a~1972 memo of the MIT Artificial Intelligence
+Lab \cite{hakmem}. However, until the work of Fredman and Willard, nobody seemed to
+realize the full consequences.
+
+%--------------------------------------------------------------------------------
+
+\section{Q-Heaps}\id{qheaps}%
+
+We have shown how to perform non-trivial operations on a~set of values
+in constant time, but so far only under the assumption that the number of these
+values is small enough and that the values themselves are also small enough
+(so that the whole set fits in $\O(1)$ machine words). Now we will show how to
+lift the restriction on the magnitude of the values and still keep constant time
+complexity. We will describe a~slightly simplified version of the Q-heaps developed by
+Fredman and Willard in~\cite{fw:transdich}.
+
+The Q-heap represents a~set of at most~$k$ word-sized integers, where $k\le W^{1/4}$
+and $W$ is the word size of the machine. It will support insertion, deletion, finding
+of minimum and maximum, and other operations described below, in constant time, provided that
+we are willing to spend~$\O(2^{k^4})$ time on preprocessing.
+
+The exponential-time preprocessing may sound alarming, but a~typical application uses
+Q-heaps of size $k=\log^{1/4} N$, where $N$ is the size of the algorithm's input.
+This guarantees that $k\le W^{1/4}$ and $\O(2^{k^4}) = \O(N)$. Let us however
+remark that the whole construction is primarily of theoretical importance
+and that the huge constants involved everywhere make these heaps useless
+in practical algorithms. Despite this, many of the tricks we develop have proven
+themselves useful even in real-life implementations.
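The arithmetic behind this choice of~$k$ can be verified directly (an illustrative sketch):

```python
import math

N = 2 ** 20                   # size of the algorithm's input
k = math.log2(N) ** 0.25      # Q-heap size k = log^(1/4) N
cost = 2.0 ** (k ** 4)        # preprocessing time of order 2^(k^4)
print(math.isclose(cost, N))  # -> True: the preprocessing is linear in N
```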
+
+Spending the time on preprocessing makes it possible to precompute tables for
+almost arbitrary functions and then assume that they can be evaluated in
+constant time:
+
+\lemma\id{qhprecomp}%
+When~$f$ is a~function computable in polynomial time, $\O(2^{k^4})$ time is enough
+to precompute a~table of the values of~$f$ for all arguments whose size is $\O(k^3)$ bits.
+
+\proof
+There are $2^{\O(k^3)}$ possible combinations of arguments of the given size and for each of
+them we spend $\poly(k)$ time on calculating the function. It remains
+to observe that $2^{\O(k^3)}\cdot \poly(k) = \O(2^{k^4})$.