\chapter{Fine Details of Computation}

\section{Models and machines}

Traditionally, computer scientists use a~variety of computational models
as a~formalism in which their algorithms are stated. If we were studying
NP-completeness, we could safely assume that all the models are equivalent,
possibly up to polynomial slowdown which is negligible. In our case, the
differences between good and not-so-good algorithms are on a~much smaller
scale. In this chapter, we will replace the usual ``yardsticks'' by a~micrometer,
state our computation models carefully and develop a repertoire of basic
data structures taking advantage of the fine details of the models.
We would like to keep the formalism close enough to the reality of the contemporary
computers. This rules out Turing machines and similar sequentially addressed
models, but even the remaining models are subtly different from each other. For example, some of them
allow indexing of arrays in constant time, while on the others,
arrays have to be emulated with pointer structures, requiring $\Omega(\log n)$
time to access a~single element of an~$n$-element array. It is hard to say which
way is superior --- while most ``real'' computers have instructions for constant-time
indexing, it seems to be physically impossible to fulfil this promise regardless of
the size of memory. Indeed, at the level of logical gates inside the computer,
the depth of the actual indexing circuits is logarithmic.

In recent decades, most researchers in the area of combinatorial algorithms
have been considering two computational models: the Random Access Machine and the Pointer
Machine. The former is closer to the programmer's view of a~real computer,
the latter is slightly more restricted and ``asymptotically safe.''
We will follow this practice and study our algorithms in both models.
The \df{Random Access Machine (RAM)} is not a~single coherent model, but rather a~family
of closely related machines, sharing the following properties.
(See Cook and Reckhow \cite{cook:ram} for one of the usual formal definitions
and Hagerup \cite{hagerup:wordram} for a~thorough description of the differences
between the RAM variants.)

The \df{memory} of the machine is represented by an~array of \df{memory cells}
addressed by non-negative integers, each of them containing a~single non-negative integer.
The \df{program} is a~finite sequence of \df{instructions} of two basic kinds: calculation
instructions and control instructions.
\df{Calculation instructions} have two source arguments and one destination
argument, each \df{argument} being either an~immediate constant (not available
as destination), a~directly addressed memory cell (specified by its number)
or an~indirectly addressed memory cell (its address is stored in a~directly
addressed memory cell).

\df{Control instructions} include branches (to a~specific instruction in
the program), conditional branches (e.g., jump if two arguments specified as
in the calculation instructions are equal) and an~instruction to halt the program.

At the beginning of the computation, the memory contains the input data
in specified cells and arbitrary values in all other cells.
Then the program is executed one instruction at a~time. When it halts,
specified memory cells are interpreted as the program's output.
In the description of the RAM family, we have omitted several details
on~purpose, because different members of the family define them differently.
These are: the size of the available integers, the time complexity of a~single
instruction, the space complexity assigned to a~single memory cell and the set
of operations available in calculation instructions.
If we impose no limits on the magnitude of the numbers and we assume that
arithmetic and logical operations work on them in constant time, we get
a~very powerful parallel computer --- we can emulate an~exponential number
of parallel processors using arithmetics and suddenly almost everything can be
computed in constant time, modulo encoding and decoding of input and output.
Such models are unrealistic and there are two basic possibilities how to
avoid this problem:

\:Keep unbounded numbers, but increase costs of instructions: each instruction
consumes time proportional to the number of bits of the numbers it processes,
including memory addresses. Similarly, space usage is measured in bits,
counting not only the values, but also the addresses of the respective memory
cells.
\:Place a~limit on the size of the numbers --- define the \df{word size~$W$,}
the number of bits available in the memory cells --- and keep the cost of
instructions and memory cells constant. The word size must not be constant,
since we can address only~$2^W$ cells of memory. If the input of the algorithm
is stored in~$N$ cells, we need~$W\ge\log N$ just to be able to read the input.
On the other hand, we are interested in polynomial-time algorithms only, so $\Theta(\log N)$-bit
numbers should be sufficient. In practice, we pick~$W$ to be the larger of
$\Theta(\log N)$ and the size of integers used in the algorithm's input and output.
We will call an integer which fits in a~single memory cell a~\df{machine word.}
Both restrictions easily avoid the problems of unbounded parallelism. The first
choice is theoretically cleaner and Cook et al.~show nice correspondences to the
standard complexity classes, but the calculations of time and space complexity tend
to be somewhat tedious. What is more, when compared with the RAM with restricted
word size, the complexities are usually exactly $\Theta(W)$ times higher.
This does not hold in general (consider a~program which uses many small numbers
and $\O(1)$ large ones), but it is true for the algorithms we are interested in.
Therefore we will always assume that the operations have unit cost and we make
sure that all numbers are limited by the available word size.
As for the choice of RAM operations, the following three instruction sets are often used:

\:\df{Word-RAM} --- allows the ``C-language operators'', i.e., addition,
subtraction, multiplication, division, remainder, bitwise $\band$, $\bor$, exclusive
$\bor$ ($\bxor$) and negation ($\bnot$), and bitwise shifts ($\shl$ and~$\shr$).
\:\df{${\rm AC}^0$-RAM} --- allows all operations from the class ${\rm AC}^0$, i.e.,
those computable by constant-depth polynomial-size boolean circuits with unlimited
fan-in and fan-out. This includes all operations of the Word-RAM except for multiplication,
division and remainders, and also many other operations like computing the Hamming
weight (number of bits set in a~given number).
\:Both restrictions at once.
Thorup discusses the usual techniques employed by RAM algorithms in~\cite{thorup:aczero}
and he shows that they work on both Word-RAM and ${\rm AC}^0$-RAM, but the combination
of the two restrictions is too weak. On the other hand, the intersection of~${\rm AC}^0$
with the instruction set of modern processors is already strong enough (e.g., when we
add some floating-point operations and multimedia instructions available on Intel's
Pentium~4~\cite{intel:pentium}).

We will therefore use the Word-RAM instruction set, mentioning differences from the
${\rm AC}^0$-RAM where necessary.
When speaking of the \df{RAM,} we implicitly mean the version with numbers limited
by a~specified word size of $W$~bits, unit cost of operations and memory cells and the instruction
set of the Word-RAM. This corresponds to the usage in recent algorithmic literature,
although the authors rarely mention the details. In some cases, a~non-uniform variant
of the Word-RAM is considered as well (e.g., in~\cite{hagerup:dd}):

\defn\id{nonuniform}%
A~Word-RAM is called \df{weakly non-uniform,} if it is equipped with $\O(1)$-time
access to a~constant number of word-sized constants, which depend only on the word
size. These are called \df{native constants} and they are available in fixed memory
cells when the program starts. (By analogy with the high-level programming languages,
these constants can be thought of as computed at ``compile time.'')
The \df{Pointer Machine (PM)} also does not have any well established definition. The
various kinds of pointer machines are mapped by Ben-Amram in~\cite{benamram:pm},
but unlike the RAMs, they turn out to be equivalent up to constant slowdown.
Our definition will be closely related to the \em{linking automaton} proposed
by Knuth in~\cite{knuth:fundalg}, we will only adapt it to use RAM-like
instructions instead of an~opaque control unit.
The PM works with two different types of data: \df{symbols} from a~finite alphabet
and \df{pointers}. The memory of the machine consists of a~fixed amount of \df{registers}
(some of them capable of storing a~single symbol, each of the others holds a~single pointer)
and an~arbitrary amount of \df{cells}. The structure of all cells is the same: each of them
again contains a~fixed number of fields for symbols and pointers. Registers can be addressed
directly, the cells only via pointers --- by using a~pointer stored either in a~register,
or in a~cell pointed to by a~register (longer chains of pointers cannot be followed in
constant time).
We can therefore view the whole memory as a~directed graph, whose vertices
correspond to the cells (the registers are stored in a~single special cell).
The outgoing edges of each vertex correspond to pointer fields of the cells and they are
labelled with distinct labels drawn from a~finite set. In addition to that,
each vertex contains a~fixed amount of symbols. The program can directly access
vertices within distance~2 from the register vertex.
The program is a~finite sequence of instructions of the following kinds:

\:\df{symbol instructions,} which read a~pair of symbols, apply an~arbitrary
function to them and write the result to a~symbol register or field;
\:\df{pointer instructions} for assignment of pointers to pointer registers/fields
and for creation of new memory cells (a~pointer to the new cell is assigned
immediately);
\:\df{control instructions} --- similarly to the RAM; conditional jumps can decide
on~arbitrary unary relations on symbols and compare pointers for equality.

Time and space complexity are defined in the straightforward way: all instructions
have unit cost and so do all memory cells.
Both input and output of the machine are passed in the form of a~linked structure
pointed to by a~designated register. For example, we can pass graphs back and forth
without having to encode them as strings of numbers or symbols. This is important,
because with the finite alphabet of the~PM, symbolic representations of graphs
generally require super-linear space and therefore also time.\foot{%
The usual representation of edges as pairs of vertex labels uses $\Theta(m\log n)$ bits
and as a~simple counting argument shows, this is asymptotically optimal for general
sparse graphs. On the other hand, specific families of sparse graphs can be stored
more efficiently, e.g., by a~remarkable result of Tur\'an~\cite{turan:succinct},
planar graphs can be encoded in~$\O(n)$ bits. Encoding of dense graphs is of
course trivial as the adjacency matrix has only~$\Theta(n^2)$ bits.}
Compared to the RAM, the PM lacks two important capabilities: indexing of arrays
and arithmetic instructions. We can emulate both with poly-logarithmic slowdown,
but it will turn out that they are rarely needed in graph algorithms. We are
also going to prove that the RAM is strictly stronger, so we will prefer to
formulate our algorithms in the PM model and use the RAM only when necessary.
Every program for the Word-RAM with word size~$W$ can be translated to a~PM program
computing the same with $\O(W^2)$ slowdown (given a~suitable encoding of inputs and
outputs, of course). If the RAM program does not use multiplication, division
and remainder operations, $\O(W)$~slowdown is sufficient.

Represent the memory of the RAM by a~balanced binary search tree or by a~radix
trie of depth~$\O(W)$. Values are encoded as~linked lists of symbols pointed
to by the nodes of the tree. Both direct and indirect accesses to the memory
can therefore be done in~$\O(W)$ time. Use standard algorithms for arithmetic
on big numbers: $\O(W)$ per operation except for multiplication, division and
remainders which take $\O(W^2)$.\foot{We could use more efficient arithmetic
algorithms, but the quadratic bound is good enough for our purposes.}
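The trie-based memory from this proof can be sketched in C. This is our own illustrative sketch, not part of the thesis: the word size $W=8$ is a toy value, and cell values are kept as linked lists of bits to stay close to the PM's finite alphabet. Accessing a cell follows $W$ pointers, matching the claimed $\O(W)$ bound.

```c
#include <stdlib.h>

/* RAM memory as a radix trie of depth W. Each RAM cell is reached by
   following the address bits; the value is stored as a list of bits,
   mimicking the PM's list-of-symbols encoding. Names are ours. */

enum { W = 8 };                                   /* toy word size */

struct bitnode { int bit; struct bitnode *next; };
struct trie { struct trie *child[2]; struct bitnode *value; };

static struct trie *trie_new(void) { return calloc(1, sizeof(struct trie)); }

/* Follow the W address bits, allocating nodes on demand: O(W) time. */
static struct trie *cell(struct trie *root, unsigned addr) {
    for (int i = W - 1; i >= 0; i--) {
        int b = (addr >> i) & 1;
        if (!root->child[b]) root->child[b] = trie_new();
        root = root->child[b];
    }
    return root;
}

static void write_cell(struct trie *root, unsigned addr, unsigned val) {
    struct trie *c = cell(root, addr);
    struct bitnode *head = NULL;
    for (int i = 0; i < W; i++) {                 /* value as a list of W bits */
        struct bitnode *n = malloc(sizeof *n);
        n->bit = (val >> i) & 1; n->next = head; head = n;
    }
    c->value = head;
}

/* Unwritten cells read as 0 here; a real RAM would leave them arbitrary. */
static unsigned read_cell(struct trie *root, unsigned addr) {
    struct bitnode *n = cell(root, addr)->value;
    unsigned val = 0;
    for (; n; n = n->next) val = (val << 1) | n->bit;
    return val;
}
```

A balanced search tree over the used addresses would serve equally well; the trie merely makes the $\O(W)$ access cost immediate.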
\FIXME{Add references, especially to the unbounded parallelism remark.}
Every program for the PM running in polynomial time can be translated to a~program
computing the same on the Word-RAM with only $\O(1)$ slowdown.

Encode each cell of the PM's memory to $\O(1)$ integers. Store the encoded cells to
the memory of the RAM sequentially and use memory addresses as pointers. As the alphabet
of symbols is finite and there is only a~polynomial number of cells allocated during execution
of the program, $\O(\log N)$-bit integers suffice ($N$~is the size of the program's input).
There are also \df{randomized} versions of both machines. These are equipped
with an~additional instruction for generating a~single random bit. The standard
techniques of design and analysis of randomized algorithms apply (see for
example Motwani and Raghavan~\cite{motwani:randalg}).

\FIXME{Consult sources. Does it make more sense to generate random words at once on the RAM?}
There is one more interesting machine: the \df{Immutable Pointer Machine} (see
the description of LISP machines in \cite{benamram:pm}). It differs from the
ordinary PM by the inability to modify existing memory cells. Only the contents
of the registers are allowed to change. All cell modifications thus have to
be performed by creating a~copy of the particular cell with some fields changed.
This in turn requires the pointers to the cell to be updated, possibly triggering
a~cascade of further cell copies. For example, when a~node of a~binary search tree is
updated, all nodes on the path from that node to the root have to be copied.
One of the advantages of this model is that the states of the machine are
persistent --- it is possible to return to a~previously visited state by recalling
the $\O(1)$ values of the registers (everything else could not have changed
since that time) and ``fork'' the computations. This corresponds to the semantics
of pure functional languages, e.g., Haskell~\cite{jones:haskell}.
Unless we are willing to accept a~logarithmic penalty in execution time and space
(in fact, our emulation of the Word-RAM on the PM can be easily made immutable),
the design of efficient algorithms for the immutable PM requires very different
techniques. Therefore, we will concentrate on the imperative models instead
and refer the interested reader to the thorough treatment of purely functional
data structures in Okasaki's monograph~\cite{okasaki:funcds}.
%--------------------------------------------------------------------------------

\section{Bucket sorting and contractions}\id{bucketsort}%

The Contractive Bor\o{u}vka's algorithm (\ref{contbor}) needed to contract a~given
set of edges in the current graph and flatten it afterwards, all this in time $\O(m)$.
We have saved the technical details for this section, because they will be useful
in further algorithms, too.

As already suggested, the contractions can be performed by building an~auxiliary
graph and finding its connected components. Thus we will take care of the flattening
here.
On the RAM, it is sufficient to sort the edges lexicographically (each edge viewed
as an~ordered pair of vertex identifiers with the smaller of the identifiers placed
first). We can do that by a two-pass bucket sort with~$n$ buckets corresponding
to the vertex identifiers.
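The two-pass bucket sort can be sketched in C. This is our own sketch, with illustrative names: each pass is a stable counting-based distribution into $n$ buckets, and running the secondary key first makes the overall order lexicographic.

```c
#include <stdlib.h>
#include <string.h>

typedef struct { unsigned u, v; } edge;   /* assume u < v after normalization */

/* One stable bucket pass over keys in [0, n): count, prefix-sum, place. */
static void bucket_pass(edge *e, edge *tmp, size_t m, unsigned n, int by_u) {
    size_t *cnt = calloc(n + 1, sizeof *cnt);
    for (size_t i = 0; i < m; i++) cnt[(by_u ? e[i].u : e[i].v) + 1]++;
    for (unsigned k = 0; k < n; k++) cnt[k + 1] += cnt[k];  /* bucket starts */
    for (size_t i = 0; i < m; i++) tmp[cnt[by_u ? e[i].u : e[i].v]++] = e[i];
    memcpy(e, tmp, m * sizeof *e);
    free(cnt);
}

/* Lexicographic (u, v) order: sort by v first, then stably by u. */
void sort_edges(edge *e, size_t m, unsigned n) {
    edge *tmp = malloc(m * sizeof *e);
    bucket_pass(e, tmp, m, n, 0);
    bucket_pass(e, tmp, m, n, 1);
    free(tmp);
}
```

Both passes run in $\O(n+m)$ time, so the whole sort is linear in the size of the graph.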
However, there is a~catch in this. Suppose that we use the standard representation
of graphs by adjacency lists whose heads are stored in an array indexed by vertex
identifiers. When we contract and flatten the graph, the number of vertices decreases,
but if we inherit the original vertex identifiers, the arrays will still have the
same size. Hence we spend a~super-linear amount of time on scanning the increasingly
sparse arrays, most of the time skipping unused entries.
To avoid this, we have to renumber the vertices after each contraction to component
identifiers from the auxiliary graph and we create a~new vertex array. This way,
the representation of the graph will be kept linear with respect to the size of the
current graph.
The pointer representation of graphs does not suffer from sparsity as the vertices
are always identified by pointers to per-vertex structures. Each such structure
then contains all attributes associated with the vertex, including the head of its
adjacency list. However, we have to find a~way how to perform bucket sorting
without indexing of arrays.
We will keep a~list of the per-vertex structures which defines the order of~vertices.
Each such structure will contain a~pointer to the head of the corresponding bucket,
again stored as a~list. Putting an~edge into a~bucket can then be done in constant time;
scanning all~$n$ buckets takes $\O(n+m)$ time.
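The pointer-only variant can be sketched in C as well (our names, our sketch). Vertices live in a linked list that fixes their order and carry their bucket heads; no array indexing is needed. Head insertion reverses the order within a bucket, so a stable variant would append at the tail instead.

```c
#include <stddef.h>

/* Pointer-machine-style buckets: the vertex list defines the order,
   each per-vertex structure holds the head of its bucket. */

struct edge;
struct vertex {
    struct vertex *next;          /* next vertex in the global order */
    struct edge   *bucket;        /* edges bucketed at this vertex */
};
struct edge {
    struct vertex *u, *v;
    struct edge *next;            /* next edge in the same bucket */
};

/* Put each edge into the bucket of its first endpoint: O(1) per edge. */
void distribute(struct edge **edges, size_t m) {
    for (size_t i = 0; i < m; i++) {
        struct edge *e = edges[i];
        e->next = e->u->bucket;
        e->u->bucket = e;
    }
}

/* Scan all buckets in vertex order: O(n + m) total. */
size_t collect(struct vertex *vertices, struct edge **out) {
    size_t k = 0;
    for (struct vertex *v = vertices; v; v = v->next)
        for (struct edge *e = v->bucket; e; e = e->next)
            out[k++] = e;
    return k;
}
```

The output array here is only a test convenience; on a genuine PM the collected edges would again form a linked list.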
\FIXME{Add an example of locally determined orders, e.g., tree isomorphism?}
%--------------------------------------------------------------------------------

\section{Data structures on the RAM}

There are many data structures designed specifically for the RAM, taking
advantage of both indexing and arithmetics. In many cases, they surpass the known
lower bounds for the same problem on the~PM and they often achieve constant time
per operation, at least when either the magnitude of the values or the size of
the data structure is suitably bounded.
A~classical result of this type are the trees of van Emde Boas~\cite{boas:vebt},
which represent a~subset of the integers $\{0,\ldots,U-1\}$, allowing insertion,
deletion and order operations (minimum, maximum, successor etc.) in time $\O(\log\log U)$,
regardless of the size of the subset. If we replace the heap used in Jarn\'\i{}k's
algorithm (\ref{jarnik}) by this structure, we immediately get an~algorithm
for finding the MST in integer-weighted graphs in time $\O(m\log\log w_{max})$,
where $w_{max}$ is the maximum weight. We will show later that it is even
possible to achieve linear time complexity for arbitrary integer weights.
A~real breakthrough has been made by Fredman and Willard, who introduced
the Fusion trees~\cite{fw:fusion} which again perform membership and predecessor
operations on a~set of $n$~integers, but this time with complexity $\O(\log_W n)$
per operation on a~Word-RAM with $W$-bit words. This of course assumes that
each element of the set fits in a~single word. As $W$ must be at least~$\log n$,
the operations take $\O(\log n/\log\log n)$ time and we are able to sort $n$~integers
in time~$o(n\log n)$. This was the beginning of a~long sequence of faster and
faster sorting algorithms, culminating with the work by Thorup and Han.
They have improved the time complexity of integer sorting to $\O(n\log\log n)$ deterministically~\cite{han:detsort}
and expected $\O(n\sqrt{\log\log n})$ for randomized algorithms~\cite{hanthor:randsort},
both in linear space.
Despite the recent progress, the corner-stone of most RAM data structures
is still the representation of data structures by integers introduced by Fredman
and Willard. It will also form a~basis for the rest of this chapter.

\FIXME{Add more history.}
%--------------------------------------------------------------------------------

\section{Bits and vectors}

In this rather technical section, we will show how the RAM can be used as a~vector
computer to operate in parallel on multiple elements, as long as these elements
fit in a~single machine word. At first sight this might seem useless, because we
cannot require large word sizes, but surprisingly often the elements are small
enough relative to the size of the algorithm's input and thus also relative to
the minimum possible word size. Also, as the following lemma shows, we can
easily emulate slightly longer words:
\lemman{Multiple-precision calculations}
Given a~RAM with $W$-bit words, we can emulate all calculation and control
instructions of a~RAM with word size $kW$ in time depending only on~$k$.
(This is usually called \df{multiple-precision arithmetics.})
We split each word of the ``big'' machine into $W'$-bit blocks, where $W'=W/2$, and store these
blocks in $2k$ consecutive memory cells. Addition, subtraction, comparison and
bitwise logical operations can be performed block-by-block. Shifts by a~multiple
of~$W'$ are trivial, otherwise we can combine each block of the result from
shifted versions of two original blocks.
To multiply two numbers, we can use the elementary school algorithm using the $W'$-bit
blocks as digits in base $2^{W'}$ --- the product of any two blocks fits
in a~single $W$-bit word.
Division is harder, but Newton-Raphson iteration (see~\cite{ito:newrap})
converges to the quotient in a~constant number of iterations, each of them
involving $\O(1)$ multiple-precision additions and multiplications. A~good
starting approximation can be obtained by dividing the two most-significant
(non-zero) blocks of both numbers.

Another approach to division is using the improved elementary school algorithm as described
by Knuth in~\cite{knuth:seminalg}. It uses $\O(k^2)$ steps, but the steps involve
calculation of the most significant bit set in a~word. We will show below that it
can be done in constant time, but we have to be careful to avoid division instructions.
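The block-by-block addition from the proof can be made concrete with a short C sketch. The parameters (four blocks of $W'=16$ bits, stored little-endian in separate machine words) are our illustrative choice; the point is only that each block is handled with one machine addition and the carry ripples upward.

```c
#include <stdint.h>

/* Multiple-precision addition: numbers of K*WP bits stored as K blocks
   of WP = 16 bits each (little-endian), one block per machine word. */

enum { WP = 16, K = 4 };                    /* 4 blocks of 16 bits = 64 bits */

/* r = a + b (mod 2^(K*WP)); O(K) word operations. */
void mp_add(const uint32_t a[K], const uint32_t b[K], uint32_t r[K]) {
    uint32_t carry = 0;
    for (int i = 0; i < K; i++) {
        uint32_t s = a[i] + b[i] + carry;   /* fits: s < 2^17 <= 2^32 */
        r[i] = s & 0xFFFF;                  /* low WP bits stay in the block */
        carry = s >> WP;                    /* the rest is carried on */
    }
}
```

Subtraction, comparison and the bitwise operations follow the same block-by-block pattern.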
\notan{Bit strings}\id{bitnota}%
We will work with binary representations of natural numbers by strings over the
alphabet $\{\0,\1\}$: we will use $\(x)$ for the number~$x$ written in binary,
$\(x)_b$ for the same padded to exactly $b$ bits by adding leading zeroes,
and $x[k]$ for the value of the $k$-th bit of~$x$ (with a~numbering of bits such that $2^k[k]=1$).
The usual conventions for operations on strings will be utilized: When $s$
and~$t$ are strings, we write $st$ for their concatenation and
$s^k$ for the string~$s$ repeated $k$~times.
When the meaning is clear from the context,
we will use $x$ and $\(x)$ interchangeably to avoid outbreak of symbols.
The \df{bitwise encoding} of a~vector ${\bf x}=(x_0,\ldots,x_{d-1})$ of~$b$-bit numbers
is an~integer~$x$ such that $\(x)=\(x_{d-1})_b\0\(x_{d-2})_b\0\ldots\0\(x_0)_b$, i.e.,
$x = \sum_i 2^{(b+1)i}\cdot x_i$. (We have interspersed the elements with \df{separator bits.})

\notan{Vectors}\id{vecnota}%
We will use boldface letters for vectors and the same letters in normal type
for their encodings. The elements of a~vector~${\bf x}$ will be written as
$x_0,\ldots,x_{d-1}$.
If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisfy
the condition $(b+1)d\le W$.
By using multiple-precision arithmetics, we can encode all vectors satisfying $bd=\O(W)$.
We will now describe how to translate simple vector manipulations to sequences of $\O(1)$ RAM operations
on their codes. As we are interested in asymptotic complexity only, we prefer clarity
of the algorithms over saving instructions. Among other things, we freely use calculations
on words of size $\O(bd)$, assuming that the Multiple-precision lemma comes to save us
when necessary.
First of all, let us observe that we can use $\band$ and $\bor$ with suitable constants
to write zeroes or ones to an~arbitrary set of bit positions at once. These operations
are usually called \df{bit masking}. Also, any element of a~vector can be extracted or
replaced by a~different value in $\O(1)$ time by masking and shifts.
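The masking-and-shifts access can be written out in C (our sketch and names). Field $i$ of the encoding occupies bits $(b+1)i$ through $(b+1)i+b-1$; the bit above each field is its separator.

```c
#include <stdint.h>

/* Extract element i of an encoded vector: shift and mask, O(1). */
uint64_t vec_get(uint64_t x, unsigned b, unsigned i) {
    return (x >> ((b + 1) * i)) & ((1ULL << b) - 1);
}

/* Replace element i by val (val < 2^b): clear the field, OR in the
   shifted value, O(1). The separator bits are left untouched. */
uint64_t vec_set(uint64_t x, unsigned b, unsigned i, uint64_t val) {
    uint64_t field = ((1ULL << b) - 1) << ((b + 1) * i);
    return (x & ~field) | (val << ((b + 1) * i));
}
```

For example, with $b=3$ the vector $(3,1,2)$ is encoded as `0x213`, and replacing its middle element by 7 yields `0x273`.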
\def\setslot#1{\setbox0=#1\slotwd=\wd0}
\def\slot#1{\hbox to \slotwd{\hfil #1\hfil}}
\def\[#1]{\slot{$#1$}}
\def\9{\rack{\0}{\hss$\cdot$\hss}}
\halign{\hskip 0.15\hsize\hfil $ ##$&\hbox to 0.6\hsize{${}##$ \hss}\cr
\algn{Operations on vectors with $d$~elements of $b$~bits each}
\:$\<Replicate>(\alpha)$ --- creates a~vector $(\alpha,\ldots,\alpha)$:

\alik{\<Replicate>(\alpha)=\alpha\cdot(\0^b\1)^d. \cr}

\:$\<Sum>(x)$ --- calculates the sum of the elements of~${\bf x}$, assuming that
the result fits in $b$~bits:

\alik{\<Sum>(x) = x \bmod \1^{b+1}. \cr}

This is correct because when we calculate modulo~$\1^{b+1}$, the number $2^{b+1}=\1\0^{b+1}$
is congruent to~1 and thus $x = \sum_i 2^{(b+1)i}\cdot x_i \equiv \sum_i 1^i\cdot x_i \equiv \sum_i x_i$.
As the result should fit in $b$~bits, the modulo makes no difference.
If we want to avoid division, we can use double-precision multiplication instead:

\setslot{\hbox{~$\0x_{d-1}$}}
\def\dd{\slot{$\cdots$}}
\def\vd{\slot{$\vdots$}}
\def\rule{\noalign{\medskip\nointerlineskip}$\hrulefill$\cr\noalign{\nointerlineskip\medskip}}
\[\0x_{d-1}] \dd \[\0x_2] \[\0x_1] \[\0x_0] \cr
*~~ \z \dd \z\z\z \cr
\[x_{d-1}] \dd \[x_2] \[x_1] \[x_0] \cr
\[x_{d-1}] \[x_{d-2}] \dd \[x_1] \[x_0] \. \cr
\[x_{d-1}] \[x_{d-2}] \[x_{d-3}] \dd \[x_0] \. \. \cr
\vd\vd\vd\vd\.\.\.\cr
\[x_{d-1}] \dd \[x_2]\[x_1]\[x_0] \. \. \. \. \cr
\[r_{d-1}] \dd \[r_2] \[r_1] \[s_d] \dd \[s_3] \[s_2] \[s_1] \cr

This way, we even get the vector of all partial sums:
$s_k=\sum_{i=0}^{k-1}x_i$, $r_k=\sum_{i=k}^{d-1}x_i$.
\:$\<Cmp>(x,y)$ --- element-wise comparison of~vectors ${\bf x}$ and~${\bf y}$,
i.e., a~vector ${\bf z}$ such that $z_i=1$ if $x_i<y_i$ and $z_i=0$ otherwise.

We replace the separator zeroes in~$x$ by ones and subtract~$y$. These ones
change back to zeroes exactly at the positions where $x_i<y_i$ and they stop
carries from propagating, so the fields do not interact with each other:

\setslot{\vbox{\hbox{~$x_{d-1}$}\hbox{~$y_{d-1}$}}}
\def\9{\rack{\0}{\hss ?\hss}}
\1 \[x_{d-1}] \1 \[x_{d-2}] \[\cdots] \1 \[x_1] \1 \[x_0] \cr
-~ \0 \[y_{d-1}] \0 \[y_{d-2}] \[\cdots] \0 \[y_1] \0 \[y_0] \cr
\9 \[\ldots] \9 \[\ldots] \[\cdots] \9 \[\ldots] \9 \[\ldots] \cr

It only remains to shift the separator bits to the right positions, negate them
and mask out all other bits.
\:$\<Rank>(x,\alpha)$ --- returns the number of elements of~${\bf x}$ which are less than~$\alpha$,
assuming that the result fits in~$b$ bits:
\<Rank>(x,\alpha) = \<Sum>(\<Cmp>(x,\<Replicate>(\alpha))). \cr

\:$\<Insert>(x,\alpha)$ --- inserts~$\alpha$ into a~sorted vector $\bf x$:

We calculate $k = \<Rank>(x,\alpha)$ first, then insert~$\alpha$ as the $k$-th
field of~$\bf x$ using masking operations and shifts.
\:$\<Unpack>(\alpha)$ --- creates a~vector whose elements are the bits of~$\(\alpha)_d$.
In other words, inserts blocks~$\0^b$ between the bits of~$\alpha$. Assuming that $b\ge d$,
we can do it as follows:

\:$x\=\<Replicate>(\alpha)$.
\:$y\=(2^{b-1},2^{b-2},\ldots,2^0)$. \cmt{bitwise encoding of this vector}
\:$z\=x \band y$.
\:Return $\<Cmp>(z,y)$.
Let us observe that $z_i$ is either zero or equal to~$y_i$ depending on the value
of the $i$-th bit of the number~$\alpha$. Comparing it with~$y_i$ normalizes it
to either zero or one.
\:$\<Unpack>_\pi(\alpha)$ --- like \<Unpack>, but changes the order of the
bits according to a~fixed permutation~$\pi$: The $i$-th element of the
resulting vector is equal to~$\alpha[\pi(i)]$.

Implemented as above, but with a~mask $y=(2^{\pi(b-1)},\ldots,2^{\pi(0)})$.
\:$\<Pack>(x)$ --- the inverse of \<Unpack>: given a~vector of zeroes and ones,
it produces a~number whose bits are the elements of the vector (in other words,
it crosses out the $\0^b$ blocks).

We interpret~$x$ as an~encoding of a~vector with elements one bit shorter
and we sum these elements. For example, when $d=4$ and~$b=4$:
\setslot{\hbox{$x_3$}}
\def\|{\hskip1pt\vrule height 10pt depth 4pt\hskip1pt}
\def\.{\hphantom{\|}}
\|\z\.\z\.\z\.\z\.\[x_3]\|\z\.\z\.\z\.\z\.\[x_2]\|\z\.\z\.\z\.\z\.\[x_1]\|\z\.\z\.\z\.\z\.\[x_0]\|\cr
\|\z\.\z\.\z\.\z\|\[x_3]\.\z\.\z\.\z\|\z\.\[x_2]\.\z\.\z\|\z\.\z\.\[x_1]\.\z\|\z\.\z\.\z\.\[x_0]\|\cr
However, this ``reformatting'' does not produce a~correct encoding of a~vector,
because the separator zeroes are missing. For this reason, the implementation
of~\<Sum> using modulo does not work correctly (it produces $\0^b$ instead of $\1^b$).
We therefore use the technique based on multiplication instead, which does not need
the separators. (Alternatively, we can observe that $\1^b$ is the only case
affected, so we can handle it separately.)
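The operations above can be exercised in C for small parameters. This is our own sketch, not the thesis's formalism: it requires $(b+1)d\le 63$ and, for \<Unpack>/\<Pack>, $b\ge d$; the masks built by loops here would be precomputed constants on the RAM, and our \<Unpack> orients the comparison so that a set bit directly normalizes to one.

```c
#include <stdint.h>

/* The constant (0^b 1)^d, manufactured as (2^((b+1)d)-1)/(2^(b+1)-1). */
uint64_t vec_ones(unsigned b, unsigned d) {
    return ((1ULL << ((b + 1) * d)) - 1) / ((1ULL << (b + 1)) - 1);
}

/* Replicate: the vector (alpha, ..., alpha). */
uint64_t vec_replicate(uint64_t alpha, unsigned b, unsigned d) {
    return alpha * vec_ones(b, d);
}

/* Sum of all fields, assuming the result fits in b bits: modulo 1^(b+1). */
uint64_t vec_sum(uint64_t x, unsigned b) {
    return x % ((1ULL << (b + 1)) - 1);
}

/* Cmp: field i of the result is 1 iff x_i < y_i. The separator above
   field i survives the subtraction exactly when x_i >= y_i. */
uint64_t vec_cmp(uint64_t x, uint64_t y, unsigned b, unsigned d) {
    uint64_t sep = vec_ones(b, d) << b;
    return (~((x | sep) - y) & sep) >> b;
}

/* Rank: the number of fields smaller than alpha (fits in b bits). */
uint64_t vec_rank(uint64_t x, uint64_t alpha, unsigned b, unsigned d) {
    return vec_sum(vec_cmp(x, vec_replicate(alpha, b, d), b, d), b);
}

/* Unpack: field i of the result is bit i of alpha (needs b >= d). */
uint64_t vec_unpack(uint64_t alpha, unsigned b, unsigned d) {
    uint64_t y = 0, sep = vec_ones(b, d) << b;
    for (unsigned i = 0; i < d; i++) y |= 1ULL << ((b + 1) * i + i);
    uint64_t z = vec_replicate(alpha, b, d) & y;   /* z_i is 0 or 2^i */
    return (((z | sep) - y) & sep) >> b;           /* 1 iff z_i = y_i */
}

/* Pack: inverse of Unpack for a 0/1 vector. Summing the fields on a
   grid of width b instead of b+1 shifts field i by exactly i bits;
   one multiplication adds up all the shifted copies, and all partial
   products occupy distinct bit positions, so no carries occur. */
uint64_t vec_pack(uint64_t x, unsigned b, unsigned d) {
    uint64_t c = 0;
    for (unsigned j = 0; j < d; j++) c |= 1ULL << (b * j);
    return ((x * c) >> (b * (d - 1))) & ((1ULL << d) - 1);
}
```

For instance, with $b=d=4$ the vector $(3,1,2,7)$ is encoded as `0x38823` and $(2,5,2,0)$ as `0x8A2`; `vec_cmp` then yields the encoding of $(0,1,0,0)$, and `vec_rank` counts the two elements below 3.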
We can use the aforementioned tricks to perform interesting operations on individual
numbers in constant time, too. Let us assume for a~while that we are
operating on $b$-bit numbers and the word size is at least~$b^2$.
This enables us to make use of intermediate vectors with $b$~elements
of $b$~bits each.

\algn{Integer operations in quadratic workspace}\id{lsbmsb}
\:$\<Weight>(\alpha)$ --- computes the Hamming weight of~$\alpha$, i.e., the number of ones in~$\(\alpha)$.

We perform \<Unpack> and then \<Sum>.

\:$\<Permute>_\pi(\alpha)$ --- shuffles the bits of~$\alpha$ according
to a~fixed permutation~$\pi$.

We perform $\<Unpack>_\pi$ and \<Pack> back.
\:$\<LSB>(\alpha)$ --- finds the least significant bit of~$\alpha$,
i.e., the smallest~$i$ such that $\alpha[i]=1$.

By a~combination of subtraction with $\bxor$, we create a~number
which contains ones exactly at the position of $\<LSB>(\alpha)$ and below:

\alpha&= \9\9\9\9\9\1\0\0\0\0\cr
\alpha-1&= \9\9\9\9\9\0\1\1\1\1\cr
\alpha\bxor(\alpha-1)&= \0\0\0\0\0\1\1\1\1\1\cr

Then we calculate the \<Weight> of the result and subtract~1.
\:$\<MSB>(\alpha)$ --- finds the most significant bit of~$\alpha$ (the position
of the highest bit set).

Reverse the bits of the number~$\alpha$ first by calling \<Permute>, then apply \<LSB>
and subtract the result from~$b-1$.
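The \<Weight> and \<LSB> operations can be instantiated in C for $b=7$: a 7-bit number is spread into 7 byte-wide fields of one 64-bit word, so the assumption $W\ge b^2$ holds. The sketch and its names are ours; the three constants are instances of the manufactured patterns, and \<MSB> is omitted since it additionally needs the bit-reversal \<Permute>.

```c
#include <stdint.h>

static const uint64_t REP7 = 0x01010101010101ULL; /* (0^7 1)^7         */
static const uint64_t POW7 = 0x40201008040201ULL; /* field i holds 2^i */
static const uint64_t SEP7 = 0x80808080808080ULL; /* separator bits    */

/* Weight: Unpack alpha into fields, normalize each to 0/1, then Sum. */
unsigned weight7(uint8_t a) {                     /* a < 128 */
    uint64_t z  = ((uint64_t)a * REP7) & POW7;    /* z_i = 2^i iff bit i set */
    uint64_t ge = (((z | SEP7) - POW7) & SEP7) >> 7;  /* normalize to 0/1 */
    return ge % 0xFF;                             /* Sum: 2^8 == 1 (mod 255) */
}

/* LSB: alpha xor (alpha - 1) has ones exactly at the LSB and below,
   so the answer is the weight of that number minus one. */
unsigned lsb7(uint8_t a) {                        /* 0 < a < 128 */
    uint8_t m = a ^ (uint8_t)(a - 1);
    return weight7(m) - 1;
}
```

Every step is a constant number of word operations, matching the claimed $\O(1)$ time in quadratic workspace.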
As noted by Brodnik~\cite{brodnik:lsb} and others, the space requirements of
the \<LSB> operation can be reduced to linear. We split the input to $\sqrt{b}$
blocks of $\sqrt{b}$ bits each. Then we determine which blocks are non-zero and
identify the lowest such block (this is a~\<LSB> of a~number whose bits
correspond to the blocks). Finally we calculate the \<LSB> of this block. In
both calls to \<LSB>, we have a $\sqrt{b}$-bit number in a~$b$-bit word, so we
can use the previous algorithm. The same trick of course works for finding the
\<MSB> as well.

The following algorithm shows the details.
\algn{LSB in linear workspace}

\algin A~$w$-bit number~$\alpha$.
\:$b\=\lceil\sqrt{w}\,\rceil$. \cmt{size of a~block}
\:$\ell\=b$. \cmt{the number of blocks is the same}
\:$x\=(\alpha \band (\0\1^{b-1})^\ell) \bor ((\alpha \band (\1\0^{b-1})^\ell) \shr (b-1))$.
\cmt{encoding of a~vector~${\bf x}$ such that $x_i\ne 0$ iff the $i$-th block is non-zero}%
\foot{Why is this so complicated? It is tempting to take $\alpha$ itself as a~code of this vector,
but we unfortunately need the separator bits between elements, so we create them and
relocate the bits we have overwritten.}
\:$y\=\<Cmp>(0,x)$. \cmt{$y_i=1$ if the $i$-th block is non-zero, otherwise $y_i=0$}
\:$\beta\=\<Pack>(y)$. \cmt{each block compressed to a~single bit}
\:$p\=\<LSB>(\beta)$. \cmt{the index of the lowest non-zero block}
\:$\gamma\=(\alpha \shr bp) \band \1^b$. \cmt{the contents of that block}
\:$q\=\<LSB>(\gamma)$. \cmt{the lowest bit set there}
\algout $\<LSB>(\alpha) = bp+q$.
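A concrete C instance of this algorithm for $w=49$, $b=\ell=7$ follows (our sketch, with our names): seven blocks of seven bits, with both inner \<LSB> calls served by the quadratic-workspace routine for 7-bit numbers, and \<Pack> realized by a single multiplication whose partial products land at distinct bit positions.

```c
#include <stdint.h>

static const uint64_t REP7 = 0x01010101010101ULL; /* (0^7 1)^7         */
static const uint64_t POW7 = 0x40201008040201ULL; /* field i holds 2^i */
static const uint64_t SEP7 = 0x80808080808080ULL; /* separator bits    */

static unsigned lsb7(uint8_t a) {                 /* 0 < a < 128 */
    uint8_t m = a ^ (uint8_t)(a - 1);             /* ones at LSB and below */
    uint64_t z  = ((uint64_t)m * REP7) & POW7;
    uint64_t ge = (((z | SEP7) - POW7) & SEP7) >> 7;
    return ge % 0xFF - 1;                         /* Weight(m) - 1 */
}

unsigned lsb49(uint64_t a) {                      /* 0 < a < 2^49 */
    const uint64_t ones = 0x40810204081ULL;       /* 2^0 + 2^7 + ... + 2^42 */
    const uint64_t low  = ones * 0x3F;            /* (0 1^6)^7 */
    const uint64_t top  = ones << 6;              /* (1 0^6)^7 */
    /* x_i != 0 iff block i of alpha is non-zero; the top bit of each
       block is relocated downwards, freeing it as a separator. */
    uint64_t x = (a & low) | ((a & top) >> 6);
    /* y_i = 1 iff x_i > 0: the separator in Cmp(0,x) survives iff x_i = 0. */
    uint64_t y = (((top - x) & top) ^ top) >> 6;
    /* Pack: one multiplication gathers the 7 field bits (at positions
       7i) into 7 consecutive bits; all partial products are distinct. */
    uint64_t c = 0;
    for (int j = 0; j < 7; j++) c |= 1ULL << (42 - 6 * j);
    uint8_t beta  = (uint8_t)(((y * c) >> 42) & 0x7F);
    unsigned p    = lsb7(beta);                   /* lowest non-zero block */
    uint8_t gamma = (uint8_t)((a >> (7 * p)) & 0x7F); /* its contents */
    return 7 * p + lsb7(gamma);                   /* b*p + q */
}
```

Only word-sized intermediate values appear, which is exactly the point of the linear-workspace version.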
We have used plenty of constants which depend on the format of the vectors.
Either we can write non-uniform programs (see \ref{nonuniform}) and use native constants,
or we can observe that all such constants can be easily manufactured. For example,
$(\0^b\1)^d = \1^{(b+1)d} / \1^{b+1} = (2^{(b+1)d}-1)/(2^{b+1}-1)$. The only exceptions
are the~$w$ and~$b$ in the LSB algorithm \ref{lsb}, which we are unable to produce