From: Martin Mares
Date: Wed, 5 Mar 2008 13:14:05 +0000 (+0100)
Subject: Fixes to the QH.
X-Git-Tag: printed~193
X-Git-Url: http://mj.ucw.cz/gitweb/?a=commitdiff_plain;h=1c4ce70d3e9ed4312f5f1d22658eafdea82d4d98;p=saga.git

Fixes to the QH.
---

diff --git a/adv.tex b/adv.tex
index 7ae951d..77ac939 100644
--- a/adv.tex
+++ b/adv.tex
@@ -525,7 +525,9 @@ We modify the first pass of the algorithm to choose $t=\log n$ and use the Q-hea
 of the Fibonacci heap. From Theorem \ref{qh} and Remark \ref{qhtreerem} we know
 that the operations on the Q-heap tree run in constant time, so the modified
 first phase takes time~$\O(m)$. Following the analysis of the original
 algorithm in the proof of Theorem \ref{itjarthm} we obtain
-$t_2\ge 2^{t_1} = 2^{\log n} = n$, so the algorithm stops after the second phase.
+$t_2\ge 2^{t_1} = 2^{\log n} = n$, so the algorithm stops after the second phase.\foot{%
+Alternatively, we can use the Q-heaps directly with $k=\log^{1/4}n$ and then stop
+after the third phase.}
 \qed
 \rem
diff --git a/ram.tex b/ram.tex
index 9e0fcc6..1f8705e 100644
--- a/ram.tex
+++ b/ram.tex
@@ -650,7 +650,7 @@ in constant time, but so far only under the assumption that the number of these
 values is small enough and that the values themselves are also small enough (so
 that the whole set fits in $\O(1)$ machine words). Now we will show how to lift
 the restriction on the magnitude of the values and still keep constant time
-complexity. We will describe a~simplified version of the Q-heaps developed by
+complexity. We will describe a~slightly simplified version of the Q-heaps developed by
 Fredman and Willard in~\cite{fw:transdich}.
 
 The Q-heap represents a~set of at most~$k$ word-sized integers, where $k\le W^{1/4}$
@@ -659,20 +659,20 @@ of minimum and maximum, and other operations described below, in constant time,
 we are willing to spend~$\O(2^{k^4})$ time on preprocessing.
 
 The exponential-time preprocessing may sound alarming, but a~typical application uses
-Q-heaps of size $k=\log^{1/4} N$, where $N$ is the size of the algorithm's input,
-which guarantees that $k\le W^{1/4}$ and $\O(2^{k^4}) = \O(N)$. Let us however
+Q-heaps of size $k=\log^{1/4} N$, where $N$ is the size of the algorithm's input.
+This guarantees that $k\le W^{1/4}$ and $\O(2^{k^4}) = \O(N)$. Let us however
 remark that the whole construction is primarily of theoretical importance and
 that the huge constants involved everywhere make these heaps useless
-for practical algorithms. However, many of the tricks used prove themselves
+in practical algorithms. Many of the tricks used, however, prove themselves
 useful even in real-life implementations.
 
-Preprocessing makes it possible to precompute tables for almost arbitrary functions
-and then assume that they can be evaluated in constant time:
+Spending the time on preprocessing makes it possible to precompute tables for
+almost arbitrary functions and then assume that they can be evaluated in
+constant time:
 
 \lemma\id{qhprecomp}%
 When~$f$ is a~function computable in polynomial time, $\O(2^{k^4})$ time is enough
-to precompute a~table of the values of~$f$ for the values of its arguments whose total
-bit size is $\O(k^3)$.
+to precompute a~table of the values of~$f$ for all arguments whose total size is $\O(k^3)$ bits.
 
 \proof
 There are $2^{\O(k^3)}$ possible combinations of arguments of the given size and for each of
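The hunk above touches Lemma \ref-style qhprecomp, which turns any polynomial-time
function of arguments totalling $\O(k^3)$ bits into a~constant-time table lookup
after $\O(2^{k^4})$ preprocessing. The following C sketch only illustrates that
generic pattern --- it is not part of the thesis or of this patch, and the function
names, the 32-bit packing of arguments and the use of malloc are our own assumptions.

    /* Illustration only: the table-precomputation pattern of the lemma.
     * A function f whose arguments are packed into B = O(k^3) bits is
     * evaluated once for every possible argument; 2^{O(k^3)} entries times
     * poly(k) work per entry stays within the O(2^{k^4}) budget, and every
     * later evaluation of f is a single array lookup. */
    #include <stdint.h>
    #include <stdlib.h>

    typedef uint32_t (*slow_fn)(uint32_t packed_args);

    static uint32_t *precompute(slow_fn f, unsigned arg_bits)
    {
        size_t entries = (size_t)1 << arg_bits;   /* 2^{O(k^3)} table entries */
        uint32_t *table = malloc(entries * sizeof *table);
        if (!table)
            return NULL;
        for (size_t a = 0; a < entries; a++)
            table[a] = f((uint32_t)a);            /* poly(k) time per entry */
        return table;
    }

With the typical choice $k=\log^{1/4} N$ the table has only $2^{\O(\log^{3/4} N)}$
entries, so the whole precomputation indeed fits into the $\O(2^{k^4})=\O(N)$ bound
quoted in the hunk above.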
@@ -682,18 +682,18 @@ to observe that $2^{\O(k^3)}\cdot \O(k^c) = \O(2^{k^4})$.
 \para
 We will first show an~auxiliary construction based on tries and then derive
-the actual definition of the Q-heap from it.
+the real definition of the Q-heap from it.
 
 \nota
 Let us introduce some notation first:
 \itemize\ibull
 \:$W$ --- the word size of the RAM,
 \:$k = \O(W^{1/4})$ --- the limit on the size of the heap,
-\:$n\le k$ --- the number of elements in the represented set,
+\:$n\le k$ --- the number of elements stored in the heap,
 \:$X=\{x_1, \ldots, x_n\}$ --- the elements themselves: distinct $W$-bit numbers
 indexed in a~way that $x_1 < \ldots < x_n$,
-\:$g_i = \(x_i \bxor x_{i+1})$ --- the most significant bit of those in which $x_i$ and~$x_{i+1}$ differ,
-\:$R_X(x)$ --- the rank of~$x$ in~$X$, that is the number of elements of~$X$, which are less than~$x$
+\:$g_i = \(x_i \bxor x_{i+1})$ --- the position of the most significant bit in which $x_i$ and~$x_{i+1}$ differ,
+\:$R_X(x)$ --- the rank of~$x$ in~$X$, that is the number of elements of~$X$ which are less than~$x$
 (where $x$~itself need not be an~element of~$X$).\foot{We will dedicate the whole chapter \ref{rankchap} to the
 study of various ranks.}
 \endlist
@@ -702,22 +702,24 @@ study of various ranks.}
 \defn
 A~\df{trie} for a~set of strings~$S$ over a~finite alphabet~$\Sigma$ is a~rooted
 tree whose vertices are the prefixes of the strings in~$S$ and there is
 an~edge going from a~prefix~$\alpha$ to a~prefix~$\beta$ iff $\beta$ can be
-obtained from~$\alpha$ by adding a~single symbol of the alphabet. The edge
-will be labeled with the particular symbol. We will also define a~\df{letter depth}
-of a~vertex to be the length of the corresponding prefix and mark the vertices
+obtained from~$\alpha$ by appending a~single symbol of the alphabet. The edge
+will be labeled with the particular symbol. We will also define the~\df{letter depth}
+of a~vertex as the length of the corresponding prefix. We mark the vertices
 which match a~string of~$S$.
 
-A~\df{compressed trie} is obtained from the trie by removing the vertices of outdegree~1.
-Whereever is a~directed path whose internal vertices have outdegree~1, we replace this
-path by a~single edge labeled with the contatenation of the original edge's labels.
+A~\df{compressed trie} is obtained from the trie by removing the vertices of outdegree~1
+except for the root and marked vertices.
+Wherever there is a~directed path whose internal vertices have outdegree~1 and carry
+no mark, we replace this path by a~single edge labeled with the concatenation
+of the original edges' labels.
 
-In both kinds of tries, we will order the outgoing edges of every vertex by their labels
+In both kinds of tries, we order the outgoing edges of every vertex by their labels
 lexicographically.
 
 \obs
-In both tries, the root of the tree is the empty word and for every vertex, the
+In both tries, the root of the tree is the empty word and for every other vertex, the
 corresponding prefix is equal to the concatenation of edge labels on the path
-leading from the root to the vertex. The letter depth of the vertex is equal to
+leading from the root to that vertex. The letter depth of the vertex is equal to
 the total size of these labels. All leaves correspond to strings in~$S$, but so
 can some internal vertices if there are two strings in~$S$ such that one is
 a~prefix of the other.
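In the Q-heap application the alphabet is binary and the strings are the $W$-bit
encodings of the elements, so the compressed trie branches exactly at the bit
positions $g_i$ introduced in the notation above. A~short C sketch of how these
guides are obtained on a~word-RAM --- it is ours, not the thesis's or the patch's,
and it assumes $W=64$ and 0-based array indexing:

    /* Illustration only: the positions g_i at which consecutive elements of
     * the sorted set X = {x_1 < ... < x_n} differ for the first time.  Bit
     * positions are counted with the least significant bit as 0, so g[i] is
     * the index of the most significant bit of x[i] xor x[i+1]. */
    #include <stdint.h>

    static unsigned msb_pos(uint64_t v)     /* index of the highest set bit */
    {
        unsigned p = 0;
        while (v >>= 1)
            p++;
        return p;
    }

    static void compute_guides(const uint64_t *x, unsigned n, unsigned *g)
    {
        /* x[0..n-1] sorted and distinct; g[i] plays the role of g_{i+1} */
        for (unsigned i = 0; i + 1 < n; i++)
            g[i] = msb_pos(x[i] ^ x[i + 1]);
    }

The thesis obtains the most significant bit in constant time by other means; the
loop above only stands in for that operation for the sake of clarity.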
@@ -727,7 +729,7 @@ distinct and when we compress the trie, no two such labels have share their init
 symbols.
 
 This allows us to search in the trie efficiently: when looking for a~string~$x$,
 we follow the path from the root and whenever we visit an~internal vertex of
 letter depth~$d$, we test the $d$-th character of~$x$,
-follow the edge whose label starts with this character, and check that the
+we follow the edge whose label starts with this character, and we check that the
 rest of the label matches.
 
 The compressed trie is also efficient in terms of space consumption --- it has
@@ -736,7 +738,7 @@ and all edge labels can be represented in space linear in the sum of the
 lengths of the strings in~$S$.
 
 \defn
-For our set~$X$, we will define~$T$ as a~compressed trie for the set of binary
+For our set~$X$, we define~$T$ as a~compressed trie for the set of binary
 encodings of the numbers~$x_i$, padded to exactly $W$~bits, i.e., for $S = \{ \(x)_W ; x\in X \}$.
 
 \obs
@@ -750,24 +752,24 @@ is exactly~$W-1-g_i$.
 \para
 Let us now modify the algorithm for searching in the trie and make it compare
 only the first symbols of the edges. In other words, we will test only the bits~$g_i$
-which will be called \df{guides,} as they guide us through the tree. For $x\in
+which will be called \df{guides} (as they guide us through the tree). For $x\in
 X$, the modified algorithm will still return the correct leaf. For all~$x$ outside~$X$
 it will no longer fail and instead it will land on some leaf~$x_i$. At the
-first sight this vertex may seem unrelated, but we will show that it can be
+first sight the number~$x_i$ may seem unrelated, but we will show that it can be
 used to determine the rank of~$x$ in~$X$, which will later form a~basis for all
-other Q-heap operations:
+Q-heap operations:
 
 \lemma\id{qhdeterm}%
 The rank $R_X(x)$ is uniquely determined by a~combination of:
 \itemize\ibull
 \:the trie~$T$,
-\:the index~$i$ of the leaf visited when searching for~$x$ in~$T$,
+\:the index~$i$ of the leaf found when searching for~$x$ in~$T$,
 \:the relation ($<$, $=$, $>$) between $x$ and $x_i$,
 \:the bit position $b=\(x\bxor x_i)$ of the first disagreement between~$x$ and~$x_i$.
 \endlist
 
 \proof
-If $x\in X$, we detect that from $x_i=x$ and the rank is obviously~$i$ itself.
+If $x\in X$, we detect that from $x_i=x$ and the rank is obviously~$i-1$.
 Let us assume that $x\not\in X$ and imagine that we follow the same path as when
 searching for~$x$, but this time we check the full edge labels. The position~$b$
 is the first position
@@ -775,14 +777,14 @@ where~$\(x)$ disagrees with a~label. Before this point, all edges not taken by
 the search were leading either to subtrees containing elements all smaller than~$x$
 or all larger than~$x$ and the only values not known yet are those in the subtree
 below the edge which we currently consider. Now if $x[b]=0$ (and therefore $x<x_i$),

 $\delta>0$ there exists a~data structure capable of maintaining the minimum
 of a~set of at most~$r$ word-sized numbers under insertions and deletions.
 Each operation takes $\O(1)$ time on a~Word-RAM
-with word size at least $\O(r^{\delta})$, after spending time
+with word size $W=\Omega(r^{\delta})$, after spending time
 $\O(2^{r^\delta})$ on precomputing of tables.
 
 \proof
-Choose $\delta' \le \delta$ such that $r^{\delta'} = \O(W^{1/4})$, where $W$
-is the word size of the RAM. Build a~Q-heap tree of depth $d=\lceil \delta/\delta'\rceil$
-containing Q-heaps of size $k=r^{\delta'}$.
-\qed
+Choose $\delta' \le \delta$ such that $r^{\delta'} = \O(W^{1/4})$. Build
+a~Q-heap tree of depth $d=\lceil \delta/\delta'\rceil$ containing Q-heaps of
+size $k=r^{\delta'}$.
+\qed
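The guided search introduced before the rank lemma above can be simulated directly
on the sorted array: the subtrie spanning a~consecutive range of leaves branches at
the largest guide inside that range, and only that single bit of the query is tested.
The sketch below is our own illustration of that behaviour, not code from the thesis
or the patch; it assumes $W=64$, 0-based indices and the guide array of the previous
sketch.

    /* Illustration only: the search that compares just the guide bits.
     * For a query q it returns the index of the leaf x_i it lands on; for
     * q in X this is the correct leaf, otherwise the rank lemma extracts
     * R_X(q) from i, the order of q and x_i, and their first disagreeing
     * bit msb(q xor x_i).  The linear scan for the maximum guide stands in
     * for the constant-time machinery of the real Q-heap. */
    #include <stdint.h>

    static unsigned guided_search(const uint64_t *x, const unsigned *g,
                                  unsigned n, uint64_t q)
    {
        unsigned lo = 0, hi = n - 1;        /* current range of leaves */
        while (lo < hi) {
            unsigned m = lo;                /* adjacent pair with the largest guide */
            for (unsigned j = lo; j < hi; j++)
                if (g[j] > g[m])
                    m = j;
            if ((q >> g[m]) & 1)            /* test the single guide bit of q */
                lo = m + 1;                 /* bit 1: continue in the right subtrie */
            else
                hi = m;                     /* bit 0: continue in the left subtrie */
        }
        return lo;
    }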
 
 \rem\id{qhtreerem}%
 When we have an~algorithm with input of size~$N$, the word size is at least~$\log N$
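One possible instantiation of the corollary in the setting of this remark: with input
size~$N$ the word size~$W$ is at least $\log N$, so taking $r=\log N$, $\delta=1$ and
$\delta'=1/4$ gives Q-heaps of $k=W^{1/4}\ge\log^{1/4}N$ elements and a~Q-heap tree of
depth $d=\lceil\delta/\delta'\rceil=4$ maintaining $r=k^4=\log N$ elements --- the
constant-time heap of size $t=\log n$ used by the adv.tex hunk at the top. The worked
numbers below are our own example, not part of the thesis or the patch.

    /* Illustration only (example numbers are ours): the parameter choice of
     * the corollary in the typical setting of the remark.  Input size
     * N = 2^64, so the word size W is at least log N = 64; each Q-heap then
     * holds k = W^{1/4} elements and a Q-heap tree of depth 4 maintains
     * k^4 = W >= log N elements. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double log_N = 64.0;               /* input of size N = 2^64 */
        double W = log_N;                  /* word size, at least log N */
        double k = pow(W, 0.25);           /* elements per Q-heap */
        printf("k = %.2f elements per Q-heap, depth-4 capacity = %.0f, log N = %.0f\n",
               k, pow(k, 4.0), log_N);
        return 0;
    }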