From bb8e054f3af247b5480a48a18c97723a364b89b7 Mon Sep 17 00:00:00 2001
From: Martin Mares
Date: Tue, 4 Mar 2008 16:44:26 +0100
Subject: [PATCH] Q-heap trees and MST with integer weights.

---
 adv.tex    | 24 +++++++++++++++++++++---
 macros.tex |  1 +
 ram.tex    | 39 ++++++++++++++++++++++++++++++++++++---
 3 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/adv.tex b/adv.tex
index 2941a98..f50821d 100644
--- a/adv.tex
+++ b/adv.tex
@@ -507,8 +507,28 @@ at least~$\log^{(k)} n$ for some $k\in{\bb N}^+$, it runs in time~$\O(km)$.
 If $m/n \ge \log^{(k)} n$, then $\beta(m,n)\le k$.
 \qed
 
+\obs
+The algorithm spends most of the time in phases which have small heaps. Once the
+heap grows to $\Omega(\log^{(k)} n)$ for any fixed~$k$, the graph gets dense enough
+to guarantee that at most~$k$ phases remain. This means that if we are able to
+construct a~heap of size $\Omega(\log^{(k)} n)$ with constant time per operation,
+we can get a~linear-time algorithm for MST. This is the case when the weights are
+integers:
+
+\thmn{MST for graphs with integer weights, Fredman and Willard \cite{fw:transdich}}
+MST of a~graph with integer edge weights can be found in time $\O(m)$ on the Word-RAM.
+
+\proof
+We will combine the Iterated Jarn\'\i{}k's algorithm with the Q-heaps from section \ref{qheaps}.
+We modify the first phase of the algorithm to choose $t_1=\log n$ and use the Q-heap tree instead
+of the Fibonacci heap. From Theorem \ref{qh} and Remark \ref{qhtreerem} we know that the
+operations on the Q-heap tree run in constant time, so the whole first phase takes time~$\O(m)$.
+Following the analysis of the original algorithm in the proof of Theorem \ref{itjarthm} we obtain
+$t_2\ge 2^{t_1} = 2^{\log n} = n$, so the algorithm stops after the second phase.
+\qed
+
 \rem
-Gabow et al.~\cite{gabow:mst} have shown how to speed this algorithm up to~$\O(m\log\beta(m,n))$.
+Gabow et al.~\cite{gabow:mst} have shown how to speed up the Iterated Jarn\'\i{}k's algorithm to~$\O(m\log\beta(m,n))$.
 They split the adjacency lists of the vertices to small buckets, keep each bucket
 sorted and consider only the lightest edge in each bucket until it is removed.
 The mechanics of the algorithm is complex and there is a~lot of technical details
@@ -516,8 +536,6 @@ which need careful handling, so we omit the description of this algorithm.
 
 \FIXME{Reference to Chazelle.}
 
-\FIXME{Reference to Q-Heaps.}
-
 %--------------------------------------------------------------------------------
 
 %\section{Verification of minimality}
diff --git a/macros.tex b/macros.tex
index 3e0ba4e..79998b1 100644
--- a/macros.tex
+++ b/macros.tex
@@ -351,6 +351,7 @@
 \def\thmn{\thm\labelx}
 \def\lemman{\lemma\labelx}
 \def\defnn{\defn\labelx}
+\def\corn{\cor\labelx}
 \def\algn{\alg\label}
 \def\notan{\nota\labelx}
 \def\examplen{\example\labelx}
diff --git a/ram.tex b/ram.tex
index a261545..9e0fcc6 100644
--- a/ram.tex
+++ b/ram.tex
@@ -936,7 +936,7 @@ per Q-heap operation. As Fredman and Willard have shown, it is possible
 to maintain a~``decoder'', whose state is stored in $\O(1)$ machine words,
 and which helps us to extract $x[B]$ in a~constant number of operations:
 
-\lemman{Extraction of bits, Fredman and Willard \cite{fw:transdich}}\id{qhxtract}%
+\lemman{Extraction of bits}\id{qhxtract}%
 Under the assumptions on~$k$, $W$ and the preprocessing time as in the Q-heaps,
 it is possible to maintain a~data structure for a~set~$B$ of bit positions,
 which allows~$x[B]$ to be extracted in $\O(1)$ time for an~arbitrary~$x$.
@@ -949,7 +949,8 @@ See Fredman and Willard \cite{fw:transdich}.
 
 \para
 This was the last missing bit of the mechanics of the Q-heaps. We are
-therefore ready to conclude this section by the following theorem:
+therefore ready to conclude this section by the following theorem
+and its consequences:
 
 \thmn{Q-heaps, Fredman and Willard \cite{fw:transdich}}\id{qh}%
 Let $W$ and~$k$ be positive integers such that $k=\O(W^{1/4})$. Let~$Q$
@@ -957,7 +958,7 @@ be a~Q-heap of at most $k$-elements of $W$~bits each. Then the Q-heap
 operations \ref{qhfirst} to \ref{qhlast} on~$Q$ (insertion, deletion,
 search for a~given value and search for the $i$-th smallest element)
 run in constant time on a~Word-RAM with word size~$W$, after spending
-time $\O(2^{k^4})$ on the same RAM by precomputing of tables.
+time $\O(2^{k^4})$ on the same RAM on precomputing of tables.
 
 \proof
 Every operation on the Q-heap can be performed in a~constant number of
@@ -967,4 +968,36 @@ logarithms and bit extraction. All these can be calculated in constant time
 using the results of section \ref{bitsect} and Lemma \ref{qhxtract}.
 \qed
 
+\rem
+We can also use the Q-heaps as building blocks of more complex structures
+like Atomic heaps and AF-heaps (see once again \cite{fw:transdich}). We will
+show a~simpler but useful construction, sometimes called the \df{Q-heap tree.}
+Suppose we have a~Q-heap of capacity~$k$ and a~parameter $d\in{\bb N}^+$. We
+can build a~balanced $k$-ary tree of depth~$d$ such that its leaves contain
+a~given set and every internal vertex keeps the minimum value in the subtree
+rooted in it, together with a~Q-heap containing the values in all its sons.
+This allows the minimum to be extracted in constant time (it is placed in the root)
+and when any element is changed, it is sufficient to recalculate the values
+on the path from this element to the root, which takes $\O(d)$ Q-heap
+operations.
+
+\corn{Q-heap trees}\id{qhtree}%
+For every positive integer~$r$ and $\delta>0$ there exists a~data structure
+capable of maintaining the minimum of a~set of at most~$r$ word-sized numbers
+under insertions and deletions. Each operation takes $\O(1)$ time on a~Word-RAM
+with word size at least $r^{\delta}$, after spending time
+$\O(2^{r^\delta})$ on precomputing of tables.
+
+\proof
+Choose $\delta' \le \delta$ such that $r^{\delta'} = \O(W^{1/4})$, where $W$ is the word size of the RAM.
+Build a~Q-heap tree of depth $d=\lceil 1/\delta'\rceil$ containing Q-heaps of size $k=r^{\delta'}$.
+Such a~tree accommodates $k^d \ge r$ elements and every operation costs $\O(d)=\O(1)$ Q-heap operations.
+\qed
+
+\rem\id{qhtreerem}%
+When we have an~algorithm with input of size~$N$, the word size is at least~$\log N$
+and we can spend time $\O(N)$ on preprocessing, so we can choose $r=\log N$ and
+$\delta=1$ in the above corollary and get a~heap of size $\log N$ working in
+constant time per operation.
+
 \endpart
-- 
2.39.2
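
Below are a few illustrative sketches to go with the constructions in this patch.
They are not meant for the thesis source; the Python snippets just restate the
constructions in executable form, with naive stand-ins for the parts (the
bit-extraction decoder, the Q-heaps) whose whole point is their O(1) word-RAM
implementation, and all function and class names are made up for the examples.

The first sketch spells out what the operation x[B] of the bit-extraction lemma
computes: the bits of the word x sitting at the positions of B, packed together
from the lowest position up. The lemma's precomputed decoder achieves this in
O(1) word operations; the loop below is merely the O(|B|) definition it has to
match.

    # Reference semantics for x[B]: gather the bits of x at the positions in B.
    def extract_bits(x: int, B: list[int]) -> int:
        r = 0
        for i, b in enumerate(sorted(B)):
            r |= ((x >> b) & 1) << i
        return r

    # Bits 2, 3 and 5 of 0b101100 are 1, 1 and 1.
    assert extract_bits(0b101100, [2, 3, 5]) == 0b111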
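
The second sketch mirrors the Q-heap tree described in the new remark: a balanced
k-ary tree of depth d whose leaves hold the maintained values and whose internal
vertices each keep the minimum of their subtree. In the real structure every
internal vertex also carries a Q-heap over its sons, so fixing one vertex costs
O(1); a plain min() over the k sons stands in for that here, which is enough to
show the O(d)-long update path and the constant-time minimum in the root.

    INF = float("inf")                  # empty slots hold "infinity"

    class QHeapTree:
        """Balanced k-ary tree of depth d; the leaves hold up to k**d values,
        every vertex above them keeps the minimum of its subtree."""

        def __init__(self, k: int, d: int):
            self.k = k
            self.internal = (k ** d - 1) // (k - 1)   # vertices above the leaves
            self.node = [INF] * (self.internal + k ** d)

        def update(self, i: int, value=INF):
            """Set leaf slot i (INF = delete) and fix the minima on the path to
            the root: O(d) vertex recomputations, each O(1) with a real Q-heap."""
            u = self.internal + i
            self.node[u] = value
            while u > 0:
                u = (u - 1) // self.k                 # parent in the implicit tree
                sons = self.node[u * self.k + 1 : u * self.k + self.k + 1]
                self.node[u] = min(sons)              # stand-in for the Q-heap query

        def minimum(self):
            return self.node[0]                       # the minimum sits in the root

    t = QHeapTree(k=4, d=3)              # room for 4**3 = 64 values
    t.update(0, 42); t.update(5, 7); t.update(9, 19)
    assert t.minimum() == 7
    t.update(5)                          # delete the 7
    assert t.minimum() == 19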
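
The last sketch shows the rough shape of the first phase of the Iterated Jarnik's
algorithm as modified in the new theorem: Jarnik (Prim) trees are grown one at a
time, and a tree stops growing as soon as its neighbourhood structure would hold
more than t = log n vertices or it runs into an earlier tree. Python's heapq
plays the role of the Q-heap tree; the point of the theorem is that with a
Q-heap tree of capacity log n every heap operation costs O(1), so the phase
takes O(m). The graph encoding and the function name are ad hoc, and the edge
weights are assumed to be distinct.

    import heapq

    def jarnik_phase(adj, t):
        """One size-bounded Jarnik phase.  adj maps each vertex to a list of
        (weight, neighbour) pairs (undirected graph, both directions listed,
        distinct weights); t is the capacity bound, log n in the theorem.
        Returns the edges picked in this phase and a fragment label per vertex."""
        fragment = {v: None for v in adj}
        chosen = []
        for root in adj:
            if fragment[root] is not None:
                continue                   # already swallowed by an earlier tree
            fragment[root] = root
            best = {}                      # lightest known edge to each outside vertex
            heap = []                      # entries (weight, outside vertex, inside endpoint)

            def relax(u):
                for w, v in adj[u]:
                    if fragment[v] != root and w < best.get(v, (float("inf"),))[0]:
                        best[v] = (w, u)
                        heapq.heappush(heap, (w, v, u))

            relax(root)
            while heap and len(best) <= t: # stop once the neighbourhood outgrows t
                w, v, u = heapq.heappop(heap)
                if best.get(v, (None,))[0] != w:
                    continue               # stale heap entry
                del best[v]
                chosen.append((u, v, w))   # lightest edge leaving the tree: an MST edge
                if fragment[v] is not None:
                    break                  # connected to an earlier tree: stop growing
                fragment[v] = root
                relax(v)
        return chosen, fragment

    example = {
        1: [(4, 2), (1, 3)],
        2: [(4, 1), (2, 3), (7, 4)],
        3: [(1, 1), (2, 2), (5, 4)],
        4: [(7, 2), (5, 3)],
    }
    edges, _ = jarnik_phase(example, t=2)                # t = log2(4) = 2
    assert sorted(w for _, _, w in edges) == [1, 2, 5]   # MST weights of the example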