From: Martin Mares
Date: Sun, 20 Apr 2008 16:41:48 +0000 (+0200)
Subject: Added mention of red and blue rules to the intro of opt.
X-Git-Tag: printed~63
X-Git-Url: http://mj.ucw.cz/gitweb/?a=commitdiff_plain;ds=sidebyside;h=1e0055d9c1ccd9dccba79f5bb64f4f558efc74e1;p=saga.git

Added mention of red and blue rules to the intro of opt.
---

diff --git a/adv.tex b/adv.tex
index cda98f9..dc4e70f 100644
--- a/adv.tex
+++ b/adv.tex
@@ -1228,14 +1228,14 @@ then $\sum_r n_r + m_r \le \sum_v 3n_v = \O(n)$. After adding the exceptional pa
 from the root, we get $\O(m+n)=\O(m)$.
 \qed
 
-\rem
+\paran{High probability}%
 There is also a~high-probability version of the above theorem. According to
 Karger, Klein and Tarjan \cite{karger:randomized}, the time complexity
 of the algorithm is $\O(m)$ with probability $1-\exp(-\Omega(m))$. The proof
 again follows the recursion tree and it involves applying the Chernoff bound
 \cite{chernoff} to bound the tail probabilities.
 
-\rem
+\paran{Different sampling}%
 We could also use a~slightly different formulation of the sampling lemma
 suggested by Chan \cite{chan:backward}. It changes the selection of the subgraph~$H$
 to choosing an~$mp$-edge subset of~$E(G)$ uniformly at random. The proof is then
@@ -1243,7 +1243,7 @@ a~straightforward application of the backward analysis method. We however
 prefer the Karger's original version, because generating a~random subset of
 a~given size requires an~unbounded number of random bits in the worst case.
 
-\rem
+\paran{On the Pointer Machine}%
 The only place where we needed the power of the RAM is finding the heavy edges,
 so we can employ the pointer-machine verification algorithm mentioned in
 \ref{pmverify} to bring the results of this section to the~PM.
diff --git a/biblio.bib b/biblio.bib
index cb317ab..3e75568 100644
--- a/biblio.bib
+++ b/biblio.bib
@@ -1494,3 +1494,17 @@
 number = {2},
 pages = {247--255},
 }
+
+@article{375847,
+ author = {Ka Wong Chong and Yijie Han and Tak Wah Lam},
+ title = {Concurrent threads and optimal parallel minimum spanning trees algorithm},
+ journal = {Journal of the ACM},
+ volume = {48},
+ number = {2},
+ year = {2001},
+ issn = {0004-5411},
+ pages = {297--323},
+ doi = {http://doi.acm.org/10.1145/375827.375847},
+ publisher = {ACM},
+ address = {New York, NY, USA},
+}
diff --git a/opt.tex b/opt.tex
index bb07164..bfcfdcf 100644
--- a/opt.tex
+++ b/opt.tex
@@ -6,10 +6,21 @@
 
 \section{Soft heaps}
 
+A~vast majority of the MST algorithms that we have encountered so far are based on
+Tarjan's Blue rule (Lemma \ref{bluelemma}). The rule serves to identify
+edges that belong to the MST, while all the other edges remain in the graph. This
+unfortunately means that the later stages of the computation spend most of
+their time on these useless edges. A~notable exception is the randomized
+algorithm of Karger, Klein and Tarjan. It adds an~important ingredient: it uses
+the Red rule (Lemma \ref{redlemma}) to filter out edges that are guaranteed to stay
+outside the MST, so that the graphs with which the algorithm works get smaller
+with time.
+
 Recently, Chazelle \cite{chazelle:ackermann} and Pettie \cite{pettie:ackermann}
-have presented algorithms for the MST with worst-case time complexity
+have presented new deterministic algorithms for the MST which are also based
+on a~combination of both rules. They achieve worst-case time complexity
 $\O(m\timesalpha(m,n))$ on the Pointer Machine. We will devote this chapter to their results
-and especially to another algorithm by Pettie and Ramachandran \cite{pettie:optimal},
+and especially to another algorithm by Pettie and Ramachandran \cite{pettie:optimal}
 which is provably optimal. At the very heart of all these algorithms lies the
 \df{soft heap} discovered by
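
To illustrate how the two rules cooperate, consider the classical Kruskal's
algorithm: when the edges are scanned in nondecreasing order of weight, an edge
whose endpoints are already connected is the heaviest edge on the cycle it
closes, so the Red rule discards it, while an edge joining two components is
the lightest edge crossing the cut around either component, so the Blue rule
keeps it. The following minimal Python sketch (not part of the patch above;
the names mst_blue_red and find are ours, chosen for illustration only)
demonstrates this single-pass combination of both rules:

    # Sketch of the Blue and Red rules cooperating in one pass:
    # Kruskal's algorithm with a union-find structure.

    def find(parent, v):
        # Find the representative of v's component, with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def mst_blue_red(n, edges):
        """edges: list of (weight, u, v) tuples, vertices numbered 0..n-1."""
        parent = list(range(n))
        blue = []                        # edges colored blue (kept in the MST)
        for w, u, v in sorted(edges):    # nondecreasing order of weight
            ru, rv = find(parent, u), find(parent, v)
            if ru == rv:
                continue                 # Red rule: heaviest edge on the cycle it closes
            parent[ru] = rv              # Blue rule: lightest edge across the cut
            blue.append((u, v, w))
        return blue

    # Example: a triangle 0-1-2 plus a pendant vertex 3.
    print(mst_blue_red(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
    # -> [(0, 1, 1), (1, 2, 2), (2, 3, 4)]; the edge (0,2) is discarded as red

Here Kruskal's algorithm merely serves as the simplest setting where both
colorings appear; the point of the chapter's algorithms is to apply the same
two rules far more efficiently than the sorting-based pass shown above.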