year = "2000"
}
+@inproceedings{ chazelle:almostacker,
+ title={{A faster deterministic algorithm for minimum spanning trees}},
+ author={Chazelle, B.},
+ booktitle={Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS'97)},
+ pages={22},
+ year={1997},
+ publisher={IEEE Computer Society, Washington, DC, USA}
+}
+
@article{ nesetril:history,
author = "Jaroslav Ne{\v{s}}et{\v{r}}il",
title = "{Some remarks on the history of MST-problem}",
url = "citeseer.ist.psu.edu/dixon92verification.html"
}
-inproceedings{ pettie:minirand,
+@inproceedings{ pettie:minirand,
author = {Seth Pettie and Vijaya Ramachandran},
title = {Minimizing randomness in minimum spanning tree, parallel connectivity, and set maxima algorithms},
booktitle = {SODA '02: Proceedings of the thirteenth annual ACM-SIAM Symposium on Discrete Algorithms},
corrupted edges in the neighborhood of these subgraphs.
\endalgo
-\thmn{Partitioning to contractible subgraphs, Pettie \cite{pettie:optimal}}\id{partthm}%
+\thmn{Partitioning to contractible subgraphs, Pettie and Ramachandran \cite{pettie:optimal}}\id{partthm}%
Given a~weighted graph~$G$ and parameters $\varepsilon$ ($0<\varepsilon\le 1/2$)
and~$t$, the Partition algorithm \ref{partition} constructs a~collection
$\C=\{C_1,\ldots,C_k\}$ of subgraphs and a~set~$R^\C$ of corrupted edges such that:
\section{Decision trees}
-The Pettie's algorithm combines the idea of robust partitioning with optimal decision
+The algorithm of Pettie and Ramachandran combines the idea of robust partitioning with optimal decision
trees constructed by brute force for very small subgraphs. In this section, we will
explain the basics of decision trees and prove several lemmata which will
turn out to be useful for the analysis of the time complexity of the whole algorithm.
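+
+As a~toy illustration, consider a~triangle with edges $e_1,e_2,e_3$ and distinct weights.
+Its MST consists of all edges except the heaviest one, so a~decision tree for it must
+distinguish between three possible answers, which forces depth at least
+$\lceil\log_2 3\rceil=2$. On the other hand, two comparisons always suffice to identify
+the heaviest edge, so depth~2 is optimal here.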
\section{An optimal algorithm}
Once we have developed the soft heaps, partitioning and MST decision trees,
-it is now simple to state the Pettie's MST algorithm \cite{pettie:optimal}
+it is now simple to state the MST algorithm of Pettie and Ramachandran \cite{pettie:optimal}
and prove that it is asymptotically optimal among all MST algorithms in
comparison-based models. Several standard MST algorithms from the previous
chapters will play their roles.
of the Contractive Bor\o{u}vka's algorithm. The resulting graph will have both the number of
vertices and the number of edges reduced by a~constant factor, and we recurse on it.
-\algn{Optimal MST algorithm, Pettie \cite{pettie:optimal}}\id{optimal}%
+\algn{Optimal MST algorithm, Pettie and Ramachandran \cite{pettie:optimal}}\id{optimal}%
\algo
\algin A~connected graph~$G$ with an~edge comparison oracle.
\:If $G$ has no edges, return an~empty tree.
corrupted. The iterations of Bor\o{u}vka's algorithm on~$G_B$ take $\O(m)$
time by Lemma \ref{boruvkaiter} and they produce a~graph~$G_C$ with at most~$n/4$
vertices and at most $n/4 + m/4 \le m/2$ edges. (The $n$~tree edges in~$G_B$ are guaranteed
-to be reduced by the Bor\o{u}vka's algorithm.)
+to be reduced by Bor\o{u}vka's algorithm.) It is easy to verify that this
+graph is still connected, so we can recurse on it.
The remaining steps of the algorithm can be easily performed in linear time, either directly
or, in the case of the contractions, by the bucket-sorting techniques of Section \ref{bucketsort}.
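+
+To see why the constant-factor reduction is the crucial point, consider the following
+back-of-the-envelope calculation (only a~sketch; the precise recurrence is analysed below):
+if every level of the recursion reduced the size of the graph from $(m,n)$ to at most
+$(m/2,n/4)$ and cost time linear in the current size, the total time over all levels would
+be bounded by a~geometric series:
+$$\sum_{i\ge 0} \left( {m\over 2^i} + {n\over 4^i} \right) \le 2m + {4\over 3}\cdot n = \O(m+n).$$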
section, will help us show that the time complexity recurrence is satisfied by the
decision tree complexity $D(m,n)$ itself. This way, we prove the following theorem:
-\thmn{Optimality of Pettie's algorithm}
+\thmn{Optimality of the Optimal algorithm}
The time complexity of the Optimal MST algorithm \ref{optimal} is $\Theta(D(m,n))$.
\proof
the time complexity of every comparison-based algorithm.
\qed
-\FIXME{Ackermann}
+\paran{Complexity}
+As we have already noted, the exact decision tree complexity $D(m,n)$ of the MST problem
+is still open, and therefore so is the time complexity of the optimal algorithm. However,
+every time we come up with another comparison-based algorithm, we can use its complexity
+(or, more precisely, the number of comparisons it performs, which can be even smaller)
+as an~upper bound for the optimal algorithm.
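+
+Formally, this rests on a~simple observation: running a~comparison-based algorithm~$A$
+on a~fixed graph~$G$ and recording the comparisons it makes yields a~decision tree
+for~$G$ whose depth is the worst-case number of comparisons of~$A$ on~$G$. Hence
+$$D(m,n) \le \max_G\, (\hbox{number of comparisons performed by $A$ on $G$}),$$
+where the maximum is taken over all graphs~$G$ with $m$~edges and $n$~vertices.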
+
+The best explicit comparison-based algorithm known to date achieves complexity $\O(m\timesalpha(m,n))$.
+It was discovered by Chazelle \cite{chazelle:ackermann} as an~improvement of his earlier
+$\O(m\timesalpha(m,n)\cdot\log\alpha(m,n))$ algorithm \cite{chazelle:almostacker}.
+It is also based on the ideas of this section --- in particular the soft heaps and robust contractions.
+The algorithm is unfortunately very complex and involves a~lot of elaborate
+technical details, so we only refer to the original paper here. Another algorithm of the same
+complexity, based on similar ideas, was discovered independently by Pettie \cite{pettie:ackermann}, who,
+having the optimal algorithm at hand, could dispense with the low-level details and only
+bound the number of comparisons. Using either of these results, we can prove an~Ackermannian
+upper bound on the optimal algorithm:
+
+\thmn{Upper bound on complexity of the Optimal algorithm}
+The time complexity of the Optimal MST algorithm is bounded by $\O(m\timesalpha(m,n))$.
-\FIXME{Consequences for models}
+\proof
+Bound $D(m,n)$ by the number of comparisons performed by the algorithms of Chazelle \cite{chazelle:ackermann}
+or Pettie \cite{pettie:ackermann}.
+\qed
+
+\rem
+It is also known from \cite{pettie:optimal} that when we run the Optimal algorithm on a~random
+graph drawn from either $G_{n,p}$ (random graphs on~$n$ vertices, where each edge is included with probability~$p$
+independently of the other edges) or $G_{n,m}$ (graphs drawn uniformly at random from the
+set of all graphs with~$n$ vertices and $m$~edges), it runs in linear time with high probability,
+regardless of the edge weights.
+
+\paran{Models of computation}
+Another important consequence of the optimal algorithm is that when we aim for a~linear-time
+MST algorithm (or for proving that it does not exist), we do not need to care about computational
+models at all. The elaborate RAM data structures of Chapter \ref{ramchap}, which have helped us
+so much in the case of integer weights, cannot help if we are allowed to access the edge weights
+by performing comparisons only. We can even make use of non-uniform objects given by some sort
+of oracle. Indeed, whatever trick we employ to achieve linear time complexity, we can mimic it in the
+world of decision trees, and thus use it to show that the algorithm we already have is
+also linear.
+
+This, however, applies only to deterministic algorithms --- we already know that access to a~source
+of random bits allows us to compute the MST in expected linear time (the KKT algorithm, \ref{kkt}).
+There were attempts to derandomize the KKT algorithm, but so far the best result in this direction
+is a~randomized algorithm by Pettie and Ramachandran \cite{pettie:minirand}, which achieves expected
+linear time complexity using only $\O(\log^* n)$ random bits.
-\FIXME{Random graphs}
\endpart