\section{Special cases and related problems}
Towards the end of our story of minimum spanning trees, we will now focus our attention
on various special cases of the MST problem and also on several related problems that
frequently arise in practice.
\paran{Graphs with sorted edges}\id{sortededges}%
When the edges of the given graph are already sorted by their weights, we can use Kruskal's
algorithm to find the MST in time $\O(m\alpha(n))$ (Theorem \ref{kruskal}).
We can however do better: As the minimality of a~spanning tree depends only on the
order of weights and not on the actual values (the Minimality Theorem, \ref{mstthm}), we can
renumber the weights to $1, \ldots, m$ and find the MST using the Fredman-Willard
algorithm for integer weights. According to Theorem \ref{intmst}, it runs in
time $\O(m)$.
\paran{Graphs with floating-point weights}%
A~similar trick works when the edge weights are IEEE floating-point numbers:
non-negative IEEE numbers compare in the same order as their bit patterns
interpreted as integers, so we can once again replace the edge weights by integers
and use the linear-time
integer algorithm. While the other FP representations (see \cite{dgoldberg:fp} for
an~overview) need not have this property, the corresponding integers can be adjusted
in $\O(1)$ time to the format we need using bit masking. (More advanced tricks of this type have been
employed by Thorup \cite{thorup:floatint} to extend his linear-time algorithm
for single-source shortest paths to FP edge lengths.)
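The comparison trick for IEEE doubles can be sketched as follows. This is a~minimal
Python sketch assuming non-negative weights; handling negative numbers would
additionally require flipping the sign bit. The function name is illustrative:

```python
import struct

def fp_key(w: float) -> int:
    """Map a non-negative IEEE-754 double to an integer whose
    (unsigned) order matches the order of the original doubles.
    Sketch only: negative weights would need extra treatment
    of the sign bit."""
    # Reinterpret the 8-byte bit pattern of the double as an integer.
    (bits,) = struct.unpack("<Q", struct.pack("<d", w))
    return bits

assert fp_key(1.5) < fp_key(2.0)
assert fp_key(0.0) < fp_key(1e-300)
```

Sorting the edges by `fp_key` then gives the same order as sorting by the
floating-point weights themselves, so the integer MST algorithm applies directly.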
\paran{Graphs with bounded degrees}
new~$G'$ by contracting the path, the rest follows from the Contraction lemma
(\ref{contlemma}).
This step can be carried out in time $\O(d)$. It replaces a high-degree
vertex by vertices of degree~3 and does not change the degrees of any other
vertices, so the whole procedure stops after at most~$n$ such
steps and takes time $\O(\sum_{v\in V}\deg(v)) = \O(m)$, including the
time needed to find the high-degree vertices at the beginning.
\qed
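The splitting step admits a~short sketch. This is an illustration under the
construction used above: the new path edges receive weight $-\infty$ (a~value
below every real weight), so every minimum spanning tree contains them and
contracting the path recovers an MST of the original graph, as the Contraction
lemma requires. All names are illustrative:

```python
# Sketch: a vertex v of degree d > 3 is replaced by a path of d new
# vertices, each inheriting one of v's original edges; the path edges
# get a weight below all real weights, so every MST contains them.
NEG_INF = float("-inf")

def reduce_degree(edges, v, fresh):
    """edges: list of (u, w, weight) triples; fresh: iterator of
    unused vertex ids. Returns a new edge list with v split."""
    out, incident = [], []
    for (a, b, wt) in edges:
        if v in (a, b):
            incident.append((a if b == v else b, wt))
        else:
            out.append((a, b, wt))
    path = [next(fresh) for _ in incident]
    for (u, wt), p in zip(incident, path):
        out.append((p, u, wt))          # inherited original edge
    for p, q in zip(path, path[1:]):
        out.append((p, q, NEG_INF))     # forced path edge
    return out
```

Each new vertex has one inherited edge and at most two path edges, so all new
degrees are at most~3, matching the bound used in the proof.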
There is a~more efficient method based on the observation that the MST
is always a~subgraph of the Delaunay triangulation of the given points
(this was first noted by Shamos and Hoey \cite{shamos:closest}). The
triangulation is a~planar graph, which guarantees that it has $\O(n)$ edges,
and it is the dual graph of the Voronoi diagram of the given points, which can
be constructed in time $\O(n\log n)$ using for example Fortune's algorithm.
We can also define it in terms of graphs:
\defn A~\df{Steiner tree} of a~weighted graph~$(G,w)$ with a~set~$M\subseteq V$
of \df{mandatory vertices} is a~tree~$T\subseteq G$ that contains all the mandatory
vertices and whose weight is the minimum possible.
When $M=V$, the Steiner tree is identical to the MST, but if we allow an~arbitrary
choice of the mandatory vertices, the problem becomes NP-hard. This has been proven
by Garey and Johnson \cite{garey:steiner,garey:rectisteiner} not only for the graph
version with weights $\{1,2\}$, but also for the planar version with Euclidean or
rectilinear metric. There is a~polynomial-time approximation algorithm with ratio
$5/3$ for graphs due to Pr\"omel and Steger \cite{proemel:steiner} and a~polynomial-time
approximation scheme for the Euclidean Steiner tree in an~arbitrary dimension
by Arora \cite{arora:tspapx}.
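For comparison, the folklore $2$-approximation, which takes the MST of the metric
closure of the mandatory vertices, is simple enough to sketch. This is not the
$5/3$ algorithm of Pr\"omel and Steger; the sketch below only returns the weight
of the closure MST, which is at most twice the optimum, and the names are
illustrative:

```python
import heapq
import itertools

def steiner_2approx(n, edges, mandatory):
    """Weight of the MST of the metric closure on the mandatory
    vertices; a classical 2-approximation of the Steiner tree."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dijkstra(s):                      # shortest paths from s
        dist = [float("inf")] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    dist = {s: dijkstra(s) for s in mandatory}
    closure = sorted((dist[a][b], a, b)
                     for a, b in itertools.combinations(mandatory, 2))
    parent = {v: v for v in mandatory}    # union-find for Kruskal

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, a, b in closure:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            total += w
    return total
```

For $M=V$ the metric closure changes nothing essential and the sketch returns
the exact MST weight, in accordance with the remark above.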
the weight of the MST of a~graph with average degree~$d$ and edge weights from the set
$\{1,\ldots,w\}$ in time $\O(dw\varepsilon^{-2}\cdot\log(dw/\varepsilon))$,
producing a~weight which has relative error at most~$\varepsilon$ with probability
at least $3/4$. They have also proven an~almost matching lower bound of $\Omega(dw\varepsilon^{-2})$.
For the $d$-dimensional Euclidean case, there is a~randomized approximation
algorithm by Czumaj et al.~\cite{czumaj:euclidean} which with high probability
Moret and Shapiro \cite{moret:expmst} have conducted an~extensive experimental
study of performance of implementations of various MST algorithms on different
computers. They have tested the algorithms on several sets of graphs, varying
in the number of vertices (up to millions) and in edge density (from constant to
$n^{1/2}$). In almost all tests, the winner was an~ordinary Prim's algorithm
implemented with pairing heaps \cite{fredman:pairingheap}. The pairing heaps
are known to perform surprisingly well in practice, but they still elude attempts
at complete theoretical analysis. So far the best results are those of Pettie
\cite{pettie:pairing}, who has proven that deletion of the minimum takes $\O(\log n)$
time and all other operations take $\O(2^{2\scriptscriptstyle\sqrt{\log\log n}})$ time; both bounds are amortized.
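For reference, the winning combination, Prim's algorithm driven by a~priority
queue, can be sketched compactly. Python's built-in binary heap stands in here
for the pairing heap used in the experiments; the structure of the algorithm is
the same, only the heap bounds differ. Names are illustrative:

```python
import heapq

def prim_mst_weight(adj, root=0):
    """adj[v] = list of (u, w) pairs for an undirected graph.
    Lazy Prim: stale heap entries are skipped on extraction."""
    seen = {root}
    heap = [(w, u) for u, w in adj[root]]
    heapq.heapify(heap)
    total = 0
    while heap and len(seen) < len(adj):
        w, u = heapq.heappop(heap)
        if u in seen:
            continue                      # stale entry, skip
        seen.add(u)
        total += w
        for x, wx in adj[u]:
            if x not in seen:
                heapq.heappush(heap, (wx, x))
    return total
```

A~pairing-heap implementation would replace the lazy deletions by genuine
decrease-key operations, which is where its good practical performance shows.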
Moret's study however completely misses variants of Bor\r{u}vka's algorithm,
many of which have very promising characteristics, especially for planar graphs.
\paran{Parallel algorithms}
Most of the early parallel algorithms for the MST are variants of Bor\r{u}vka's algorithm.
The operations on the individual trees are independent of each other, so they can be carried out in parallel.
There are $\O(\log n)$ steps and each of them can be executed in $\O(\log n)$ parallel time using
standard PRAM techniques (see \cite{jaja:parallel} for the description of the model).
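The structure that makes the parallelization possible is visible already in
a~sequential sketch of one Bor\r{u}vka step: each component selects its cheapest
outgoing edge independently of the others, so in the PRAM setting the selections
can run concurrently. The sketch assumes distinct edge weights, as usual for
Bor\r{u}vka's algorithm; names and representation are illustrative:

```python
def boruvka_step(n, edges, comp):
    """One Boruvka step. comp[v] = component id of vertex v (ids lie
    in range(n)); returns the selected edges and new component ids."""
    cheapest = {}
    for e in edges:                       # each component, independently,
        u, v, w = e                       # finds its cheapest outgoing edge
        cu, cv = comp[u], comp[v]
        if cu == cv:
            continue
        for c in (cu, cv):
            if c not in cheapest or w < cheapest[c][2]:
                cheapest[c] = e
    selected = set(cheapest.values())
    # Merge components along the selected edges (a real implementation
    # would use union-find or pointer jumping on the PRAM).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, w in selected:
        parent[find(comp[u])] = find(comp[v])
    new_comp = [find(comp[v]) for v in range(n)]
    return selected, new_comp
```

Each step at least halves the number of components, which gives the $\O(\log n)$
bound on the number of steps mentioned above.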