\qed
\rem
-Minor-closed classes share many other interesting properties, as shown for
-example by Theorem 6.1 of \cite{nesetril:minors}.
+Minor-closed classes share many other interesting properties, for example bounded chromatic
+numbers of various kinds, as shown by Theorem 6.1 of \cite{nesetril:minors}.
+
+Let us return to the analysis of our algorithm.
\thmn{MST on minor-closed classes \cite{mm:mst}}\id{mstmcc}%
For any fixed non-trivial minor-closed class~$\cal C$ of graphs, the Contractive Bor\o{u}vka's
So they also belong to~$\cal C$ and by the previous theorem $m_i\le \varrho({\cal C})\cdot n_i$.
\qed
-\rem\id{nobatch}%
+\paran{Local contractions}\id{nobatch}%
The contractive algorithm uses ``batch processing'' to perform many contractions
in a single step. It is also possible to perform contractions one edge at a~time,
batching only the flattenings. A~contraction of an edge~$uv$ can be done
${\rm Pr}[X > 4\varrho] < 1/2$, so for at least $n/2$ vertices~$v$ we have
$\deg(v)\le 4\varrho$.
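(To spell out the probabilistic step behind the displayed bound: assuming, as the preceding
argument suggests, that $X$ denotes the degree of a~uniformly random vertex of a~graph with
$m\le\varrho n$ edges, we have ${\rm E}[X] = 2m/n \le 2\varrho$, and Markov's inequality yields
$${\rm Pr}[X > 4\varrho] \le {{\rm E}[X] \over 4\varrho} \le {2\varrho \over 4\varrho} = {1\over 2}.$$
The strict inequality then follows from the integrality of the degrees.)
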
-\algn{Local Bor\o{u}vka's Algorithm \cite{mm:mst}}%
+\algn{Local Bor\o{u}vka's Algorithm, Mare\v{s} \cite{mm:mst}}%
\algo
\algin A~graph~$G$ with an edge comparison oracle and a~parameter~$t\in{\bb N}$.
\:$T\=\emptyset$.
The whole algorithm therefore runs in time $\O(\sum_i m_i) = \O(\sum_i n/2^i) = \O(n)$.
\qed
-\rem
+\paran{Back to planar graphs}%
For planar graphs, we can get a sharper version of the low-degree lemma,
showing that the algorithm works with $t=8$ as well (we had $t=12$ as
$\varrho=3$). While this does not change the asymptotic time complexity
\figure{hexangle.eps}{\epsfxsize}{The construction from Remark~\ref{hexa}}
\rem
-The observation in~Theorem~\ref{mstmcc} was also made by Gustedt in~\cite{gustedt:parallel},
+The observation in~Theorem~\ref{mstmcc} was also independently made by Gustedt in~\cite{gustedt:parallel},
who studied a~parallel version of the contractive Bor\o{u}vka's algorithm applied
to minor-closed classes.
\cor
For graphs with edge density at least $\log n$, this algorithm runs in linear time.
-\rem
+\remn{Other heaps}%
We can consider using other kinds of heaps that have the property that inserts
and decreases are faster than deletes. Of course, the Fibonacci heaps are asymptotically
optimal (by the standard $\Omega(n\log n)$ lower bound on sorting by comparisons, see
and $\O(\log\log n)$ time \<DeleteMin>. (We will however omit the details since we will
show a~faster integer algorithm soon.)
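As a~concrete illustration of this trade-off (a~Python sketch of ours, not one of the structures
cited above): a~$d$-ary heap performs \<Insert> and \<Decrease> in $\O(\log_d n)$ comparisons,
while \<DeleteMin> costs $\O(d\log_d n)$, so plugging it into Jarn\'\i{}k's algorithm with
$d\approx\max(2,m/n)$ already gives $\O(m\log_{m/n}n)$ time.

  class DHeap:
      """A d-ary min-heap with Insert, Decrease and DeleteMin (illustration only)."""
      def __init__(self, d=4):
          self.d = d
          self.a = []          # array of (key, item) pairs
          self.pos = {}        # item -> index in self.a

      def _swap(self, i, j):
          self.a[i], self.a[j] = self.a[j], self.a[i]
          self.pos[self.a[i][1]] = i
          self.pos[self.a[j][1]] = j

      def _up(self, i):
          while i > 0:
              p = (i - 1) // self.d
              if self.a[p][0] <= self.a[i][0]:
                  break
              self._swap(i, p)
              i = p

      def _down(self, i):
          n = len(self.a)
          while True:
              first = self.d * i + 1
              if first >= n:
                  break
              c = min(range(first, min(first + self.d, n)),
                      key=lambda j: self.a[j][0])
              if self.a[c][0] >= self.a[i][0]:
                  break
              self._swap(i, c)
              i = c

      def insert(self, key, item):
          self.a.append((key, item))
          self.pos[item] = len(self.a) - 1
          self._up(len(self.a) - 1)

      def decrease(self, item, key):
          i = self.pos[item]
          assert key <= self.a[i][0]
          self.a[i] = (key, item)
          self._up(i)

      def delete_min(self):
          key, item = self.a[0]
          self._swap(0, len(self.a) - 1)
          self.a.pop()
          del self.pos[item]
          if self.a:
              self._down(0)
          return key, item
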
-\para
+\paran{Combining MST algorithms}%
As we already noted, the improved Jarn\'\i{}k's algorithm runs in linear time
for sufficiently dense graphs. In some cases, it is useful to combine it with
another MST algorithm, which identifies a~part of the MST edges and contracts
and both trees can be combined in linear time, too.
\qed
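For illustration, here is a~simplified Python sketch of the contract-then-finish idea (our own
code, not the exact combination analysed above): a~few contractive Bor\o{u}vka phases shrink the
number of vertices and collect some MST edges, and Jarn\'\i{}k's (Prim's) algorithm with a~binary
heap finishes on the contracted graph. The sketch assumes a~connected input graph and breaks
weight ties by edge index.

  import heapq

  def boruvka_phases(n, edges, phases):
      """edges: list of (weight, u, v) on vertices 0..n-1.  Performs the given
         number of contractive phases; returns the component of every vertex
         and the indices of the edges selected for the spanning forest."""
      comp, picked = list(range(n)), []
      for _ in range(phases):
          best = {}                    # component -> index of its cheapest outgoing edge
          for i, (w, u, v) in enumerate(edges):
              cu, cv = comp[u], comp[v]
              if cu == cv:
                  continue
              for c in (cu, cv):
                  if c not in best or (w, i) < (edges[best[c]][0], best[c]):
                      best[c] = i      # ties broken by edge index
          if not best:
              break
          parent = {c: c for c in set(comp)}
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          for i in set(best.values()):
              _, u, v = edges[i]
              ru, rv = find(comp[u]), find(comp[v])
              if ru != rv:
                  parent[ru] = rv
                  picked.append(i)
          comp = [find(c) for c in comp]
      return comp, picked

  def jarnik(adj, start):
      """Jarnik's (Prim's) algorithm with a lazy binary heap; adj: {v: [(w, u), ...]}.
         Returns the weight of a spanning tree of the (connected) graph adj."""
      in_tree, total, heap = {start}, 0, list(adj[start])
      heapq.heapify(heap)
      while heap and len(in_tree) < len(adj):
          w, v = heapq.heappop(heap)
          if v in in_tree:
              continue
          in_tree.add(v)
          total += w
          for e in adj[v]:
              heapq.heappush(heap, e)
      return total

  def combined_mst_weight(n, edges, phases=2):
      comp, picked = boruvka_phases(n, edges, phases)
      total = sum(edges[i][0] for i in picked)
      adj = {}                         # the contracted multigraph
      for w, u, v in edges:
          cu, cv = comp[u], comp[v]
          if cu != cv:
              adj.setdefault(cu, []).append((w, cv))
              adj.setdefault(cv, []).append((w, cu))
      return total + (jarnik(adj, next(iter(adj))) if adj else 0)
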
-\para
+\paran{Iterating Jarn\'\i{}k's algorithm}%
Actually, there is a~much better choice of the algorithms to combine: use the
Active Edge Jarn\'\i{}k's algorithm multiple times, each time stopping after a~while.
A~good choice of the stopping condition is to place a~limit on the size of the heap.
Given a~weighted tree~$T$ and a~set of \df{query paths} $Q \subseteq \{ T[u,v] ; u,v\in V(T) \}$
specified by their endpoints, find the heaviest edge (\df{peak}) for every path in~$Q$.
-\para
+\paran{Bor\o{u}vka trees}%
-Finding the peaks can be burdensome if the tree~$T$ is degenerated,
+Finding the peaks can be burdensome if the tree~$T$ is degenerate,
so we will first reduce it to the same problem on a~balanced tree. We run
the Bor\o{u}vka's algorithm on~$T$, which certainly produces $T$ itself, and we
The peak of every original query path is then the heavier of the peaks of its halves.
\qed
-\para
+\paran{Bounding comparisons}%
We will now describe a~simple variant of the depth-first search which finds the
peaks of all query paths of the transformed problem. As we promised,
we will take care of the number of comparisons only, as long as all other operations
To achieve the linear time complexity, the algorithm needs a very careful implementation,
but we defer the technical details to section~\ref{bucketsort}.
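To make the idea concrete, here is a~simplified Python sketch of such a~depth-first search
(our own code; it answers each query by a~binary search on a~stack of suffix maxima and does
not achieve the comparison bound analysed in the text). It assumes the transformed queries
are given as (ancestor, descendant) pairs in a~rooted tree whose edge weights are stored at
the child endpoints.

  import bisect, sys

  def tree_path_peaks(parent, weight, queries):
      """parent[v] is the parent of v (parent[root] == -1) and weight[v] the
         weight of the edge between v and parent[v].  Every query is a pair
         (a, v) with a an ancestor of v; we return the maximum edge weight
         on the path from a to v (None for the empty path a == v)."""
      n = len(parent)
      root, children = 0, [[] for _ in range(n)]
      for v, p in enumerate(parent):
          if p < 0:
              root = v
          else:
              children[p].append(v)
      by_vertex = [[] for _ in range(n)]
      for qi, (a, v) in enumerate(queries):
          by_vertex[v].append((a, qi))
      answer, depth_of = [None] * len(queries), [0] * n
      stack_depths, stack_weights = [], []     # weights strictly decrease with depth
      sys.setrecursionlimit(n + 1000)

      def dfs(v, depth):
          depth_of[v] = depth
          for a, qi in by_vertex[v]:
              # the peak of a->v is the first suffix maximum deeper than a
              i = bisect.bisect_right(stack_depths, depth_of[a])
              answer[qi] = stack_weights[i] if i < len(stack_weights) else None
          for c in children[v]:
              popped = []
              while stack_weights and stack_weights[-1] <= weight[c]:
                  popped.append((stack_depths.pop(), stack_weights.pop()))
              stack_depths.append(depth + 1)
              stack_weights.append(weight[c])
              dfs(c, depth + 1)
              stack_depths.pop()
              stack_weights.pop()
              for d, w in reversed(popped):    # restore the stack for the next child
                  stack_depths.append(d)
                  stack_weights.append(w)

      dfs(root, 0)
      return answer
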
-\para
+\paran{General contractions}%
Graph contractions are indeed a~very powerful tool and they can be used in other MST
algorithms as well. The following lemma shows the gist:
\rem
In the previous algorithm, the role of the mapping~$\pi^{-1}$ is of course played by the edge labels~$\ell$.
-\para
+\paran{A~lower bound}%
Finally, we will show a family of graphs where the $\O(m\log n)$ bound on time complexity
is tight. The graphs do not have unique weights, but they are constructed in a way that
the algorithm never compares two edges with the same weight. Therefore, when two such
This guarantees that $k\le W^{1/4}$ and $\O(2^{k^4}) = \O(N)$. Let us however
remark that the whole construction is primarily of theoretical importance
and that the huge constants involved everywhere make these heaps useless
-in practical algorithms. Many of the tricks used however prove themselves
-useful even in real-life implementations.
+in practical algorithms. Despite this, many of the tricks we develop have proven
+themselves useful even in real-life implementations.
-Spending the time on reprocessing makes it possible to precompute tables for
+Spending the time on preprocessing makes it possible to precompute tables for
almost arbitrary functions and then assume that they can be evaluated in
to observe that $2^{\O(k^3)}\cdot \poly(k) = \O(2^{k^4})$.
\qed
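A~toy instance of the preprocessing principle mentioned above (ours, unrelated to the actual
tables of the Q-heaps): spend exponential time in~$k$ once to tabulate a~function of $k$-bit
arguments, then every later evaluation is a~constant number of table lookups.

  k = 16
  POPCOUNT = [bin(x).count("1") for x in range(1 << k)]   # one-time preprocessing

  def popcount32(x):
      """Evaluate the tabulated function on a 32-bit word by two lookups."""
      return POPCOUNT[x & 0xFFFF] + POPCOUNT[(x >> 16) & 0xFFFF]
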
-\para
+\paran{Tries and ranks}%
We will first show an~auxiliary construction based on tries and then derive
the real definition of the Q-heap from it.
except for the root and marked vertices.
-Whereever is a~directed path whose internal vertices have outdegree~1 and they carry
-no mark, we replace this path by a~single edge labeled with the contatenation
-of the original edge's labels.
+Wherever there is a~directed path whose internal vertices have outdegree~1 and carry
+no mark, we replace this path by a~single edge labeled with the concatenation
+of the original edges' labels.
In both kinds of tries, we order the outgoing edges of every vertex by their labels
lexicographically.
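For illustration, a~small Python sketch of the compressed trie (a~Patricia-style construction
of our own; the marks and the Q-heap specifics are omitted). The outgoing edges of every vertex
are ordered by their labels, so the leaves come out in lexicographic order of the keys.

  def compressed_trie(keys, pos=0):
      """keys: sorted list of distinct bit strings of equal length.
         Returns ('leaf', key) or ('node', branching_position, zero_subtrie, one_subtrie)."""
      if len(keys) == 1:
          return ('leaf', keys[0])
      b = pos
      while all(k[b] == keys[0][b] for k in keys):   # skip the non-branching positions
          b += 1
      zeros = [k for k in keys if k[b] == '0']
      ones  = [k for k in keys if k[b] == '1']
      return ('node', b, compressed_trie(zeros, b + 1), compressed_trie(ones, b + 1))

  def leaves(t):
      """In-order walk; yields the keys in lexicographic order."""
      if t[0] == 'leaf':
          yield t[1]
      else:
          yield from leaves(t[2])
          yield from leaves(t[3])

  # example: the trie of four 4-bit keys
  print(list(leaves(compressed_trie(['0010', '0111', '1000', '1001']))))
  # -> ['0010', '0111', '1000', '1001']
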
case, $x[b]=1$ and $x_j[b]=0$, so they are smaller.
\qed
-\para
+\paran{A~better representation}%
The preceding lemma shows that the rank can be computed in polynomial time, but
unfortunately the variables on which it depends are too large for a~table to
be efficiently precomputed. We will carefully choose an~equivalent representation
\algout The $i$-th smallest element in the heap.
\endalgo
-\para
+\paran{Extraction}%
The heap algorithms we have just described have been built from primitives
operating in constant time, with one notable exception: the extraction
$x[B]$ of all bits of~$x$ at positions specified by the set~$B$. This cannot be done
time using the results of section \ref{bitsect} and Lemma \ref{qhxtract}.
\qed
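For comparison, the extraction $x[B]$ written naively (our own sketch): gathering the bits one
by one takes time proportional to the size of~$B$, which is exactly what the word-parallel
tricks referred to in the proof avoid.

  def extract(x, B):
      """Return the bits of x at the positions in B, packed (in ascending
         order of position) into the low-order bits of the result."""
      result = 0
      for i, b in enumerate(sorted(B)):
          result |= ((x >> b) & 1) << i
      return result

  # example: take bits 1, 3 and 4 of 0b11010
  assert extract(0b11010, {1, 3, 4}) == 0b111
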
-\rem
+\paran{Combining Q-heaps}%
We can also use the Q-heaps as building blocks of more complex structures
like Atomic heaps and AF-heaps (see once again \cite{fw:transdich}). We will
show a~simpler, but useful construction, sometimes called the \df{Q-heap tree.}