The contractive algorithm given in Section~\ref{contalg} performs well
on planar graphs, but its worst-case time complexity is not linear in general.
Can we find any broader class of graphs where this algorithm is still linear?
The right context turns out to be the minor-closed graph classes, which are
closed under contractions and have bounded density.
\thmn{Excluded minors, Robertson \& Seymour \cite{rs:wagner}}
For every non-trivial minor-closed graph class~$\cal C$ there exists
a~finite set~$\cal H$ of graphs such that ${\cal C}=\Forb({\cal H})$.
\proof
This theorem has been proven in a~long series of papers on graph minors
culminating with~\cite{rs:wagner}. See this paper and follow the references
to the previous articles in the series.
\qed
\para
Let us now return to the analysis of our algorithm.
\thmn{MST on minor-closed classes, Mare\v{s} \cite{mm:mst}}\id{mstmcc}%
For any fixed non-trivial minor-closed class~$\cal C$ of graphs, the Contractive Bor\o{u}vka's
algorithm (\ref{contbor}) finds the MST of any graph of this class in time
$\O(n)$. (The constant hidden in the~$\O$ depends on the class.)
\figure{hexangle.eps}{\epsfxsize}{The construction from Remark~\ref{hexa}}
\rem
The observation in~Theorem~\ref{mstmcc} was also independently made by Gustedt \cite{gustedt:parallel},
who studied a~parallel version of the Contractive Bor\o{u}vka's algorithm applied
to minor-closed classes.
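The contraction-based approach can be sketched in a few lines. The following
Python snippet is only an illustration of the principle: the union-find
representation of contracted vertices and the edge-list scan are assumptions
of this sketch, not the exact procedure analyzed above.

```python
def boruvka_mst(n, edges):
    """MST of a connected graph with distinct edge weights.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))              # union-find over contracted vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:                # one iteration = one Boruvka phase
        best = {}                        # cheapest outgoing edge per component
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue                 # loops created by contraction are skipped
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, u, v)
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:                 # contract the selected edge
                parent[ru] = rv
                mst.append((w, u, v))
                components -= 1
    return mst
```

Each phase at least halves the number of components, which gives the familiar
$\O(\log n)$ bound on the number of phases; the linearity on minor-closed
classes comes from the bounded density of the contracted graphs.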
\cor
For graphs with edge density $\Omega(\log n)$, this algorithm runs in linear time.
\remn{Other heaps}%
We can consider using other kinds of heaps that have the property that inserts
\algin A~graph~$G$ with an edge comparison oracle.
\:$T\=\emptyset$. \cmt{edges of the MST}
\:$\ell(e)\=e$ for all edges~$e$. \cmt{edge labels as usual}
\:$m_0\=m$. \cmt{in the following, $n$ and $m$ will change with the graph}
\:While $n>1$: \cmt{We will call iterations of this loop \df{phases}.}
\::$F\=\emptyset$. \cmt{forest built in the current phase}
\::$t\=2^{\lceil 2m_0/n \rceil}$. \cmt{the limit on heap size}
The Iterated Jarn\'\i{}k's algorithm runs in time $\O(m\log^* n)$.
\proof
$\beta(m,n) \le \beta(n,n) \le \log^* n$.
\qed
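The $\log^* n$ bound on the number of phases can also be checked by elementary
arithmetic: the heap-size limit~$t$ is exponentiated between phases, so it grows
as a tower of twos. The following tiny computation (an illustration, not part of
the algorithm) counts how many phases can occur before the limit reaches~$n$:

```python
def phases_until(n, t0=2):
    """Count phases until the heap-size limit t reaches n,
    assuming t is exponentiated (t -> 2^t) in every phase."""
    t, count = t0, 0
    while t < n:
        t = 2 ** t
        count += 1
    return count

# phases_until(10**9) == 4: the limits 2, 4, 16, 65536, 2^65536
# form a tower of twos, so only O(log* n) phases can occur.
```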
\cor\id{ijdens}%
For every edge~$e=uv$, we consider the set $Q_e$ of all query paths containing~$e$.
The vertex of a~path that is closer to the root will be called the \df{top} of the path,
the other vertex its \df{bottom.}
We define arrays $T_e$ and~$P_e$ as follows: $T_e$~contains
the tops of the paths in~$Q_e$ in order of their increasing depth (we
will call them \df{active tops} and each of them will be stored exactly once). For
each active top~$t=T_e[i]$, we define $P_e[i]$ as the peak of the path $T[v,t]$.
going from the $(i+1)$-th to the $i$-th level of the tree.
The levels are numbered from the bottom, so leaves are at level~0 and the root
is at level $\ell\le \lceil \log_2 n\rceil$. There are $n_i\le n/2^i$ vertices
at the $i$-th level, so we consider exactly $n_i$ edges. To avoid taking a~logarithm\foot{All logarithms are binary.}
of zero, we define $\vert T_e\vert=1$ for $T_e=\emptyset$.
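It may help to keep in mind what quantity all this machinery computes: the peak,
i.e., the maximum-weight edge, on each query path. The following naive per-query
walk is a correctness baseline only; it does not use the $T_e$/$P_e$ compression,
and it numbers depths from the root (the opposite of the level numbering above).
The array names are illustrative.

```python
def path_peak(parent, wt, depth, u, v):
    """Maximum edge weight on the tree path between u and v (u != v).
    parent[x] and wt[x] describe the edge from x to its parent;
    depth[x] is the distance from the root."""
    peak = None
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u              # always lift the deeper endpoint
        if peak is None or wt[u] > peak:
            peak = wt[u]
        u = parent[u]                # climb one edge towards the root
    return peak
```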
\def\eqalign#1{\null\,\vcenter{\openup\jot
\ialign{\strut\hfil$\displaystyle{##}$&$\displaystyle{{}##}$\hfil
comparisons. Since we (as always) assume that~$G$ is connected, $\O(m+n)=\O(m)$.
\qed
\paran{Other applications}%
The problem of computing path maxima or minima in a~weighted tree has several other interesting
applications. One of them is computing minimum cuts separating given pairs of vertices in a~given
weighted undirected graph~$G$. We construct a~Gomory-Hu tree~$T$ for the graph as described in \cite{gomoryhu}
\qed
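Once a Gomory-Hu tree is available (its construction is outside the scope of
this sketch), each minimum-cut query reduces to a path minimum in the tree: the
minimum $u$--$v$ cut in~$G$ equals the smallest edge weight on the $u$--$v$ path
in~$T$. A naive illustrative query routine, with assumed array names:

```python
def min_cut_query(parent, wt, depth, u, v):
    """Minimum edge weight on the tree path between u and v (u != v),
    i.e., the value of the minimum u-v cut for a Gomory-Hu tree."""
    best = float('inf')
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u              # lift the deeper endpoint
        best = min(best, wt[u])      # edge from u to its parent
        u = parent[u]
    return best
```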
\lemma\id{verfh}%
\<FindPeaks> processes an~edge~$e$ in time $\O(\log \vert T_e\vert + q_e)$,
where $q_e$~is the number of query paths having~$e$ as their bottom edge.
\proof
\qeditem
\endlist
\>We are now ready to combine these steps and get the following theorem:
\thmn{Verification of MST on the RAM}\id{ramverify}%
There is a~RAM algorithm which for every weighted graph~$G$ and its spanning tree~$T$
flips in step~2 of this algorithm. We also know that the algorithm stops before
it adds $n$~edges to~$F$. Therefore it flips at most as many coins as a~simple
random process that repeatedly flips until it gets~$n$ heads. As waiting for
every occurrence of heads takes expected time~$1/p$ (the number of flips per
head is geometrically distributed), waiting for~$n$ heads takes expected
time $n/p$ by linearity of expectation. This is the bound we wanted to achieve.
\qed
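The waiting-time argument is easy to check empirically. The following sketch
(with arbitrarily chosen parameters) estimates the expected number of flips
needed to see $n$~heads and compares it with $n/p$:

```python
import random

def flips_until_heads(n, p, rng):
    """Flip a coin with heads probability p until n heads appear."""
    flips = heads = 0
    while heads < n:
        flips += 1
        if rng.random() < p:
            heads += 1
    return flips

rng = random.Random(42)
n, p, trials = 100, 0.5, 2000
avg = sum(flips_until_heads(n, p, rng) for _ in range(trials)) / trials
# avg should be close to n/p = 200
```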
\para
of the plain contractive algorithm, while the average case is linear.
\lemma
For every subproblem~$G_v$, the KKT algorithm spends $\O(m_v+n_v)$ time plus the cost
of the recursive calls.
\proof
We know from Lemma \ref{contiter} that each Bor\o{u}vka step takes time $\O(m_v+n_v)$.\foot{We
need to add $n_v$, because the graph could be disconnected.}
The selection of the edges of~$H_v$ is straightforward.
Finding the $F_v$-heavy edges is not, but we have already shown in Theorem \ref{ramverify}
that linear time is sufficient on the RAM.
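The filtering step itself is conceptually simple: an edge is $F_v$-heavy when it
is heavier than the maximum-weight edge on the corresponding path in the
forest~$F_v$, and all such edges can be discarded. The following naive quadratic
illustration (not the linear-time RAM procedure; it assumes the relevant tree
paths exist and uses illustrative array names) keeps only the light edges:

```python
def filter_light(edges, parent, wt, depth):
    """Discard F-heavy edges: keep (w, u, v) only when w does not
    exceed the maximum edge weight on the u-v tree path."""
    def path_max(u, v):
        best = 0
        while u != v:
            if depth[u] < depth[v]:
                u, v = v, u          # lift the deeper endpoint
            best = max(best, wt[u])
            u = parent[u]
        return best
    return [(w, u, v) for (w, u, v) in edges if w <= path_max(u, v)]
```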