MST verification has been studied by Koml\'os \cite{komlos:verify}, who has
proven that $\O(m)$ edge comparisons are sufficient, but his algorithm needed
super-linear time to find the edges to compare. Dixon, Rauch and Tarjan
have later shown in \cite{dixon:verify} that the overhead can be reduced
to linear time on the RAM using preprocessing and table lookup on small
subtrees. Later, King has given a~simpler algorithm in \cite{king:verifytwo}.
subgraph, only a~small expected number of edges remains.
Selecting a~subgraph at random will unavoidably produce disconnected subgraphs
on occasion, so we will drop the implicit assumption that all graphs are
connected for this section and we will always search for the minimum spanning forest.
As we already noted (\ref{disconn}), with a~little bit of care our
algorithms and theorems keep working.
by their weights and we start with an~empty forest~$F$. For each edge, we first
flip a~biased coin (which gives heads with probability~$p$) and if it comes up
tails, we discard the edge. Otherwise we perform a~single step of Kruskal's
algorithm: We check whether $F+e$ contains a~cycle. If it does, we discard~$e$, otherwise
we add~$e$ to~$F$. At the end, we have produced the subgraph~$H$ and its MSF~$F$.
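This sampling step can be sketched as follows. This is a~minimal Python illustration
(all identifiers are ours), with a~union-find structure standing in for the cycle test:

```python
import random

class DSU:
    """Union-find used to detect whether F + e would close a cycle."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v
    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False          # F + e would contain a cycle
        self.parent[ru] = rv
        return True

def sample_and_msf(n, edges, p, rng=random.random):
    """Process edges in increasing weight order; keep each edge with
    probability p and run one Kruskal step on the kept ones.
    Returns (H, F): the sampled subgraph and its minimum spanning forest."""
    dsu = DSU(n)
    H, F = [], []
    for w, u, v in sorted(edges):
        if rng() >= p:            # the coin came up tails: discard the edge
            continue
        H.append((w, u, v))
        if dsu.union(u, v):       # F + e is acyclic: add e to the forest
            F.append((w, u, v))
    return H, F
```

With $p=1$ the procedure degenerates to plain Kruskal and~$F$ is the MSF of the whole graph.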
When we exchange the check for cycles with flipping the coin, we get an~equivalent
We could also use a~slightly different formulation of the sampling lemma
suggested by Chan \cite{chan:backward}. It changes the selection of the subgraph~$H$
to choosing an~$mp$-edge subset of~$E(G)$ uniformly at random. The proof is then
a~straightforward application of the backward analysis method. We however preferred
Karger's original version, because generating a~random subset of a~given size
requires an~unbounded number of random bits in the worst case.
}
At the beginning, the graph contains no edges, so both invariants are trivially
satisfied. Newly inserted edges can enter level~0, which cannot break I1 nor~I2.
When we delete a~tree edge at level~$\ell$, we split a~tree~$T$ of~$F_\ell$ to two
trees $T_1$ and~$T_2$. Without loss of generality, let us assume that $T_1$ is the
To prove that the procedure stops, let us notice that no edge is ever recolored,
so we must run out of black edges after at most~$m$ steps. Recoloring
to the same color is avoided by the conditions built into the rules; recoloring to
a~different color would mean that the edge would be both inside and outside~$T_{min}$
due to our Red and Blue lemmata.
When no further rules can be applied, the Black lemma guarantees that all edges
labels of the trees they belong to. We scan all edges, map their endpoints
to the particular trees and for each tree we maintain the lightest incident edge
so far encountered. Instead of merging the trees one by one (which would be too
slow), we build an auxiliary graph whose vertices are the labels of the original
trees and edges correspond to the chosen lightest inter-tree edges. We find connected
components of this graph; these determine how the original labels are translated
to the new labels.
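The relabeling round just described can be sketched like this (an illustrative
Python fragment; the function and variable names are ours):

```python
from collections import defaultdict

def boruvka_merge(label, edges):
    """One Boruvka-style merging round. `label[v]` is the tree label of
    vertex v, `edges` is a list of (weight, u, v) tuples; returns a
    mapping from every old tree label to its new (merged) label."""
    # For each tree, remember the lightest incident inter-tree edge.
    lightest = {}
    for w, u, v in edges:
        lu, lv = label[u], label[v]
        if lu == lv:
            continue                      # not an inter-tree edge
        for l in (lu, lv):
            if l not in lightest or (w, lu, lv) < lightest[l]:
                lightest[l] = (w, lu, lv)
    # Auxiliary graph: vertices are tree labels, edges the chosen ones.
    aux = defaultdict(set)
    for w, lu, lv in lightest.values():
        aux[lu].add(lv)
        aux[lv].add(lu)
    # Connected components of the auxiliary graph give the new labels.
    new_label = {}
    for start in set(label):
        if start in new_label:
            continue
        new_label[start] = start
        stack = [start]
        while stack:
            x = stack.pop()
            for y in aux[x]:
                if y not in new_label:
                    new_label[y] = start
                    stack.append(y)
    return new_label
```

Each round thus costs one scan of the edges plus a~components computation on a~graph
with one vertex per tree.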
\proof
Every spanning tree of~$G'$ is a spanning tree of~$G$. In the other direction:
Loops can never be contained in a spanning tree. If there is a spanning tree~$T$
containing a~removed edge~$e$ parallel to an edge~$e'\in G'$, exchanging $e'$
for~$e$ makes~$T$ lighter. (This is indeed the multigraph version of the Red
for~$e$ makes~$T$ lighter. (This is indeed the multigraph version of the Red
lemma applied to a~two-edge cycle, as we will see in \ref{multimst}.)
\qed
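The simplification this lemma justifies, discarding loops and keeping only the
lightest edge of every parallel class, can be sketched as follows (illustrative
Python, our names):

```python
def simplify(edges):
    """Drop loops and, for every pair of endpoints, keep only the lightest
    parallel edge (ties broken arbitrarily). Edges are (weight, u, v)."""
    best = {}
    for w, u, v in edges:
        if u == v:
            continue                      # a loop never enters a spanning tree
        key = (min(u, v), max(u, v))      # parallel edges share this key
        if key not in best or w < best[key][0]:
            best[key] = (w, u, v)
    return sorted(best.values())
```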
\proof
The only non-trivial parts are steps 6 and~7. Contractions can be handled similarly
to the unions in the original Bor\r{u}vka's algorithm (see \ref{boruvkaiter}):
We build an auxiliary graph containing only the selected edges~$e_k$, find
connected components of this graph and renumber vertices in each component to
the identifier of the component. This takes $\O(m_i)$ time.
The \df{time complexity} of a~decision tree is defined as its depth. It therefore
bounds the number of comparisons spent on every path. (It need not be equal since
some paths need not correspond to an~actual computation --- the sequence of outcomes
on the path could be unsatisfiable.)
A~decision tree is called \df{optimal} if it is correct and its depth is minimum possible
among the correct decision trees for the given graph.
of graph encodings:
\defn
A~\df{canonical encoding} of a~given labeled graph represented by adjacency lists
is obtained by running the depth-first search on the graph and recording its traces.
We start with an~empty encoding. When we enter
a~vertex, we assign an~identifier to it (again using a~yardstick to represent numbers)
both in linear space.
The Fusion trees themselves have very limited use in graph algorithms, but the
principles behind them are ubiquitous in many other data structures and these
will serve us well and often. We are going to build the theory of Q-heaps in
Section \ref{qheaps}, which will later lead to a~linear-time MST algorithm
for arbitrary integer weights in Section \ref{iteralg}. Other such structures
$x_0,\ldots,x_{d-1}$.
\para
If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisfy
the condition $(b+1)d\le W$.
the condition $(b+1)d\le W$.
By using multiple-precision arithmetics, we can encode all vectors satisfying $bd=\O(W)$.
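A~possible encoding with one zero spacer bit above each $b$-bit field, which is
exactly what makes the condition $(b+1)d\le W$ appear, can be sketched as follows
(illustrative Python; arbitrary-precision integers play the role of machine words):

```python
def encode(xs, b):
    """Pack the vector x_0,...,x_{d-1} of b-bit values into one integer,
    reserving one zero spacer bit above every field (field width b+1)."""
    word = 0
    for i, x in enumerate(xs):
        assert 0 <= x < (1 << b)          # every component fits in b bits
        word |= x << (i * (b + 1))
    return word

def decode(word, b, d):
    """Inverse of encode: extract the d fields again."""
    mask = (1 << b) - 1
    return [(word >> (i * (b + 1))) & mask for i in range(d)]
```

The spacer bits stay zero in the encoding, leaving room for carries in the
word-parallel arithmetic tricks.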
We will now describe how to translate simple vector manipulations to sequences of $\O(1)$ RAM operations
A~\df{compressed trie} is obtained from the trie by removing the vertices of outdegree~1
except for the root and marked vertices.
Wherever there is a~directed path whose internal vertices have outdegree~1 and carry
no mark, we replace this path by a~single edge labeled with the concatenation
of the original edges' labels.
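The compression can be sketched on a~simple nested representation of tries
(our own, purely illustrative encoding: a~vertex is a~pair of its mark and
a~dictionary of outgoing edges):

```python
def compress(node):
    """node = (marked, children), where children maps an edge label to a child.
    Merge every maximal chain of unmarked outdegree-1 internal vertices into a
    single edge labeled with the concatenation of the original labels.
    The root itself is always kept, since only children chains are merged."""
    marked, children = node
    new_children = {}
    for label, child in children.items():
        # Follow the chain while the child is unmarked and has a single edge.
        while not child[0] and len(child[1]) == 1:
            (label2, child2), = child[1].items()
            label += label2
            child = child2
        new_children[label] = compress(child)
    return (marked, new_children)
```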
In both kinds of tries, we order the outgoing edges of every vertex by their labels
\proof
Let us analyse the above algorithms. The depth of the recursion is~$n$ and in each
nested invocation of the recursive procedure we perform a~constant number of operations.
All of them are either trivial, or calculations of factorials (which can be precomputed in~$\O(n)$ time),
or operations on the data structure.
\qed
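The ranking procedure analysed here is not reproduced in this excerpt; for
comparison, a~standard lexicographic ranking with precomputed factorials can
be sketched as follows (Python; the set~$A$ of unused elements is kept in
a~plain list here, so every step costs $\O(n)$ instead of the constant time
provided by the data structure):

```python
def lex_rank(perm):
    """Lexicographic rank of a permutation of {0,...,n-1}.
    Factorials are precomputed in O(n); the list of unused elements is a
    naive stand-in for the set data structure discussed in the text."""
    n = len(perm)
    fact = [1] * n
    for i in range(1, n):
        fact[i] = fact[i - 1] * i
    remaining = sorted(perm)
    rank = 0
    for i, x in enumerate(perm):
        j = remaining.index(x)        # number of smaller unused elements
        rank += j * fact[n - 1 - i]
        remaining.pop(j)
    return rank
```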
structure (i.e., on the sequence of deletes performed so far). We can observe that
although the algorithm no longer gives the lexicographic ranks, the unranking function
is still an~inverse of the ranking function, because the sequence of deletes
from~$A$ is the same when both ranking and unranking.
The implementation of the relaxed structure is straightforward. We store the set~$A$
in an~array~$\alpha$ and use the order of the elements in~$\alpha$ to determine the
side counts the same: We can obtain any such configuration by placing $k$~rooks
on~$H$ first, labeling them with elements of~$\{2,\ldots,x\}$, placing
additional $n-k$ rooks on the remaining rows and columns (there are $(n-k)!$ ways
how to do this) and labeling those of the new rooks standing on a~hole with~1.
\qed
\cor
The diversity of the characterizations of restricted permutations brings
both good and bad news. The good news is that we can use the
plethora of known results on bipartite matchings. Most importantly, we can efficiently
determine whether there exists at least one permutation satisfying a~given set of restrictions:
\thm
There is an~algorithm which decides in time $\O(n^{1/2}\cdot m)$ whether there exists
be looked up in a~table precalculated in quadratic time as shown in Corollary~\ref{nzeroprecalc}.
\qed
\endpart