}
At the beginning, the graph contains no edges, so both invariants are trivially
-satisfied. Newly inserted edges can enter level~0, which cannot break I1 nor~I2.
+satisfied. Newly inserted edges enter level~0, which can break neither I1 nor~I2.
-When we delete a~tree edge at level~$\ell$, we split a~tree~$T$ of~$F_\ell$ to two
+When we delete a~tree edge at level~$\ell$, we split a~tree~$T$ of~$F_\ell$ into two
trees $T_1$ and~$T_2$. Without loss of generality, let us assume that $T_1$ is the
level that connects the spanning tree back. From I1, we know that such an~edge cannot belong to
a~level greater than~$\ell$, so we start looking for it at level~$\ell$. According
to~I2, the tree~$T$ had at most $\lfloor n/2^\ell\rfloor$ vertices, so $T_1$ has
-at most $\lfloor n/2^{\ell+1} \rfloor$ of them. Thus we can increase the levels
-of all edges of~$T_1$ without violating either invariant.
+at most $\lfloor n/2^{\ell+1} \rfloor$ of them. Thus we can move all level~$\ell$
+edges of~$T_1$ to level~$\ell+1$ without violating either invariant.
We now start enumerating the non-tree edges incident with~$T_1$. Each such edge
-is either local to~$T_1$ or it joins $T_1$ with~$T_2$. We will therefore check each edge
-whether its other endpoint lies in~$T_2$ and if it does, we have found the replacement
+is either local to~$T_1$ or it joins $T_1$ with~$T_2$. We will therefore check for each edge
+whether its other endpoint lies in~$T_2$, and if it does, we have found the replacement
edge, so we insert it to~$F_\ell$ and stop. Otherwise we move the edge one level up. (This
-will be the grist for the mill of our amortization argument: We can charge most of the work at level
+will be the grist for the mill of our amortization argument: We can charge most of the work on level
increases and we know that the level of each edge can reach at most~$L$.)
If the non-tree edges at level~$\ell$ are exhausted, we try the same in the next
remains disconnected.
\impl
-For each level, we will use a~separate ET-tree ${\cal E}_\ell$ with~$a$ set to~2,
-which will represent the forest~$F_i$ and the non-tree edges at that particular level.
+For each level~$\ell$, we will use a~separate ET-tree ${\cal E}_\ell$ with~$a$ set to~2,
+which will represent the forest~$F_\ell$ and the non-tree edges at that particular level.
Besides operations on the non-tree edges, we also need to find the tree edges of level~$\ell$
when we want to bring them one level up. This can be accomplished either by modifying the ET-trees
-to attach two lists of edges attached to vertices instead of one, or by using a~second ET-tree.
+to attach two lists of edges to each vertex instead of one, or by using a~second ET-tree.
\algout The replacement edge.
\endalgo
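+
+\rem
+To make the level mechanism concrete, here is a~small Python sketch of one round of the
+replacement edge search at a~single level. It is only an~illustration: plain per-vertex
+sets stand in for the ET-trees, the vertex sets of $T_1$ and~$T_2$ are assumed to be
+already known, and all names are ad-hoc, so the sketch captures the handling of the
+levels and of the invariants, not the logarithmic time bounds.
+
+from collections import defaultdict
+
+class LevelStructure:
+    """Bookkeeping for a single level: the tree edges and the non-tree edges of that
+    level, indexed by their endpoints. Plain sets stand in for the ET-trees."""
+    def __init__(self):
+        self.tree_edges = defaultdict(set)
+        self.nontree_edges = defaultdict(set)
+
+    def add(self, edge, is_tree):
+        store = self.tree_edges if is_tree else self.nontree_edges
+        for endpoint in edge:
+            store[endpoint].add(edge)
+
+    def remove(self, edge, is_tree):
+        store = self.tree_edges if is_tree else self.nontree_edges
+        for endpoint in edge:
+            store[endpoint].discard(edge)
+
+def search_level(levels, ell, t1, t2):
+    """One round of the replacement edge search at level ell. `levels` is a list of
+    LevelStructure objects, `t1` and `t2` are the vertex sets of the two trees created
+    by the deletion (finding them quickly is the job of the ET-trees and is omitted).
+    Returns a replacement edge joining t1 with t2, or None."""
+    # Move all tree edges of T1 of this level one level up, as allowed by I2.
+    for v in t1:
+        for edge in list(levels[ell].tree_edges[v]):
+            levels[ell].remove(edge, True)
+            levels[ell + 1].add(edge, True)
+    # Scan the non-tree edges of this level incident with T1.
+    for v in t1:
+        for edge in list(levels[ell].nontree_edges[v]):
+            u, w = edge
+            other = w if v == u else u
+            if other in t2:
+                return edge   # replacement found; the caller inserts it into the forest
+            # The edge is local to T1: move it one level up and charge the work to that.
+            levels[ell].remove(edge, False)
+            levels[ell + 1].add(edge, False)
+    return None
+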
-\>As promised, time complexity will be analysed by amortization on the levels.
+\>As foretold, time complexity will be analysed by amortization on the levels.
\thmn{Fully dynamic connectivity, Holm et al.~\cite{holm:polylog}}\id{dyncon}%
Dynamic connectivity can be maintained in time $\O(\log^2 n)$ amortized per
\section{Dynamic spanning forests}\id{dynmstsect}%
-Let us turn our attention back to the dynamic MSF now.
+Let us turn our attention back to the dynamic MSF.
Most of the early algorithms for dynamic connectivity also imply $\O(n^\varepsilon)$
algorithms for dynamic maintenance of the MSF. Henzinger and King \cite{henzinger:twoec,henzinger:randdyn}
have generalized their randomized connectivity algorithm to maintain the MSF in $\O(\log^5 n)$ time per
the lightest possible choice. We will therefore use the weighted version of the ET-trees (Corollary \ref{wtet})
and scan the lightest non-tree edge incident with the examined tree first. We must ensure
that the lower levels cannot contain a~lighter replacement edge, but fortunately the
-light edges tend to ``bubble up'' in the hierarchy of levels. This can be formalized as
-the following invariant:
+light edges tend to ``bubble up'' in the hierarchy of levels. This can be formalized
+in the form of the following invariant:
{\narrower
\def\iinv{{\bo I\the\itemcount~}}
the edges $f_1$ and~$f_2$, which together form a~simple cycle.
\qed
-We now have to make sure that the additional invariant is indeed observed:
+We now have to make sure that the additional invariant is really observed:
\lemma\id{ithree}%
After every operation, the invariant I3 is satisfied.
computation just before this increase and we will prove that all other edges
of~$C$ already are at levels greater than~$\ell$, so the violation cannot occur.
-Let us first show this for edges of~$C$ incident with~$T_1$. All edges of~$T_1$ itself
+Let us first show that this holds for the edges of~$C$ incident with~$T_1$. All edges of~$T_1$ itself
already are at the higher levels as they were moved there at the very beginning of the
search for the replacement edge. The other tree edges incident with~$T_1$ would have
lower levels, which is impossible since the invariant would be already violated.
When an~edge is inserted, we union it with some of the $A_i$'s, build a~new decremental structure
and amortize the cost of the build over the insertions. Deletes of non-tree edges are trivial.
-Delete of a~non-tree edge is performed on all $A_i$'s containing it and the replacement edge is
+Delete of a~tree edge is performed on all $A_i$'s containing it and the replacement edge is
sought among the replacement edges found in these structures. The unused replacement edges then have
to be reinserted back to the structure.
likely hit I1 before we managed to skip the levels of all the heaviest edges on the
particular cycles.
-On the other hand, if we decided to drop I3, we would encounter different problems. The ET-trees can
+On the other hand, if we decided to drop I3, we would encounter different obstacles. The ET-trees can
-bring the lightest non-tree incident with the current tree~$T_1$, but the lightest replacement edge
+bring the lightest non-tree edge incident with the current tree~$T_1$, but the lightest replacement edge
could also be located in the super-trees of~$T_1$ at the lower levels, which are too large to scan
and both I1 and I2 prevent us from charging the time on increasing levels there.
edges have the same weight. This leads to the following algorithm for dynamic MSF
on~graphs with a~small set of allowed edge weights. It is based on an~idea similar
to the $\O(k\log^3 n)$ algorithm of Henzinger and King \cite{henzinger:randdyn},
-but adapted to use the better results on dynamic connectivity we have at hand.
+but we have adapted it to use the better results on dynamic connectivity we have at hand.
\paran{Dynamic MSF with limited edge weights}%
Let us assume for a~while that our graph has edges of only two different weights (let us say
the MST's from Section \ref{mstbasics} still hold.
-We split the graph~$G$ to two subgraphs~$G_1$ and~$G_2$ according to the edge
+We split the graph~$G$ into two subgraphs~$G_1$ and~$G_2$ according to the edge
-weights. We use one instance~$\C_1$ of the dynamic connectivity algorithm maintaining
+weights. We use one instance~$\C_1$ of the dynamic connectivity algorithm to maintain
an~arbitrary spanning forest~$F_1$ of~$G_1$, which is obviously minimum. Then we add
another instance~$\C_2$ to maintain a~spanning forest~$F_2$ of the graph $G_2\cup F_1$
such that all edges of~$F_1$ are forced to be in~$F_2$. Obviously, $F_2$~is the
MSF of the whole graph~$G$ --- if any edge of~$F_1$ were not contained in~$\msf(G)$,
-we could use the standard exchange argument to create an~even lighter spanning tree.\foot{This
-is of course the Blue lemma in action, but we have to be careful as we did not have proven it
-for graphs with multiple edges of the same weight.}
+we could use the standard exchange argument to create an~even lighter spanning tree.
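+
+\rem
+The construction can be illustrated by a~naive Python sketch that recomputes both forests
+by a~greedy Kruskal-like procedure instead of maintaining them by the dynamic connectivity
+structures $\C_1$ and~$\C_2$. It therefore shows only the invariant that forcing~$F_1$
+into~$F_2$ yields a~minimum spanning forest of~$G$, not the update times; all names are
+ad-hoc.
+
+class UnionFind:
+    def __init__(self, n):
+        self.parent = list(range(n))
+    def find(self, x):
+        while self.parent[x] != x:
+            self.parent[x] = self.parent[self.parent[x]]
+            x = self.parent[x]
+        return x
+    def union(self, x, y):
+        rx, ry = self.find(x), self.find(y)
+        if rx == ry:
+            return False
+        self.parent[rx] = ry
+        return True
+
+def spanning_forest(n, edges, forced=()):
+    """Greedy spanning forest of `edges` on vertices 0..n-1, inserting `forced` first."""
+    uf, forest, skip = UnionFind(n), [], set(forced)
+    for u, v in list(forced) + [e for e in edges if e not in skip]:
+        if uf.union(u, v):
+            forest.append((u, v))
+    return forest
+
+def layered_msf(n, weight1_edges, weight2_edges):
+    f1 = spanning_forest(n, weight1_edges)             # arbitrary forest of G_1, hence minimum
+    f2 = spanning_forest(n, weight2_edges, forced=f1)  # edges of F_1 forced into F_2
+    return f2                                          # a minimum spanning forest of G
+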
When a~weight~2 edge is inserted to~$G$, we insert it to~$\C_2$ and it either enters~$F_2$
or becomes a~non-tree edge. Similarly, deletion of a~weight~2 edge is a~pure deletion in~$\C_2$,
because such edges can be replaced only by other weight~2 edges.
Insertion of edges of weight~1 needs more attention: We insert the edge to~$\C_1$. If~$F_1$
-stays unchanged, we are done. If the new edge enters~$F_1$, we use Sleator-Tarjan trees
+stays unchanged, we are done. If the new edge enters~$F_1$, we use a~Sleator-Tarjan tree
kept for~$F_2$ to check if the new edge covers some tree edge of weight~2. If this is not
the case, we insert the new edge to~$\C_2$ and hence also to~$F_2$ and we are done.
Otherwise we exchange one of the covered weight~2 edges~$f$ for~$e$ in~$\C_2$. We note
We will show a~variant of their algorithm based on the MST verification
procedure of Section~\ref{verifysect}.
-In this section, we will require the edge weights to be real numbers (or integers), because
+In this section, we will require the edge weights to be numeric, because
comparisons are certainly not sufficient to determine the second best spanning tree. We will
assume that our computation model is able to add, subtract and compare the edge weights
in constant time.
\proof
We know from the Monotone exchange lemma (\ref{monoxchg}) that $T_1$ can be transformed
-to~$T_i$ by a~sequence of edge exchanges which never decreases tree weight. The last
+to~$T_i$ by a~sequence of edge exchanges which never decrease tree weight. The last
exchange in this sequence therefore obtains~$T_i$ from a~tree of the desired properties.
\qed
edge exchange. It remains to find which exchange it is. Let us consider the exchange
of an~edge $f\in E\setminus T_1$ with an~edge $e\in T_1[f]$. We get a~tree $T_1-e+f$
of weight $w(T_1)-w(e)+w(f)$. To obtain~$T_2$, we have to find~$e$ and~$f$ such that the
-difference $w(f)-w(e)$ is the minimum possible. Thus for every~$f$, the edge $e$~must be always
-the heaviest on the path $T_1[f]$. We can now apply the algorithm from Corollary \ref{rampeaks}
+difference $w(f)-w(e)$ is the minimum possible. Thus for every~$f$, the edge~$e$ must be
+the heaviest on the path $T_1[f]$. We can apply the algorithm from Corollary \ref{rampeaks}
and find the heaviest edges (peaks) of all such paths and thus examine all possible choices of~$f$
in linear time. So we get:
edges $e\in T$ and $f\in H\setminus T$ such that $w(f)-w(e)$ is minimum.
\nota
-We will call this \df{finding the best exchange in $(H,T)$.}
+We will call this procedure \df{finding the best exchange in $(H,T)$.}
\cor
Given~$G$ and~$T_1$, we can find~$T_2$ in time $\O(m)$.
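+
+\rem
+A~naive Python sketch of finding the best exchange: for every non-tree edge~$f$ it walks
+the path $T_1[f]$ by a~breadth-first search and takes its heaviest edge, so it runs in
+time $\O(mn)$ instead of the linear time provided by the peak-finding procedure. It is
+meant only to show what is being computed; the function name is ad-hoc.
+
+from collections import deque
+
+def best_exchange(n, edges, tree):
+    """edges and tree are lists of (u, v, w) triples with vertices 0..n-1;
+    tree must be a subset of edges forming a spanning tree."""
+    adjacency = {v: [] for v in range(n)}
+    for (u, v, w) in tree:
+        adjacency[u].append((v, w))
+        adjacency[v].append((u, w))
+
+    def heaviest_on_path(s, t):
+        # BFS through the tree, remembering the heaviest edge on the path from s.
+        best = {s: (float('-inf'), None)}
+        queue = deque([s])
+        while queue:
+            x = queue.popleft()
+            for (y, w) in adjacency[x]:
+                if y not in best:
+                    hw, he = best[x]
+                    best[y] = (w, (x, y, w)) if w > hw else (hw, he)
+                    queue.append(y)
+        return best[t][1]
+
+    tree_set = set(tree)
+    best_pair, best_diff = None, float('inf')
+    for f in edges:
+        if f in tree_set or f[0] == f[1]:   # skip tree edges and self-loops
+            continue
+        e = heaviest_on_path(f[0], f[1])
+        if f[2] - e[2] < best_diff:
+            best_diff, best_pair = f[2] - e[2], (e, f)
+    return best_pair    # exchanging e for f in T_1 yields the second best spanning tree
+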
\paran{Further spanning trees}%
The construction of auxiliary graphs can be iterated to obtain $T_1,\ldots,T_K$
for an~arbitrary~$K$. We will build a~\df{meta-tree} of auxiliary graphs. Each node of this meta-tree
-is assigned a~graph\foot{This graph is always derived from~$G$ by a~sequence of edge deletions
+carries a~graph\foot{This graph is always derived from~$G$ by a~sequence of edge deletions
and contractions. It is tempting to say that it is a~minor of~$G$, but this is not true as we
preserve multiple edges.} and its minimum spanning tree. The root node contains~$(G,T_1)$,
its sons have $(G_1,T_1/e)$ and $(G_2,T_2)$. When $T_3$ is obtained by an~exchange
-in one of these sons, we attach two new leaves to that son and we assign to them the two auxiliary
+in one of these sons, we attach two new leaves to that son and we let them carry the two auxiliary
graphs derived by contracting or deleting the exchanged edge. Then we find the best
edge exchanges among all leaves of the new meta-tree and repeat the process. By Observation \ref{tbobs},
each spanning tree of~$G$ is generated exactly once. The Difference lemma guarantees that
Given~$G$ and~$T_1$, we can find $T_2,\ldots,T_K$ in time $\O(Km + K\log K)$.
\proof
-Generating each~$T_i$ requires finding the best exchange for two graphs and $\O(1)$
+Generating each~$T_i$ requires finding the best exchange for two graphs and also $\O(1)$
operations on the heap. The former takes $\O(m)$ according to Corollary \ref{rampeaks},
and each heap operation takes $\O(\log K)$.
\qed
+\rem
+The meta-tree is not needed for the actual operation of the algorithm --- it suffices
+to keep its leaves in the heap.
+
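+\rem
+The enumeration can be sketched in Python as follows. The sketch keeps the heap, but it
+replaces the meta-tree of auxiliary graphs by explicit subproblems given by forced and
+excluded edges, whose minimum trees are recomputed from scratch by a~constrained Kruskal,
+so its running time is far from $\O(Km + K\log K)$; only the branching and the heap are
+illustrated, and all names are ad-hoc.
+
+import heapq
+from itertools import count
+
+def constrained_mst(n, edges, include, exclude):
+    """Minimum spanning tree (as a list of edge indices) that uses every index in
+    `include` and avoids every index in `exclude`, or None if none exists."""
+    parent = list(range(n))
+    def find(x):
+        while parent[x] != x:
+            parent[x] = parent[parent[x]]
+            x = parent[x]
+        return x
+    tree, weight = [], 0
+    order = sorted(range(len(edges)), key=lambda i: (i not in include, edges[i][2]))
+    for i in order:
+        if i in exclude and i not in include:
+            continue
+        u, v, w = edges[i]
+        ru, rv = find(u), find(v)
+        if i in include and ru == rv:
+            return None                      # a forced edge closes a cycle
+        if ru != rv:
+            parent[ru] = rv
+            tree.append(i)
+            weight += w
+    return (weight, tree) if len(tree) == n - 1 else None
+
+def best_spanning_trees(n, edges, K):
+    """Yield up to K pairs (weight, tree) in nondecreasing order of weight."""
+    tick = count()                           # tie-breaker for the heap
+    first = constrained_mst(n, edges, frozenset(), frozenset())
+    if first is None:
+        return
+    heap = [(first[0], next(tick), first[1], frozenset(), frozenset())]
+    while heap and K > 0:
+        weight, _, tree, include, exclude = heapq.heappop(heap)
+        yield weight, [edges[i] for i in tree]
+        K -= 1
+        # Partition the remaining trees of this subproblem by the first tree edge they
+        # avoid, just as the meta-tree branches on the exchanged edge.
+        inc = set(include)
+        for i in tree:
+            if i in include:
+                continue
+            sub = constrained_mst(n, edges, frozenset(inc), exclude | {i})
+            if sub is not None:
+                heapq.heappush(heap, (sub[0], next(tick), sub[1], frozenset(inc), exclude | {i}))
+            inc.add(i)
+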
\paran{Arbitrary weights}%
While the assumption that the weights of all spanning trees are distinct has helped us
in thinking about the problem, we should not forget that it is somewhat unrealistic.
increase $w(e_i)$ by~$\delta_i = \delta/2^{i+1}$. The cost of every spanning tree
has increased by at most $\sum_i\delta_i < \delta/2$, so if $T$~was lighter
than~$T'$, it still is. On the other hand, no two trees share the same
-weight difference, so all tree weights are now distinct.
+weight adjustment, so all tree weights are now distinct.
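+To see that the adjustments indeed differ, note that distinct trees differ in their
+edge sets and that distinct subsets of the $\delta_i$'s have distinct sums: for every~$i$
+we have
+$$\delta_i = \delta/2^{i+1} > \sum_{j>i} \delta/2^{j+1} = \sum_{j>i} \delta_j,$$
+so the smallest index in which two edge sets differ already decides which of the two
+adjustments is larger.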
The exact value of~$\delta$ is not easy to calculate, but closer inspection of the algorithm
reveals that it is not needed at all. The only place where the edge weights are examined
case in most applications) by the reduction of Eppstein \cite{eppstein:ksmallest}.
We will observe that there are many edges of~$T_1$
which are guaranteed to be contained in $T_2,\ldots,T_K$ as well, and likewise there are
-many edges of $G\setminus T_1$ which are excluded from those spanning trees.
+many edges of $G\setminus T_1$ which are excluded from all those spanning trees.
The idea is the following (again assuming that the tree weights are distinct):
\defn
For an~edge $e\in T_1$, we define its \df{gain} $g(e)$ as the minimum weight gained by exchanging~$e$
for another edge. Similarly, we define the gain $G(f)$ for $f\not\in T_1$. Put formally:
$$\eqalign{
-g(e) &:= \min\{ w(f)-w(e) \mid f\in E, e\in T[f] \} \cr
+g(e) &:= \min\{ w(f)-w(e) \mid f\in E\setminus T_1, e\in T[f] \}, \cr
G(f) &:= \min\{ w(f)-w(e) \mid e\in T[f] \}.\cr
}$$
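+For example, if $T_1$ consists of two edges $e_1$ and~$e_2$ with $w(e_1)=1$ and $w(e_2)=2$
+and the only non-tree edge~$f$ with $w(f)=5$ forms a~triangle with them, then
+$T_1[f]=\{e_1,e_2\}$, so $G(f)=\min(5-1,5-2)=3$, while $g(e_1)=5-1=4$ and $g(e_2)=5-2=3$,
+because $f$~is the only edge whose path covers either of them.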
The best exchanges in~$T_1$ involving $t_1,\ldots,t_{K-1}$ produce~$K-1$ spanning trees
of increasing weights. Any exchange involving $t_K,\ldots,t_n$ produces a~tree
-which is heavier or equal than all those trees. (We are ascertained by the Monotone exchange lemma
-that the gain of such exchanges cannot be reverted by any later exchanges.)
+which is at least as heavy as all those trees. (The Monotone exchange lemma guarantees
+that the gain of such exchanges cannot be undone by any later exchanges.)
\qed
\lemma\id{gainb}%
Another nice application of Theorem \ref{kbestthm} is finding all minimum spanning
trees in a~graph that does not have distinct edge weights. We find a~single MST using
any of the algorithms of the previous chapters and then we use the enumeration algorithm
-of this section to find further spanning trees as long as their weights are minimum.
+of this section to find further spanning trees as long as their weights are equal to the minimum.
We can even use the reduction of the number of edges from Lemmata \ref{gaina} and \ref{gainb}:
we start with some fixed~$K$ and when we exhaust all~$K$ trees, we double~$K$ and restart
-the whole process. The extra time spent on these restarts is bounded by the time of the
+the whole process. The extra time spent on these restarts is dominated by the time of the
final pass.
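+Indeed, if the $j$-th pass is run with $K_j = 2^jK_0$ and the final pass with~$K_t$, then
+the passes together cost
+$$\sum_{j\le t} \O(K_j m + K_j\log K_j) = \O(K_t m + K_t\log K_t),$$
+because the $K_j$ form a~geometric sequence, so all passes together are at most a~constant
+multiple of the final one.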
This finally settles the question that we have asked ourselves in Section \ref{mstbasics},
-namely whether we lose anything by assuming that all weights are distinct and searching
-for the single minimum tree.
+namely whether we lose anything by assuming that all weights are distinct and by searching
+for just the single minimum tree.
\endpart