lemma makes the reason clear:
\lemma\id{ijphase}%
-The $i$-th phase of the Iterated Jarn\'\i{}k's algorithm runs in time~$\O(m)$.
+Each phase of the Iterated Jarn\'\i{}k's algorithm runs in time~$\O(m)$.
\proof
-During the phase, the heap always contains at most~$t_i$ elements, so it takes
+During the $i$-th phase, the heap always contains at most~$t_i$ elements, so it takes
time~$\O(\log t_i)=\O(m/n_i)$ to delete an~element from the heap. The trees~$R_i^j$
are edge-disjoint, so there are at most~$n_i$ \<DeleteMin>'s over the course of the phase.
Each edge is considered at most twice (once per its endpoint), so the number
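+\para
+To make the phase structure concrete, here is a~minimal Python sketch of growing
+one tree of a~phase with a~binary heap. It is an~illustration only: the adjacency
+representation, names and stopping rule are assumptions, and unlike the real
+algorithm the sketch does not maintain the invariant that the heap holds at
+most~$t_i$ entries.
+
+import heapq
+
+def grow_tree(adj, root, t_limit):
+    # Grow one tree of the current phase from `root`, stopping once it
+    # reaches t_limit vertices or no edge leaves the tree. `adj` maps
+    # a vertex to a list of (weight, neighbour) pairs.
+    in_tree = {root}
+    tree_edges = []
+    heap = [(w, root, v) for (w, v) in adj[root]]
+    heapq.heapify(heap)
+    while heap and len(in_tree) < t_limit:
+        w, u, v = heapq.heappop(heap)          # lightest edge leaving the tree
+        if v in in_tree:
+            continue                           # stale entry, skip it
+        in_tree.add(v)
+        tree_edges.append((u, v, w))
+        for (w2, x) in adj[v]:
+            if x not in in_tree:
+                heapq.heappush(heap, (w2, v, x))
+    return in_tree, tree_edges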
the edges of~$P_e$ must have non-increasing weights, that is $w(P_e[i+1]) \le
w(P_e[i])$.
-\alg $\<FindPeaks>(u,p,T_p,P_p)$ --- process all queries in the subtree rooted
+\alg $\<FindPeaks>(u,p,T_p,P_p)$ --- process all queries located in the subtree rooted
at~$u$, entered from its parent via an~edge~$p$.
\id{findpeaks}
\proof
Implement Koml\'os's algorithm from Theorem \ref{verify} with the data
structures developed in this section.
-According to Lemma \ref{verfh}, it runs in time $\sum_e \O(\log\vert T_e\vert + q_e)
+According to Lemma \ref{verfh}, the algorithm runs in time $\sum_e \O(\log\vert T_e\vert + q_e)
= \O(\sum_e \log\vert T_e\vert) + \O(\sum_e q_e)$. The second sum is $\O(m)$
as there are $\O(1)$ query paths per edge; the first sum is $\O(\#\hbox{comparisons})$,
which is $\O(m)$ by Theorem \ref{verify}.
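+\para
+For contrast with the comparison-optimal procedure, the path maxima answered by
+\<FindPeaks> can also be computed naively by walking the tree. A~hedged Python
+sketch follows; the dictionaries {\tt parent}, {\tt depth} and {\tt wt} are
+assumptions of the sketch, with {\tt wt[x]} the weight of the edge from {\tt x}
+to its parent.
+
+def path_maximum(parent, depth, wt, u, v):
+    # Heaviest edge on the tree path between u and v, found by lifting
+    # the deeper endpoint until the two walks meet. Linear in the path
+    # length, so it only serves as a reference for what the queries
+    # return, not as a substitute for Komlos's method.
+    best = float('-inf')
+    while u != v:
+        if depth[u] < depth[v]:
+            u, v = v, u                        # make u the deeper endpoint
+        best = max(best, wt[u])                # edge from u to parent[u]
+        u = parent[u]
+    return best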
frequently arise in practice.
\paran{Graphs with sorted edges}
-When the edges are already sorted by their weights, we can use the Kruskal's
+When the edges of the given graph are already sorted by their weights, we can use Kruskal's
algorithm to find the MST in time $\O(m\times\alpha(n))$ (Theorem \ref{kruskal}).
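+For concreteness, a~hedged Python sketch of this pre-sorted variant (vertices are
+$0,\ldots,n-1$, the edge list is assumed to be sorted already, and the union-find
+uses union by size with path halving, which is enough for the $\alpha$ factor):
+
+def kruskal_sorted(n, edges):
+    # `edges` is a list of (weight, u, v) triples already sorted by
+    # weight, so no sorting step appears here at all.
+    parent = list(range(n))
+    size = [1] * n
+
+    def find(x):
+        while parent[x] != x:
+            parent[x] = parent[parent[x]]      # path halving
+            x = parent[x]
+        return x
+
+    mst = []
+    for w, u, v in edges:
+        ru, rv = find(u), find(v)
+        if ru != rv:                           # the edge joins two components
+            if size[ru] < size[rv]:
+                ru, rv = rv, ru
+            parent[rv] = ru                    # union by size
+            size[ru] += size[rv]
+            mst.append((w, u, v))
+    return mst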
We can, however, do better: As the minimality of a~spanning tree depends only on the
order of weights and not on the actual values (The Minimality Theorem, \ref{mstthm}), we can
If there was only a~single occurrence of~$v$, then $v$~was a~leaf and thus the sequence
transforms from $AuvuC$ to $AuC$ and $v$~alone.
-\em{Changing the root} of the tree~$T$ from~$v$ to~$w$ changes the sequence from $vAwBwCv$ to $wBwCvAw$.
+\em{Changing the root} of the tree~$T$ from~$v$ to~$w$ changes its ET-sequence from $vAwBwCv$ to $wBwCvAw$.
If $w$~was a~leaf, the sequence changes from $vAwCv$ to $wCvAw$. If $vw$ was the only edge of~$T$,
the sequence $vw$ becomes $wv$. Note that this works regardless of the possible presence of~$w$ inside~$B$.
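+\para
+On plain Python lists the re-rooting rule becomes a~single splice. This is a~hedged
+sketch: it assumes the closed-walk convention where the sequence starts and ends
+with the root (so the degenerate single-edge case above, where the text uses the
+shorter form $vw$, is not covered), and it runs in linear time, whereas an~ET-tree
+achieves the same effect by $\O(\log n)$ splits and joins.
+
+def change_root(seq, w):
+    # Re-root an Euler-tour sequence at w: vAwBwCv becomes wBwCvAw.
+    # The first occurrence of w is used, which works even if w also
+    # occurs inside B.
+    if seq[0] == w:
+        return list(seq)
+    i = seq.index(w)
+    return seq[i:-1] + seq[:i] + [w]
+
+# Example: change_root(list("vawbwcv"), "w") yields
+# ['w', 'b', 'w', 'c', 'v', 'a', 'w'].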
\::Insert $f$ to~$F_0,\ldots,F_{\ell(f)}$.
\endalgo
-\algn{$\<Replace>(uv,i)$ -- Search for an~replacement edge for~$uv$ at level~$i$}
+\algn{$\<Replace>(uv,i)$ -- Search for a~replacement edge for~$uv$ at level~$i$}
\algo
\algin An~edge~$uv$ to replace and a~level~$i$ such that there is no replacement
at levels greater than~$i$.
\:Enumerate non-tree edges incident with vertices of~$T_1$ and stored in ${\cal E}_i$.
For each edge~$xy$, $x\in T_1$, do:
\::If $y\in T_2$, remove~$xy$ from~${\cal E}_i$ and return it to the caller.
-\::Otherwise increase $\ell(xy)$ by one. This includes deleting~$xy$ from~${\cal E}_i$
- and inserting it to~${\cal E}_{i+1}$.
+\::Otherwise increase $\ell(xy)$ by one.
+ \hfil\break
+ This includes deleting~$xy$ from~${\cal E}_i$ and inserting it to~${\cal E}_{i+1}$.
\:If $i>0$, call $\<Replace>(uv,i-1)$.
\:Otherwise return \<null>.
\algout The replacement edge.
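+\para
+The control flow of \<Replace> can be mirrored by a~hedged Python sketch in which
+the recursion is unrolled into a~loop and the sets ${\cal E}_i$, $T_1$, $T_2$ are
+plain Python sets ($T_1$ taken as the smaller component, which is what lets the
+level increases pay for the scan); in the real structure these are ET-trees and
+the enumeration is not linear as it is here.
+
+from collections import defaultdict
+
+def replace(i, E, level, T1, T2):
+    # E is a defaultdict(set) mapping a level to its non-tree edges,
+    # each edge a frozenset {x, y}; level maps an edge to its level.
+    while i >= 0:
+        for e in [e for e in E[i] if e & T1]:  # edges incident with T1
+            x, y = tuple(e)
+            if (x in T1) != (y in T1):         # one end in T1, one in T2
+                E[i].remove(e)
+                return e                       # the replacement edge
+            E[i].remove(e)                     # both ends in T1:
+            E[i + 1].add(e)                    # raise the edge's level,
+            level[e] = i + 1                   # paying for this scan
+        i -= 1                                 # no replacement on level i
+    return None                                # T1 and T2 stay disconnected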
\<Replace> costs $\O(\log n)$ per call; the rest is paid for by the edge level
increases.
-To bring the complexity of \<Connected> from $\O(\log n)$ down to $\O(\log n/\log\log n)$,
+To bring the complexity of the operation \<Connected> from $\O(\log n)$ down to $\O(\log n/\log\log n)$,
we apply the trick from Example \ref{accel} and store~$F_0$ in an~ET-tree with $a=\max(\lfloor\log n\rfloor,2)$.
This does not hurt the complexity of insertions and deletions, but allows for faster queries.
\qed
The decremental MSF algorithm can be turned into a~fully dynamic one by a~blackbox
reduction whose properties are summarized in the following theorem:
-\thmn{MSF dynamization, Henzinger et al.~\cite{henzinger:twoec}, Holm et al.~\cite{holm:polylog}}
+\thmn{MSF dynamization, Holm et al.~\cite{holm:polylog}}
Suppose that we have a~decremental MSF algorithm with the following properties:
\numlist\ndotted
\:For any $a$,~$b$, it can be initialized on a~graph with~$a$ vertices and~$b$ edges.
sought among the replacement edges found in these structures. The unused replacement edges then have
to be reinserted into the structure.
-The original reduction of Henzinger et al.~handles these reinserts by a~mechanism of batch insertions
+The original reduction of Henzinger et al.~\cite{henzinger:twoec} handles these reinserts by a~mechanism of batch insertions
supported by their decremental structure, which is not available in our case. Holm et al.~have
replaced it by a~system of auxiliary edges inserted at various places in the structure.
We refer to the article \cite{holm:polylog} for details.
isomorphism preserves the relative order of weights, the isomorphism applies to their MST's as well:
\defn
-A~\df{monotone isomorphism} of two weighted graphs $G_1=(V_1,E_1,w_1)$ and
+A~\df{monotone isomorphism} between two weighted graphs $G_1=(V_1,E_1,w_1)$ and
$G_2=(V_2,E_2,w_2)$ is a bijection $\pi: V_1\rightarrow V_2$ such that
for each $u,v\in V_1: uv\in E_1 \Leftrightarrow \pi(u)\pi(v)\in E_2$ and
for each $e,f\in E_1: w_1(e)<w_1(f) \Leftrightarrow w_2(\pi[e]) < w_2(\pi[f])$.
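+\para
+The definition can be checked by brute force. A~hedged Python sketch (its
+representation is an~assumption: edges are frozensets of endpoints, {\tt w1} and
+{\tt w2} map edges to weights, and {\tt pi} is a~dictionary defined on every
+vertex of the first graph):
+
+from itertools import combinations
+
+def is_monotone_isomorphism(pi, w1, w2):
+    # pi must be injective, hence a bijection onto its image.
+    if len(set(pi.values())) != len(pi):
+        return False
+    # pi must map the first edge set exactly onto the second one.
+    image = {frozenset(pi[v] for v in e) for e in w1}
+    if image != set(w2):
+        return False
+    # pi must preserve the relative order of all edge weights.
+    for e, f in combinations(w1, 2):
+        pe = frozenset(pi[v] for v in e)
+        pf = frozenset(pi[v] for v in f)
+        if (w1[e] < w1[f]) != (w2[pe] < w2[pf]):
+            return False
+    return True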
\lemman{MST of isomorphic graphs}\id{mstiso}%
Let~$G_1$ and $G_2$ be two weighted graphs with distinct edge weights and $\pi$
-their monotone isomorphism. Then $\mst(G_2) = \pi[\mst(G_1)]$.
+a~monotone isomorphism between them. Then $\mst(G_2) = \pi[\mst(G_1)]$.
\proof
The isomorphism~$\pi$ maps spanning trees onto spanning trees and it preserves
It remains to show that adding the edges simultaneously does not
produce a cycle. Consider the first iteration of the algorithm where $T$ contains a~cycle~$C$. Without
-loss of generality we can assume that $C=T_1[u_1v_1]\,v_1u_2\,T_2[u_2v_2]\,v_2u_3\,T_3[u_3v_3]\, \ldots \,T_k[u_kv_k]\,v_ku_1$.
+loss of generality we can assume that:
+$$C=T_1[u_1v_1]\,v_1u_2\,T_2[u_2v_2]\,v_2u_3\,T_3[u_3v_3]\, \ldots \,T_k[u_kv_k]\,v_ku_1.$$
Each component $T_i$ has chosen its lightest incident edge~$e_i$ as either the edge $v_iu_{i+1}$
or $v_{i-1}u_i$ (indexing cyclically). Suppose that $e_1=v_1u_2$ (otherwise we reverse the orientation
of the cycle). Then $e_2=v_2u_3$ and $w(e_2)<w(e_1)$ and we can continue in the same way,
From this, we can conclude:
\thm
-Jarn\'\i{}k's algorithm finds the MST of a~given graph in time $\O(m\log n)$.
+Jarn\'\i{}k's algorithm computes the MST of a~given graph in time $\O(m\log n)$.
\rem
We will show several faster implementations in Section \ref{iteralg}.
\qed
\impl
-Except for the initial sorting, which in general takes $\Theta(m\log m)$ time, the only
+Except for the initial sorting, which in general requires $\Theta(m\log m)$ time, the only
other non-trivial operation is the detection of cycles. What we need is a~data structure
for maintaining connected components, which supports insertion of edges and queries
asking whether two given vertices lie in the same component.
This is closely related to the well-known Disjoint Set Union problem:
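+In Python, a~hedged sketch of the standard solution, with union by rank and path
+compression giving the inverse-Ackermann amortized bound, looks as follows:
+
+class DSU:
+    # Union-find: a sequence of m operations on an n-element universe
+    # costs O(m * alpha(n)) amortized.
+    def __init__(self, n):
+        self.parent = list(range(n))
+        self.rank = [0] * n
+
+    def find(self, x):
+        root = x
+        while self.parent[root] != root:
+            root = self.parent[root]
+        while self.parent[x] != root:          # compress the whole path
+            self.parent[x], x = root, self.parent[x]
+        return root
+
+    def union(self, x, y):
+        # Returns False iff x and y were already connected, i.e., iff
+        # the edge xy would close a cycle in Kruskal's algorithm.
+        rx, ry = self.find(x), self.find(y)
+        if rx == ry:
+            return False
+        if self.rank[rx] < self.rank[ry]:
+            rx, ry = ry, rx
+        self.parent[ry] = rx
+        if self.rank[rx] == self.rank[ry]:
+            self.rank[rx] += 1
+        return True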
In the previous algorithm, the role of the mapping~$\pi^{-1}$ is of course played by the edge labels~$\ell$.
\paran{A~lower bound}%
-Finally, we will show a family of graphs where the $\O(m\log n)$ bound on time complexity
+Finally, we will show a family of graphs for which the $\O(m\log n)$ bound on time complexity
is tight. The graphs do not have unique weights, but they are constructed in such a~way that
the algorithm never compares two edges with the same weight. Therefore, when two such
graphs are monotonically isomorphic (see~\ref{mstiso}), the algorithm processes them in the same way.
$\alpha(m,n) \le \alpha(n)+1$.
\proof
+We know that
$A(x,4\lceil m/n\rceil) \ge A(x,4) = A(x-1,A(x,3)) \ge A(x-1,x-1)$, so $A(x,4\lceil m/n\rceil)$
rises above $\log n$ no later than $A(x-1,x-1)$ does.
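+In detail, write $x=\alpha(n)+1$ and assume the definitions
+$\alpha(n)=\min\{x: A(x,x)>\log n\}$ and $\alpha(m,n)=\min\{x: A(x,4\lceil m/n\rceil)>\log n\}$
+(if the exact definitions differ in the inequality signs, the argument adapts
+accordingly); the middle step of the chain above uses $A(x,3)\ge x-1$ together
+with monotonicity of~$A$ in both arguments. Then
+$$A(x,4\lceil m/n\rceil) \ge A(x-1,x-1) = A(\alpha(n),\alpha(n)) > \log n,$$
+so indeed $\alpha(m,n)\le\alpha(n)+1$.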
\qed
Traditionally, computer scientists use a~variety of computational models
as a~formalism in which their algorithms are stated. If we were studying
-NP-completeness, we could safely assume that all the models are equivalent,
+NP-complete\-ness, we could safely assume that all the models are equivalent,
possibly up to polynomial slowdown which is negligible. In our case, the
differences between good and not-so-good algorithms are on a~much smaller
scale. In this chapter, we will replace the usual ``tape measure'' by a~micrometer,
\:Both restrictions at once.
\endlist
-Thorup discusses the usual techniques employed by RAM algorithms in~\cite{thorup:aczero}
+Thorup \cite{thorup:aczero} discusses the usual techniques employed by RAM algorithms
and he shows that they work on both Word-RAM and ${\rm AC}^0$-RAM, but the combination
of the two restrictions is too weak. On the other hand, the intersection of~${\rm AC}^0$
with the instruction set of modern processors is already strong enough (e.g., when we
the \df{actual graphs} in~$\C$ to the generic graphs in~$\cal G$. This gives us the following
theorem:
-\thmn{Batched topological computations, Buchsbaum et al.~\cite{buchsbaum:verify}}\id{topothm}%
+\thmn{Topological computations, Buchsbaum et al.~\cite{buchsbaum:verify}}\id{topothm}%
Suppose that we have a~topological graph computation~$\cal T$ that can be performed in time
$T(k)$ for graphs on $k$~vertices. Then we can run~$\cal T$ on a~collection~$\C$
of labeled graphs on~$k$ vertices in time $\O(\Vert\C\Vert + (k+s)^{k(k+2)}\cdot (T(k)+k^2))$,
of the trie which is compact enough.
\lemma\id{citree}%
-The trie is uniquely determined by the order of the guides~$g_1,\ldots,g_{n-1}$.
+The compressed trie is uniquely determined by the order of the guides~$g_1,\ldots,g_{n-1}$.
\proof
We already know that the letter depths of the trie vertices are exactly
once, so the rank of $\pi$ is $(\pi[1]-1)\cdot (n-1)!$ plus the rank of
$\pi[2\ldots n]$.
-This gives us a~reduction from (un)ranking of permutations on $[n]$ to (un)ranking
+This gives us a~reduction from (un)ranking of permutations on $[n]$ to (un)rank\-ing
of permutations on an $(n-1)$-element set, which suggests a straightforward
algorithm, but unfortunately this set is different from $[n-1]$ and it even
depends on the value of~$\pi[1]$. We could renumber the elements to get $[n-1]$,
\:Return~$\pi$.
\endalgo
-\>We can call $\<Unrank>(j,1,n,[n])$ for the unranking problem on~$[n]$, i.e., to get $L^{-1}(j,[n])$.
+\>We can call $\<Unrank>(j,1,n,[n])$ for the unranking problem on~$[n]$, i.e., to calculate $L^{-1}(j,[n])$.
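+\para
+Collapsing the recursion into a~loop gives a~short Python sketch of both
+directions. It is a~hedged reference version: the unused elements are kept in
+a~sorted list, so each step costs linear time, whereas the point of this section
+is to make every step constant-time on the RAM.
+
+from math import factorial
+
+def rank(pi):
+    # Lexicographic rank: (position of pi[0] among the unused elements)
+    # times (n-1)! plus the rank of the rest, as in the recursion above.
+    elems = sorted(pi)
+    r = 0
+    for i, x in enumerate(pi):
+        j = elems.index(x)
+        r += j * factorial(len(pi) - i - 1)
+        elems.pop(j)
+    return r
+
+def unrank(j, n):
+    # Inverse of rank on the set [n] = {1, ..., n}.
+    elems = list(range(1, n + 1))
+    pi = []
+    for i in range(n, 0, -1):
+        k, j = divmod(j, factorial(i - 1))
+        pi.append(elems.pop(k))
+    return pi
+
+# rank([3, 2, 1]) == 5 and unrank(5, 3) == [3, 2, 1].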
\para
The most time-consuming parts of the above algorithms are of course operations
machine has to be large as well:
\obs
-$\log n! = \Theta(n\log n)$, therefore the word size~$W$ must be~$\Omega(n\log n)$.
+$\log n! = \Theta(n\log n)$, therefore the word size must be~$\Omega(n\log n)$.
\proof
We have $n^n \ge n! \ge \lfloor n/2\rfloor^{\lfloor n/2\rfloor}$, so $n\log n \ge \log n! \ge \lfloor n/2\rfloor\cdot\log \lfloor n/2\rfloor$.
a~set of restrictions which is a~part of the input, then $P=\#P$.
\proof
-We will show that a~polynomial-time ranking algorithm would imply a~polynomial-time
+We will show that a~polynomial-time ranking algorithm would imply a~poly\-no\-mi\-al-time
algorithm for computing the permanent of an~arbitrary zero-one matrix, which
is a~$\#P$-complete problem.