For every edge~$e=uv$, we consider the set $Q_e$ of all query paths containing~$e$.
The vertex of a~path that is closer to the root will be called the \df{top} of the path,
the other vertex its \df{bottom.}
-We define arrays $T_e$ and~$P_e$ as follows: $T_e$ contains
+We define arrays $T_e$ and~$P_e$ as follows: $T_e$~contains
the tops of the paths in~$Q_e$ in order of increasing depth (we
will call them \df{active tops} and each of them will be stored exactly once). For
each active top~$t=T_e[i]$, we define $P_e[i]$ as the peak of the path $T[v,t]$.
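+\para
+As a~small example (with hypothetical query paths): suppose that $Q_e$ consists of
+three paths, two of them sharing a~top~$t$ at depth~2 and the third one having its
+top~$t'$ at depth~5. Then $T_e$ contains $t$ followed by~$t'$ (the shared top is
+stored only once) and $P_e$ holds the peaks of the paths $T[v,t]$ and~$T[v,t']$,
+respectively.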
comparisons. Since we (as always) assume that~$G$ is connected, $\O(m+n)=\O(m)$.
\qed
-\paran{Applications}%
+\paran{Other applications}%
The problem of computing path maxima or minima in a~weighted tree has several other interesting
applications. One of them is computing minimum cuts separating given pairs of vertices in a~given
weighted undirected graph~$G$. We construct a~Gomory--Hu tree~$T$ for the graph as described in \cite{gomoryhu}
\qed
\lemma\id{verfh}%
-The procedure \<FindPeaks> processes an~edge~$e$ in time $\O(\log \vert T_e\vert + q_e)$,
+\<FindPeaks> processes an~edge~$e$ in time $\O(\log \vert T_e\vert + q_e)$,
where $q_e$~is the number of query paths having~$e$ as their bottom edge.
\proof
\qeditem
\endlist
-\>We are now ready to combine these steps and get the following theorem:
+\>We now have all the necessary ingredients to prove the following theorem
+and thus conclude this section:
\thmn{Verification of MST on the RAM}\id{ramverify}%
There is a~RAM algorithm which for every weighted graph~$G$ and its spanning tree~$T$
of the plain contractive algorithm, while the average case is linear.
\lemma
-For every subproblem~$G_v$, the KKT algorithm runs in time $\O(m_v+n_v)$ plus the time
-spent on the recursive calls.
+For every subproblem~$G_v$, the KKT algorithm spends $\O(m_v+n_v)$ time plus the cost
+of the recursive calls.
\proof
We know from Lemma \ref{contiter} that each Bor\o{u}vka step takes time $\O(m_v+n_v)$.\foot{We
-add $n_v$ as the graph could be disconnected.}
+need to add $n_v$, because the graph could be disconnected.}
The selection of the edges of~$H_v$ is straightforward.
Finding the $F_v$-heavy edges is not, but we have already shown in Theorem \ref{ramverify}
that linear time is sufficient on the RAM.
$\widetilde\O(f) = \O(f\cdot\log^{\O(1)} f)$ and $\poly(n)=n^{\O(1)}$.}
queries to a~data structure containing the points. The data structure is expected
to answer orthogonal range queries and cone approximate nearest neighbor queries.
-There is also a~$\widetilde\O(n\cdot \poly(1/\varepsilon))$ time approximation
+There is also an~$\widetilde\O(n\cdot \poly(1/\varepsilon))$ time approximation
algorithm for the MST weight in arbitrary metric spaces by Czumaj and Sohler \cite{czumaj:metric}.
(This is still sub-linear since the corresponding graph has roughly $n^2$ edges.)
of~$F$ that contains both endpoints of the edge~$e$. When we remove~$f$ from~$F$, this tree falls
apart into two components $T_1$ and~$T_2$. The edge~$f$ was the lightest in the cut~$\delta_G(T_1)$
and $e$~is lighter than~$f$, so $e$~is the lightest in~$\delta_{G'}(T_1)$ and hence $e\in F'$.
+\looseness=1 %%HACK%%
A~\<Delete> of an~edge that is not contained in~$F$ does not change~$F$. When we delete
an~MSF edge, we have to reconnect~$F$ by choosing the lightest edge of the cut separating
(see for example \cite{clrs}) that the depth of a~tree with $n$~leaves is always $\O(\log_a n)$
and that all basic operations, including insertion, deletion, search, and splitting and joining of trees,
run in time $\O(b\log_a n)$ in the worst case.
+\looseness=-1 %%HACK%%
We will use the ET-trees to maintain a~spanning forest of the dynamic graph. The auxiliary data of
each vertex will hold a~list of edges incident with the given vertex that do not lie in the
\corn{Weighted ET-trees}\id{wtet}%
In weighted ET-trees, the operations \<InsertNontree> and \<DeleteNontree> have time
-complexity $\O(a\log_a n)$. All other operations take the same time as in Theorem
+complexity $\O(a\log_a n)$. All other operations take the same time as indicated by Theorem
\ref{etthm}.
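+\para
+(Intuitively: such an~operation modifies the auxiliary data of a~single leaf, so the
+values aggregated over subtrees have to be recomputed in all $\O(\log_a n)$ ancestors
+of that leaf, and recomputing one ancestor takes time linear in the number of its
+sons, which is $\O(a)$. This gives the $\O(a\log_a n)$ bound.)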
-
%--------------------------------------------------------------------------------
\section{Dynamic connectivity}
increases.
To bring the complexity of the operation \<Connected> from $\O(\log n)$ down to $\O(\log n/\log\log n)$,
-we apply the trick from Example \ref{accel} and store~$F_0$ in a~ET-tree with $a=\max(\lfloor\log n\rfloor,2)$.
+we apply the trick from Example \ref{accel} and store~$F_0$ in an~ET-tree with $a=\max(\lfloor\log n\rfloor,2)$.
This does not hurt the complexity of insertions and deletions, but allows for faster queries.
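+To see the gain in a~quick calculation: for $a=\lfloor\log n\rfloor$ we have
+$$\log_a n = {\log n\over\log a} = \O\left({\log n\over\log\log n}\right),$$
+and a~\<Connected> query only climbs from a~leaf to the root of its ET-tree, so it
+takes $\O(\log n/\log\log n)$ steps.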
\qed
The best exchanges in~$T_1$ involving $t_1,\ldots,t_{K-1}$ produce~$K-1$ spanning trees
of increasing weights. Any exchange involving $t_K,\ldots,t_n$ produces a~tree
which is at least as heavy as all those trees. (The Monotone exchange lemma guarantees
-that the gain of such exchanges need not be reverted by any later exchanges.)
+that the gain of such exchanges need not be reverted later.)
\qed
\lemma\id{gainb}%
Ackermann's function is rarely defined the same way twice. We would not presume to buck
such a~well-established precedent. Here is a~slight variant.''}
We will use the definition by double recursion given by Tarjan \cite{tarjan:setunion},
-which is predominant in the literature on graph algorithms:
+which is predominant in the literature on graph algorithms.
\defn\id{ackerdef}%
\df{Ackermann's function} $A(x,y)$ is a~function on non-negative integers defined as follows:
&= A(2,A(2,4)) = 2\tower(2\tower 4) = 2\tower 65536. \cr
}$$
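+\para
+(Indeed, the row $x=2$ grows as a~tower of twos: $A(2,y)=2\tower y$, so for example
+$A(2,4)=2^{2^{2^2}}=65536$, which is exactly the value used in the evaluation above.)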
-\para
-Three functions related to the inverse of the function~$A$ are usually considered:
+\paran{Inverses}%
+As is common for functions of multiple parameters, there is no single function that
+could claim the title of the one true inverse of Ackermann's function.
+The following three functions related to the inverse of the function~$A$ are often considered:
\defn\id{ackerinv}%
The \df{$i$-th row inverse} $\lambda_i(n)$ of Ackermann's function is defined by:
their values (the values are however never lowered). This allows for
a~trade-off between accuracy and speed, controlled by a~parameter~$\varepsilon$.
The heap operations take $\O(\log(1/\varepsilon))$ amortized time and at every
-moment at most~$\varepsilon n$ elements of the~$n$ elements inserted can be
+moment at most~$\varepsilon n$ of the $n$~elements inserted can be
corrupted.
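+\para
+For example, with the fixed choice $\varepsilon=1/2$, every heap operation takes
+$\O(\log 2)=\O(1)$ amortized time, at the price of having up to a~half of the
+inserted elements corrupted at any moment.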
\defnn{Soft heap interface}%
son's list to its parent. Otherwise, we exchange the sons and move the list from the
new left son to the parent. This way we obey the heap order and at the same time we keep
the white left son free of items.
+\looseness=1 %%HACK%%
Occasionally, we repeat this process once again and we concatenate the resulting lists
(we append the latter list to the former, using the smaller of the two \<ckey>s). This
It will turn out that we have enough time to always walk the leftmost path
completely, so no explicit counters are needed.
-Let us translate these ideas to real (pseudo)code:
+%%HACK%%
+\>Let us translate these ideas into real (pseudo)code:
\algn{Deleting the minimum item from a~soft heap}
\algo
Before we estimate the time spent on deletions, we analyse the refills.
\lemma
-Every invocation of the \<Refill> procedure takes time $\O(1)$ amortized.
+Every invocation of \<Refill> takes time $\O(1)$ amortized.
\proof
When \<Refill> is called from the \<DeleteMin> operation, it recurses on a~subtree of the
that have created them.
\qed
-It remains to take care of the calculation with ranks:
+%%HACK%%
+\>It remains to take care of the calculation with ranks:
\lemma\id{shyards}%
Every manipulation with ranks performed by the soft heap operations can be
and the sum of these differences telescopes, again to the real cost of the meld.
\qed
-Now we can put the bits together and laurel our effort with the following theorem:
+We can now put the bits together and laurel our effort with the following theorem:
\thmn{Performance of soft heaps, Chazelle \cite{chazelle:softheap}}\id{softheap}%
A~soft heap with error rate~$\varepsilon$ ($0<\varepsilon\le 1/2$) processes
When $c$ (the vertex to which we have contracted~$C$) is outside this cycle, we are done.
Otherwise we observe that the edges $e,f$ adjacent to~$c$ on this cycle cannot be corrupted
(they would be in~$R^C$ otherwise, which is impossible). By contractibility of~$C$ there exists
-a~path~$P$ in~$C\crpt (R\cap C)$ such that all edges of~$P$ are lighter than $e$ or~$f$ and hence
+a~path~$P$ in~$C\crpt (R\cap C)$ such that all edges of~$P$ are lighter than $e$~or~$f$ and hence
also than~$g$. The weights of the edges of~$P$ in the original graph~$G$ cannot be higher than
in~$G\crpt R$, so the path~$P$ is lighter than~$g$ even in~$G$ and we can again perform the
trick with expanding the vertex~$c$ to~$P$ in the cycle~$C$. Hence $g\not\in\mst(G)$.
union of these MSF's and add the corrupted edges. According to the previous lemma, this does not produce
the MSF of~$G$, but a~sparser graph containing it, on which we can continue.
-We can formulate the exact partitioning algorithm and its properties as follows:
+%%HACK%%
+\>We can formulate the exact partitioning algorithm and its properties as follows:
\algn{Partition a~graph to a~collection of contractible clusters}\id{partition}%
\algo
through all possible permutations of edges, each time calculating the MSF using any
of the known algorithms and comparing it with the result given by the decision tree.
The number of permutations does not exceed $(n^2)! \le (n^2)^{n^2} \le n^{2n^2} \le 2^{n^3}$
-and each permutation can be checked in time $\O(\poly(n))$.
+and each one can be checked in time $\O(\poly(n))$.
On the Pointer Machine, trees and permutations can certainly be enumerated in time
$\O(\poly(n))$ per object. The time complexity of the whole algorithm is therefore
rank of the lexicographically maximum such permutation.
It therefore remains to show that we can find the lexicographically maximum
permutation permitted by~$G$ in polynomial time.
+\looseness=1 %%HACK%%
We can determine $\pi[1]$ by trying all the possible values permitted by~$G$
in decreasing order and stopping as soon as we find~$\pi[1]$ which can be
by deleting vertices), ranking is easy as well. The key will once again be
a~recursive structure, similar to the one we have seen in the case of plain
permutations in \ref{permrec}.
+\looseness=1 %%HACK%%
\nota\id{restnota}%
As we will work with permutations on different sets simultaneously, we have
This translates to the following counterparts of algorithms \ref{rankalg}
and \ref{unrankalg}:
+\goodbreak %%HACK%%
\alg\id{rrankalg}%
$\<Rank>(\pi,i,n,A,M)$: Compute the lexicographic rank of a~permutation $\pi[i\ldots n]\in{\cal P}_{A,M}$.
and $\O(n^2\cdot t(n))$ by the computations of the~$N_0$'s.
\qed
-\rem
-In cases where the efficient evaluation of the permanent is out of our reach,
+\paran{Approximation}%
+In cases where efficient evaluation of the permanent is out of our reach,
we can consider using the fully polynomial randomized approximation scheme
for the permanent described by Jerrum, Sinclair and Vigoda \cite{jerrum:permanent}.
-Then we get an~approximation scheme for the ranks.
+For every $\varepsilon>0$, their randomized algorithm
+approximates the value of the permanent of an~$n\times n$ matrix with non-negative
+entries. The output is within relative error~$\varepsilon$ from the correct value with
+probability at least~$1/2$ and the algorithm runs in time polynomial in~$n$ and~$1/\varepsilon$.
+From this, we can get a~similar approximation scheme for the ranks.
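+\para
+(A~tiny, hypothetical example of the quantity being approximated: the $0/1$ matrix of
+a~restriction with rows $(1\ 1)$ and $(0\ 1)$ permits only the identity permutation
+of two elements, and indeed its permanent equals $1\cdot 1+1\cdot 0=1$.)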
-\rem
+\paran{Special restriction graphs}%
There are also deterministic algorithms for computing the number of perfect matchings
in various special graph families (which imply polynomial-time ranking algorithms for
the corresponding families of permutations). If the graph is planar, we can