This does not hurt the complexity of insertions and deletions, but allows for faster queries.
\qed
-\rem
+\rem\id{dclower}%
An~$\Omega(\log n/\log\log n)$ lower bound for the amortized complexity of the dynamic connectivity
problem has been proven by Henzinger and Fredman \cite{henzinger:lowerbounds} in the cell
probe model with $\O(\log n)$-bit words. Thorup has answered with a~faster algorithm.
We refer to the article \cite{holm:polylog} for details.
\qed
-\corn{Fully dynamic MSF}
+\corn{Fully dynamic MSF}\id{dynmsfcorr}%
There is a~fully dynamic MSF algorithm that works in time $\O(\log^4 n)$ amortized
per operation for graphs on $n$~vertices.
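
The fully dynamic algorithm behind this corollary is too intricate to reproduce here, but the cycle rule it is built on is easy to illustrate. The following Python sketch (all names hypothetical) maintains a minimum spanning forest under edge *insertions only*, spending $\O(n)$ time per operation rather than the poly-logarithmic bound above; supporting deletions equally fast is exactly what the sophisticated machinery is needed for.

```python
class NaiveDynamicMSF:
    """Minimum spanning forest under edge insertions only.

    Each insertion costs O(n) -- far from the poly-logarithmic amortized
    bound of the corollary, but it demonstrates the cycle rule that the
    fast fully dynamic algorithms also rely on.
    """

    def __init__(self):
        self.adj = {}                     # vertex -> {neighbour: weight}

    def _path(self, u, v):
        """Tree path from u to v as a list of edges, or None if disconnected."""
        stack, parent = [u], {u: None}
        while stack:
            x = stack.pop()
            if x == v:
                break
            for y in self.adj.get(x, {}):
                if y not in parent:
                    parent[y] = x
                    stack.append(y)
        if v not in parent:
            return None
        path, x = [], v
        while parent[x] is not None:
            path.append((parent[x], x))
            x = parent[x]
        return path

    def insert(self, u, v, w):
        """Insert edge (u, v) of weight w; keep the forest minimum."""
        self.adj.setdefault(u, {})
        self.adj.setdefault(v, {})
        path = self._path(u, v)
        if path is None:                  # different components: just link them
            self.adj[u][v] = self.adj[v][u] = w
        elif path:                        # cycle rule: evict the heaviest cycle edge
            a, b = max(path, key=lambda e: self.adj[e[0]][e[1]])
            if self.adj[a][b] > w:
                del self.adj[a][b], self.adj[b][a]
                self.adj[u][v] = self.adj[v][u] = w

    def weight(self):
        return sum(self.adj[u][v] for u in self.adj for v in self.adj[u]) // 2
```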
\chapter{Epilogue}
+We have seen the many facets of the minimum spanning tree problem. It has
+turned out that while the major question of the existence of a~linear-time
+MST algorithm is still open, backing off a~little bit in an~almost arbitrary
+direction leads to a~linear solution. This includes classes of graphs with edge
+density at least $\lambda_k(n)$ for an~arbitrary fixed~$k$ (Corollary \ref{lambdacor}),
+minor-closed classes (Theorem \ref{mstmcc}), and graphs whose edge weights are
+integers (Theorem \ref{intmst}). Using randomness also helps (Theorem \ref{kktavg}),
+as does having the edges pre-sorted (Example \ref{sortededges}).
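
In the pre-sorted case, for instance, the sorting phase of Kruskal's algorithm disappears and only the union-find structure remains. A minimal sketch (hypothetical function name, Python for concreteness):

```python
def kruskal_presorted(n, sorted_edges):
    """Kruskal's algorithm when the edges come pre-sorted by weight.

    With the O(m log m) sort gone, the cost is dominated by union-find:
    O(m * alpha(m, n)), i.e. linear for all practical purposes.
    """
    parent = list(range(n))
    rank = [0] * n

    def find(x):                          # path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for u, v, w in sorted_edges:          # non-decreasing weight assumed
        ru, rv = find(u), find(v)
        if ru != rv:
            if rank[ru] < rank[rv]:       # union by rank
                ru, rv = rv, ru
            parent[rv] = ru
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            mst.append((u, v, w))
    return mst
```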
+
+If we do not know anything about the structure of the graph and we are only allowed
+to compare the edge weights, we can use Pettie's MST algorithm (Theorem
+\ref{optthm}). Its time complexity is guaranteed to be asymptotically optimal,
+but we do not know what it really is --- the best we have is
+an~$\O(m\timesalpha(m,n))$ upper bound and the trivial $\Omega(m)$ lower bound.
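
To get a feeling for how narrow this gap is, recall how slowly the inverse Ackermann function grows. The sketch below uses the classical Ackermann–Péter variant of the function, which differs in details from the definition used elsewhere in this text but exhibits the same explosive growth; `alpha` here is a simplified single-parameter stand-in for $\alpha(m,n)$.

```python
def ackermann(i, j):
    """Ackermann-Peter function: A(0,j) = j+1, A(i,0) = A(i-1,1),
    A(i,j) = A(i-1, A(i,j-1)).  Only safe to call for tiny arguments."""
    if i == 0:
        return j + 1
    if j == 0:
        return ackermann(i - 1, 1)
    return ackermann(i - 1, ackermann(i, j - 1))

def alpha(n):
    """Smallest i with A(i,1) >= n, a single-parameter cousin of alpha(m,n).

    A(4,1) = 65533 and A(5,1) is a tower of exponentials, so the answer
    never exceeds 5 for any input that fits into a physical computer.
    (We hard-code the last two levels: evaluating A(4,1) naively would
    take billions of recursive calls.)"""
    for i in range(4):
        if ackermann(i, 1) >= n:
            return i
    return 4 if n <= 65533 else 5
```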
+
+One thing, however, we know for sure. The algorithm runs on the weakest of our
+computational models --- the Pointer Machine --- and its complexity is linear
+in the minimum number of comparisons needed to decide the problem. We therefore
+need not worry about the details of computational models, which have contributed
+so much to the linear-time algorithms for our special cases. Instead, it is sufficient
+to study the complexity of MST decision trees. However, aside from the properties
+mentioned in Section \ref{dtsect}, not much is known about these trees so far.
+
+As for the dynamic algorithms, we have an~algorithm which maintains the minimum
+spanning forest within poly-logarithmic time per operation (Corollary \ref{dynmsfcorr}).
+The optimum complexity is once again undecided --- the known lower bounds are very far
+from the upper ones.
+The known algorithms run on the Pointer Machine and we do not know if using a~stronger
+model can help.
+
+For the ranking problems, the situation is completely different. We have shown
+linear-time algorithms for three important problems of this kind. The techniques
+we have used seem to be applicable to other ranking problems. On the other
+hand, ranking of general restricted permutations has turned out to balance on the
+verge of $\#P$-completeness (Theorem \ref{pcomplete}). All our algorithms run
+on the RAM model, which seems to be the only sensible choice for problems of
+inherently arithmetic nature.
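
As an illustration of the flavour of these ranking problems, here is the textbook quadratic-time ranking of permutations via the factorial number system; the linear-time algorithm replaces the inner counting step by RAM vector tricks. All names are illustrative:

```python
def rank(perm):
    """Lexicographic rank of a permutation of 0..n-1.

    This naive version costs O(n^2); the fast algorithm replaces the
    inner count of smaller elements by a RAM data structure."""
    n, r = len(perm), 0
    for i in range(n):
        smaller = sum(1 for j in range(i + 1, n) if perm[j] < perm[i])
        r = r * (n - i) + smaller         # Horner evaluation in factorial base
    return r

def unrank(r, n):
    """The permutation of 0..n-1 with lexicographic rank r."""
    elems, perm = list(range(n)), []
    fact = 1
    for k in range(1, n):                 # fact = (n-1)!
        fact *= k
    for i in range(n - 1, -1, -1):
        d, r = divmod(r, fact)            # d-th smallest unused element comes next
        perm.append(elems.pop(d))
        if i:
            fact //= i
    return perm
```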
+
+Aside from the concrete problems we have solved, we have also built several algorithmic
+techniques of general interest: the unification procedures using pointer-based
+bucket sorting (Section \ref{bucketsort}) and the vector computations on the RAM
+(Section \ref{bitsect}). We hope that they will be useful in many other algorithms.
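
A toy version of the bucket-sorting technique, assuming equal-length integer vectors: each pass remembers which buckets it touched, so it never scans the whole alphabet. The full pointer-machine version also avoids the small sort of the touched indices by one global pre-sort of position-symbol pairs, which we skip here for brevity.

```python
def bucket_sort_vectors(vectors, alphabet_size):
    """Sort equal-length integer vectors lexicographically by stable
    bucket sort on each position, last position first (LSD radix sort).

    Each pass touches only the buckets that actually receive an element,
    so it costs O(#vectors) plus a small sort of the touched indices.
    """
    if not vectors:
        return []
    buckets = [[] for _ in range(alphabet_size)]
    order = list(vectors)
    for pos in range(len(vectors[0]) - 1, -1, -1):
        touched = []
        for vec in order:                 # stable: keeps previous order in buckets
            if not buckets[vec[pos]]:
                touched.append(vec[pos])
            buckets[vec[pos]].append(vec)
        order = []
        for c in sorted(touched):         # only non-empty buckets are visited
            order.extend(buckets[c])
            buckets[c] = []
    return order
```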
+
\endpart
%--------------------------------------------------------------------------------
-\section{Decision trees}
+\section{Decision trees}\id{dtsect}%
Pettie's and Ramachandran's algorithm combines the idea of robust partitioning with optimal decision
trees constructed by brute force for very small subgraphs. In this section, we will
study these decision trees and prove several results that bound the number of
comparisons they make. Using any of these results, we can prove an~Ackermannian
upper bound on the optimal algorithm:
-\thmn{Upper bound on complexity of the Optimal algorithm}
+\thmn{Upper bound on complexity of the Optimal algorithm}\id{optthm}%
The time complexity of the Optimal MST algorithm is $\O(m\timesalpha(m,n))$.
\proof
As with the Iterated Jarn\'\i{}k's algorithm, this bound is actually linear for classes of graphs
whose density is not extremely close to constant:
-\cor
-The Optimal MST algorithm runs in linear time whenever $m\ge n\cdot a(k,n)$ for any fixed~$k$.
+\cor\id{lambdacor}%
+The Optimal MST algorithm runs in linear time whenever $m\ge n\cdot \lambda_k(n)$ for any fixed~$k$.
\proof
Combine the previous theorem with Lemma \ref{alphaconst}.