From: Martin Mares Date: Wed, 23 Apr 2008 15:39:52 +0000 (+0200) Subject: Added the Epilogue. X-Git-Tag: printed~41 X-Git-Url: http://mj.ucw.cz/gitweb/?a=commitdiff_plain;h=cec15eb125844e3faa82dbb8e7100e61fbf3b241;p=saga.git Added the Epilogue. --- diff --git a/adv.tex b/adv.tex index 41095c8..a573d97 100644 --- a/adv.tex +++ b/adv.tex @@ -1201,7 +1201,7 @@ compensated by the loss of edges by contraction and $m_\ell + m_r \le m_v$. So t number of edges per level does not decrease and it remains to apply the previous lemma. \qed -\thmn{Average-case complexity of the KKT algorithm} +\thmn{Average-case complexity of the KKT algorithm}\id{kktavg}% The expected time complexity of the KKT algorithm on the RAM is $\O(m)$. \proof diff --git a/appl.tex b/appl.tex index 13455b1..3f03df2 100644 --- a/appl.tex +++ b/appl.tex @@ -10,7 +10,7 @@ Towards the end of our story of the minimum spanning trees, we will now focus ou on various special cases of our problem and also on several related problems that frequently arise in practice. -\paran{Graphs with sorted edges} +\paran{Graphs with sorted edges}\id{sortededges}% When the edges of the given graph are already sorted by their weights, we can use Kruskal's algorithm to find the MST in time $\O(m\timesalpha(n))$ (Theorem \ref{kruskal}). We however can do better: As the minimality of a~spanning tree depends only on the diff --git a/dyn.tex b/dyn.tex index f07103a..a914c80 100644 --- a/dyn.tex +++ b/dyn.tex @@ -503,7 +503,7 @@ we apply the trick from Example \ref{accel} and store~$F_0$ in a~ET-tree with $a This does not hurt the complexity of insertions and deletions, but allows for faster queries. \qed -\rem +\rem\id{dclower}% An~$\Omega(\log n/\log\log n)$ lower bound for the amortized complexity of the dynamic connectivity problem has been proven by Henzinger and Fredman \cite{henzinger:lowerbounds} in the cell probe model with $\O(\log n)$-bit words. 
Thorup has answered by a~faster algorithm @@ -643,7 +643,7 @@ replaced it by a~system of auxiliary edges inserted at various places in the str We refer to the article \cite{holm:polylog} for details. \qed -\corn{Fully dynamic MSF} +\corn{Fully dynamic MSF}\id{dynmsfcorr}% There is a~fully dynamic MSF algorithm that works in time $\O(\log^4 n)$ amortized per operation for graphs on $n$~vertices. diff --git a/epilog.tex b/epilog.tex index 0c1be4a..45111d7 100644 --- a/epilog.tex +++ b/epilog.tex @@ -4,4 +4,47 @@ \chapter{Epilogue} +We have seen the many facets of the minimum spanning tree problem. It has +turned out that while the major question of the existence of a~linear-time +MST algorithm is still open, backing off a~little bit in an~almost arbitrary +direction leads to a~linear solution. This includes classes of graphs with edge +density at least $\lambda_k(n)$ for an~arbitrary fixed~$k$ (Corollary \ref{lambdacor}), +minor-closed classes (Theorem \ref{mstmcc}), and graphs whose edge weights are +integers (Theorem \ref{intmst}). Using randomness also helps (Theorem \ref{kktavg}), +as does having the edges pre-sorted (Example \ref{sortededges}). + +If we do not know anything about the structure of the graph and we are only allowed +to compare the edge weights, we can use Pettie's MST algorithm (Theorem +\ref{optthm}). Its time complexity is guaranteed to be asymptotically optimal, +but we do not know what it really is --- the best we have is +an~$\O(m\timesalpha(m,n))$ upper bound and the trivial $\Omega(m)$ lower bound. + +One thing, however, we know for sure. The algorithm runs on the weakest of our +computational models ---the Pointer Machine--- and its complexity is linear +in the minimum number of comparisons needed to decide the problem. We therefore +need not worry about the details of computational models, which have contributed +so much to the linear-time algorithms for our special cases. 
Instead, it is sufficient +to study the complexity of MST decision trees. However, aside from the properties +mentioned in Section \ref{dtsect}, not much is known about these trees so far. + +As for the dynamic algorithms, we have an~algorithm which maintains the minimum +spanning forest within poly-logarithmic time per operation (Corollary \ref{dynmsfcorr}). +The optimum complexity is once again open --- the known lower bounds are very far +from the upper ones. +The known algorithms run on the Pointer Machine and we do not know if using a~stronger +model can help. + +For the ranking problems, the situation is completely different. We have shown +linear-time algorithms for three important problems of this kind. The techniques +which we have used seem to be applicable to other ranking problems. On the other +hand, ranking of general restricted permutations has turned out to balance on the +verge of $\#P$-completeness (Theorem \ref{pcomplete}). All our algorithms run +on the RAM model, which seems to be the only sensible choice for problems of +an~inherently arithmetic nature. + +Aside from the concrete problems we have solved, we have also built several algorithmic +techniques of general interest: the unification procedures using pointer-based +bucket sorting (Section \ref{bucketsort}) and the vector computations on the RAM +(Section \ref{bitsect}). We hope that they will be useful in many other algorithms. + \endpart diff --git a/opt.tex b/opt.tex index 6d7e742..373d3e9 100644 --- a/opt.tex +++ b/opt.tex @@ -769,7 +769,7 @@ and additionally $\O(n)$ on identifying the live vertices. %-------------------------------------------------------------------------------- -\section{Decision trees} +\section{Decision trees}\id{dtsect}% Pettie's and Ramachandran's algorithm combines the idea of robust partitioning with optimal decision trees constructed by brute force for very small subgraphs. 
In this section, we will @@ -1127,7 +1127,7 @@ having the optimal algorithm at hand, does not take care about the low-level det bounds the number of comparisons. Using any of these results, we can prove an~Ackermannian upper bound on the optimal algorithm: -\thmn{Upper bound on complexity of the Optimal algorithm} +\thmn{Upper bound on complexity of the Optimal algorithm}\id{optthm}% The time complexity of the Optimal MST algorithm is $\O(m\timesalpha(m,n))$. \proof @@ -1138,8 +1138,8 @@ or Pettie \cite{pettie:ackermann}. Similarly to the Iterated Jarn\'\i{}k's algorithm, this bound is actually linear for classes of graphs that do not have density extremely close to constant: -\cor -The Optimal MST algorithm runs in linear time whenever $m\ge n\cdot a(k,n)$ for any fixed~$k$. +\cor\id{lambdacor}% +The Optimal MST algorithm runs in linear time whenever $m\ge n\cdot \lambda_k(n)$ for any fixed~$k$. \proof Combine the previous theorem with Lemma \ref{alphaconst}. diff --git a/rank.tex b/rank.tex index 014d56d..27f93ab 100644 --- a/rank.tex +++ b/rank.tex @@ -443,7 +443,7 @@ for zero-one matrices (as proven by Valiant in \cite{valiant:permanent}). As a~ranking function for a~set of~matchings can be used to count all such matchings, we obtain the following theorem: -\thm +\thm\id{pcomplete}% If there is a~polynomial-time algorithm for lexicographic ranking of permutations with a~set of restrictions which is a~part of the input, then $P=\#P$.
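An editorial aside, not part of the patch above: the "Graphs with sorted edges" paragraph in appl.tex observes that when the edges arrive already sorted by weight, Kruskal's algorithm needs only the union-find work, i.e. $\O(m\timesalpha(n))$ time. A minimal sketch of that situation (illustrative Python; the function name and edge format are ours, not the thesis's):

```python
def mst_from_sorted_edges(n, edges):
    """Kruskal's algorithm on an edge list ALREADY sorted by weight.
    edges: iterable of (weight, u, v) with vertices 0..n-1.
    Returns the list of MST edges; cost is O(m * alpha(n)) union-find work,
    since the usual O(m log n) sorting step is not needed."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):                        # root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in edges:               # weights are non-decreasing
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: edge is in the MST
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru             # union by rank
            rank[ru] += rank[ru] == rank[rv]
            tree.append((w, u, v))
            if len(tree) == n - 1:      # spanning tree complete
                break
    return tree
```

Sorting dominates the classical analysis of Kruskal's algorithm, which is why pre-sorted input is one of the "back off a little and get linear-ish" cases the epilogue lists.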
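Similarly, the epilogue's mention of pointer-based bucket sorting (Section \ref{bucketsort}) refers to stable bucketing with linked lists, which also underlies lexicographic sorting of tuples by repeated passes. The following is only a flavour-of-the-technique sketch in Python (Python lists stand in for the pointer-linked buckets; both function names are ours), not the thesis's actual unification procedure:

```python
def bucket_pass(items, key, nbuckets):
    """Stable sort of items by key(item) in {0, ..., nbuckets-1}.
    One pass costs O(m + n): distribute into buckets, then concatenate."""
    buckets = [[] for _ in range(nbuckets)]   # lists play the role of pointer chains
    for it in items:
        buckets[key(it)].append(it)           # appending preserves input order
    return [it for b in buckets for it in b]

def lex_sort_tuples(tuples, nbuckets):
    """Lexicographic sort of equal-length tuples: bucket by the least
    significant position first; stability of each pass makes it work."""
    k = len(tuples[0]) if tuples else 0
    for pos in reversed(range(k)):
        tuples = bucket_pass(tuples, lambda t: t[pos], nbuckets)
    return tuples
```

Once equal tuples are adjacent in the sorted order, identifying (unifying) them takes a single linear scan, which is the role this kind of bucketing plays in the ranking algorithms.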