\usepackage[latin2]{inputenc}
\title[Graph Algorithms]{Graph Algorithms\\Spanning Trees and Ranking}
\author[Martin Mare\v{s}]{Martin Mare\v{s}\\\texttt{mares@kam.mff.cuni.cz}}
\institute{Department of Applied Mathematics\\MFF UK Praha}
\setbeamertemplate{navigation symbols}{}
\setbeamerfont{title page}{family=\rmfamily}
\def\[#1]{~{\color{violet} [#1]}}
\begin{frame}{The Minimum Spanning Tree Problem}
{\bf 1. Minimum Spanning Tree Problem:}
\item Given a~weighted undirected graph,\\
what is its lightest spanning tree?
\item In fact, a~linear order on edges is sufficient.
\item Efficient solutions are very old \[Bor\r{u}vka 1926]
\item A~long progression of faster and faster algorithms.
\item Currently very close to linear time, but still not there.
\begin{frame}{The Ranking Problems}
{\bf 2. Ranking of Combinatorial Structures:}
\item We are given a~set~$C$ of objects with a~linear order $\prec$.
\item {\bf Ranking function $R_\prec(x)$:} how many objects precede~$x$?
\item {\bf Unranking function $R^{-1}_\prec(i)$:} what is the $i$-th object?
$C=\{0,1\}^n$ with lexicographic order
$R$ = conversion from binary\\
$R^{-1}$ = conversion to binary
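As a quick sanity check of this toy example (a Python sketch of my own, not part of the slides): ranking and unranking bitstrings under lexicographic order really is just base-2 conversion.

```python
def rank(bitstring):
    # Lexicographic rank of a fixed-length 0/1 string
    # = number of strings preceding it = its value as a binary number.
    return int(bitstring, 2)

def unrank(i, n):
    # The i-th (0-based) bitstring of length n in lexicographic order,
    # i.e., i written in binary and padded to n digits.
    return format(i, "0" + str(n) + "b")
```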
\begin{example}[a~real one]
$C=$ set of all permutations on $\{1,\ldots,n\}$
How to compute the (un)ranking function efficiently?
For permutations, an $\O(n\log n)$ algorithm was known \[folklore].
We will show how to do that in $\O(n)$.
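For context, a hedged sketch of the folklore approach (my own Python illustration, written as a naive $\O(n^2)$ loop; a balanced search tree over the unused elements brings it to $\O(n\log n)$): the rank in the factorial number system counts, for each position, how many unused smaller elements remain.

```python
from math import factorial

def rank_perm(p):
    # Lexicographic rank of a permutation p of {1, ..., n}.
    # At position i, each still-unused element smaller than p[i]
    # accounts for (n-1-i)! permutations preceding p.
    n = len(p)
    r = 0
    used = [False] * (n + 1)
    for i, x in enumerate(p):
        smaller = sum(1 for y in range(1, x) if not used[y])
        r += smaller * factorial(n - 1 - i)
        used[x] = True
    return r
```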
\begin{frame}{Models of computation: RAM}
As we approach linear time, we must specify the model.
{\bf 1. The Random Access Machine (RAM):}
\item Works with integers
\item Memory: an array of integers indexed by integers
Many variants exist; we will use the {\bf Word-RAM:}
\item Machine words of $W$ bits
\item The ``C operations'': arithmetic, bitwise logical operations
\item We know that $W\ge\log_2 \vert \hbox{input} \vert$
\begin{frame}{Models of computation: PM}
{\bf 2. The Pointer Machine (PM):}
\item Memory cells accessed via pointers
\item Each cell contains $\O(1)$ pointers and $\O(1)$ symbols
\item Operates only on pointers and symbols
\begin{beamerboxesrounded}[upper=block title example,shadow=true]{Key differences}
\item PM has no arrays; we can emulate them with $\O(\log n)$ overhead
\item PM has no arithmetic
\end{beamerboxesrounded}
We can emulate PM on RAM with constant slowdown.
Emulation of RAM on PM is more expensive.
\begin{frame}{PM Techniques}
{\bf Bucket Sorting} does not need arrays.
Interesting consequences:
\item Flattening of multigraphs in $\O(m+n)$
\item Unification of sequences in $\O(n+\sum_i\ell_i+\vert\Sigma\vert)$
\item (Sub)tree isomorphism in $\O(n)$, simplified \[M. 2008]
\item Batched graph computations \[Buchsbaum et al.~1998]
\begin{frame}{RAM Techniques}
We can use RAM as a vector machine:
\def\sep{\;{\color{brown}0}}
\def\seq{\;{\color{brown}1}}
\def\sez{\;{\color{red}0}}
We can encode the vector $(1,5,3,0)$ with 3-bit fields as:
\sep001\sep101\sep011\sep000
And then search for 3 by:
&\seq001\seq101\seq011\seq000 \\
{\sc xor} &\sep011\sep011\sep011\sep011 \\
&\seq010\seq110\seq000\seq011 \\
$-$ &\sep001\sep001\sep001\sep001 \\
&\seq001\seq101\sez111\seq010 \\
{\sc and} &\seq000\seq000\seq000\seq000 \\
&\seq000\seq000\sez000\seq000 \\
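The steps above can be simulated with Python big integers (a sketch of my own, not from the slides; the function name `find_field`, the field width parameter `b`, and the packing order are assumptions). XOR zeroes exactly the matching field, subtracting 1 from every field then borrows the separator bit away only there, and the final AND isolates the separators.

```python
def find_field(fields, target, b=3):
    # Pack the fields (each < 2**b, most significant first) into one
    # integer, with one separator bit above each b-bit field.
    n = len(fields)
    word = ones = tmask = low = 0
    for x in fields:
        word = (word << (b + 1)) | x          # separator bit = 0
        ones = (ones << (b + 1)) | (1 << b)   # separator bits = 1
        tmask = (tmask << (b + 1)) | target   # target in every field
        low = (low << (b + 1)) | 1            # 1 in every field
    # Set separators, XOR with the target, subtract 1 per field, mask:
    # a field equal to the target loses its separator bit to the borrow.
    z = (((word | ones) ^ tmask) - low) & ones
    for i in range(n):                        # scan separators from the left
        if not z & (1 << ((n - 1 - i) * (b + 1) + b)):
            return i
    return -1                                 # no field matches
```

Running `find_field([1, 5, 3, 0], 3)` reproduces the worked example above, locating the 3 in the third field (index 2).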
\begin{frame}{RAM Data Structures}
We can translate vector operations to $\O(1)$ RAM instructions
\dots\ as long as the vector fits in $\O(1)$ words.
We can build ``small'' data structures operating in $\O(1)$ time:
\item Ordered sets with ranking
\item ``Small'' heaps of ``large'' integers \[Fredman \& Willard 1990]
\begin{frame}{Minimum Spanning Trees}
Algorithms for Minimum Spanning Trees:
\item Classical algorithms \[Bor\r{u}vka, Jarník-Prim, Kruskal]
\item Contractive: $\O(m\log n)$ using flattening on the PM \\
\item Iterated: $\O(m\,\beta(m,n))$ \[Fredman \& Tarjan~1987] \\
where $\beta(m,n) = \min\{ k: \log_2^{(k)} n \le m/n \}$
\item Even better: $\O(m\,\alpha(m,n))$ using {\it soft heaps}\\
\[Chazelle 1998, Pettie 1999]
\item MST verification: $\O(m)$ on RAM \[King 1997, M. 2008]
\item Randomized: $\O(m)$ w.h.p. \[Karger et al.~1995]
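To make the classical baseline concrete, a minimal sketch of Kruskal's algorithm with a union-find structure (my own Python illustration, not from the slides; $\O(m\log m)$ time dominated by sorting):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v) with vertices 0..n-1.
    # Scan edges by increasing weight; keep each edge that joins
    # two different components of the forest built so far.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                # union the two components
            tree.append((w, u, v))
    return tree
```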
\begin{frame}{MST -- Special cases}
We have $\O(m)$ time algorithms for:
\item Planar graphs \[Tarjan 1976, Matsui 1995, M. 2004] (PM)
\item Minor-closed classes \[Tarjan 1983, M. 2004] (PM)
\item $\O(1)$ different weights \[folklore] (PM)
\item Integer weights \[Fredman \& Willard 1990] (RAM)
\begin{frame}{MST -- Optimality}
There is a~provably optimal comparison-based algorithm \\\[Pettie \& Ramachandran 2002]
However, there is a~catch: \pause Nobody knows its complexity.
We know that it is $\O({\cal T}(m,n))$ where ${\cal T}(m,n)$ is the depth
of the optimum MST decision tree.
It runs on the PM, so we know that if there is a~linear-time algorithm,
it does not need any special RAM data structures. (They can, however, still help.)
\begin{frame}{MST -- Dynamic algorithms}
Sometimes, we need to find the MST of a~changing graph. \\
We insert/delete edges, and the structure responds with $\O(1)$
modifications of the MST.
\item Unweighted cases are similar to dynamic connectivity:
\item Incremental: $\O(\alpha(n))$ \[Tarjan 1975]
\item Fully dynamic: $\O(\log^2 n)$ \[Holm et al.~2001]
\item Weighted cases are harder:
\item Decremental: $\O(\log^2 n)$ \[Holm et al.~2001]
\item Fully dynamic: $\O(\log^4 n)$ \[Holm et al.~2001]
\item Only~$W$ weights: $\O(W\log^2 n)$ \[M. 2008]
\item $K$ best spanning trees:
\item Simple: $\O(T_{MST} + Km)$ \[Katoh et al.~1981, M.~2008]
\item Reduced: $\O(T_{MST} + \min(K^2, Km + K\log K))$ \[Eppstein~1992]
\item Large~$K$: $\O(T_{MST} + \min(K^{3/2}, Km^{1/2}))$ \[Frederickson 1997]
\begin{frame}{Back to Ranking}
Ranking of permutations on the RAM: \[M. \& Straka 2007]
\item Traditional algorithms represent a~subset of $\{1,\ldots,n\}$
\item The result can be $n!$ $\Rightarrow$ word size $=\Omega(n\log n)$ bits
\item We can represent the subsets as RAM vectors
\item This gives us an~$\O(n)$ time algorithm for (un)ranking
Easily extendable to $k$-permutations, also in $\O(n)$
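A hedged Python sketch of the subset idea (my own illustration; Python's big-integer popcount is not $\O(1)$, so this conveys only the vector technique, not the time bound): keep the unused elements as a bit vector, so ``how many unused elements are smaller than $x$'' is a masked popcount and deletion is a single bit clear.

```python
def rank_perm_vector(p):
    # Lexicographic rank of a permutation p of {1, ..., n}.
    # Bit i-1 of `unused` is set iff element i has not been placed yet.
    n = len(p)
    fact = [1] * (n + 1)
    for i in range(2, n + 1):
        fact[i] = fact[i - 1] * i
    unused = (1 << n) - 1
    r = 0
    for i, x in enumerate(p):
        below = unused & ((1 << (x - 1)) - 1)   # unused elements < x
        r += bin(below).count("1") * fact[n - 1 - i]
        unused &= ~(1 << (x - 1))               # delete x from the set
    return r
```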
\begin{frame}{Restricted permutations}
For restricted permutations (e.g., derangements): \[M. 2008]
\item Existence of a~permutation reduces to network flows
\item The ranking problem can be used to calculate permanents,\\
so it is $\#{\rm P}$-complete
\item However, this is the only obstacle: calculating $\O(n)$
sub-permanents is sufficient.
\item For derangements, we have achieved $\O(n)$ time after $\O(n^2)$
preprocessing
\begin{frame}{My contributions}
\item The lower bound for the Contractive Bor\r{u}vka's algorithm.
\item A~simpler linear-time tree isomorphism algorithm.
\item A~linear-time algorithm for MST on minor-closed classes.
\item Corrected and simplified MST verification.
\item Ranking and unranking algorithms.