\usepackage[utf8]{inputenc}
\title[Graph Algorithms]{Graph Algorithms\\Spanning Trees and Ranking}
\author[Martin Mareš]{Martin Mareš\\\texttt{mares@kam.mff.cuni.cz}}
\institute{Department of Applied Mathematics\\MFF UK Praha}
\setbeamertemplate{navigation symbols}{}
\setbeamerfont{title page}{family=\rmfamily}
\def\[#1]{\hskip0.3em{\color{violet} [#1]}}
\begin{frame}{The Minimum Spanning Tree Problem}
{\bf 1. Minimum Spanning Tree Problem:}
\item Given a~weighted undirected graph,\\
what is its lightest spanning tree?
\item In fact, a~linear order on edges is sufficient.
\item Efficient solutions are very old \[Borůvka 1926]
\item A~long progression of faster and faster algorithms.
\item Currently very close to linear time, but still not there.
\begin{frame}{The Ranking Problems}
{\bf 2. Ranking of Combinatorial Structures:}
\item We are given a~set~$C$ of objects with a~linear order $\prec$.
\item {\bf Ranking function $R_\prec(x)$:} how many objects precede~$x$?
\item {\bf Unranking function $R^{-1}_\prec(i)$:} what is the $i$-th object?
$C=\{0,1\}^n$ with lexicographic order
$R$ = conversion from binary\\
$R^{-1}$ = conversion to binary
\begin{example}[a~real one]
$C=$ set of all permutations on $\{1,\ldots,n\}$
How to compute the (un)ranking function efficiently?
For permutations, an $\O(n\log n)$ algorithm was known \[folklore].
We will show how to do that in $\O(n)$.
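To make the ranking functions concrete, here is a straightforward quadratic version of the folklore algorithm in Python (function names are mine, not the thesis's); the faster variants replace the inner "count smaller elements" step by a suitable data structure:

```python
from math import factorial

def rank(perm):
    """Lexicographic rank of a permutation of {1,...,n}.
    Straightforward O(n^2) version: read off the digits of the
    factorial number system one position at a time."""
    n = len(perm)
    r = 0
    for i, p in enumerate(perm):
        # count elements after position i that are smaller than perm[i]
        smaller = sum(1 for q in perm[i + 1:] if q < p)
        r += smaller * factorial(n - 1 - i)
    return r

def unrank(i, n):
    """Inverse: the i-th permutation of {1,...,n} in lexicographic order."""
    elems = list(range(1, n + 1))
    perm = []
    for k in range(n - 1, -1, -1):
        d, i = divmod(i, factorial(k))   # d-th smallest unused element
        perm.append(elems.pop(d))
    return perm
```

For example, `rank([3, 2, 1])` is `5`, the last of the $3! = 6$ permutations.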
\begin{frame}{Models of computation: RAM}
As we approach linear time, we must specify the model.
{\bf 1. The Random Access Machine (RAM):}
\item Works with integers
\item Memory: an array of integers indexed by integers
Many variants exist; we will use the {\bf Word-RAM:}
\item Machine words of $W$ bits
\item The ``C operations'': arithmetic, bitwise logical op's
\item We know that $W\ge\log_2 \vert \hbox{input} \vert$
\begin{frame}{Models of computation: PM}
{\bf 2. The Pointer Machine (PM):}
\item Memory cells accessed via pointers
\item Each cell contains $\O(1)$ pointers and $\O(1)$ symbols
\item Operates only on pointers and symbols
\begin{beamerboxesrounded}[upper=block title example,shadow=true]{Key differences}
\item PM has no arrays; we can emulate them with $\O(\log n)$ slowdown.
\item PM has no arithmetic.
\end{beamerboxesrounded}
We can emulate PM on RAM with constant slowdown.
Emulation of RAM on PM is more expensive.
\begin{frame}{PM Techniques}
{\bf Bucket Sorting} does not need arrays.
Interesting consequences:
\item Flattening of multigraphs in $\O(m+n)$
\item Unification of sequences in $\O(n+\sum_i\ell_i+\vert\Sigma\vert)$
\item (Sub)tree isomorphism in $\O(n)$ simplified \[M. 2008]
\item Batched graph computations \[Buchsbaum et al.~1998]
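As an illustration of the first consequence, here is a minimal sketch of flattening a multigraph by two stable bucket-sorting passes (Python lists stand in for the linked-list buckets of the pointer machine; the function name is mine):

```python
def flatten(n, edges):
    """Remove parallel edges from an undirected multigraph on
    vertices 0..n-1 in O(m + n): two stable bucket-sort passes
    bring equal edges next to each other, then one scan keeps
    a single copy of each run."""
    # normalize each undirected edge so that u <= v
    norm = [(min(u, v), max(u, v)) for u, v in edges]
    # stable bucket sort by second, then by first coordinate
    for key in (1, 0):
        buckets = [[] for _ in range(n)]
        for e in norm:
            buckets[e[key]].append(e)
        norm = [e for b in buckets for e in b]
    # equal edges are now adjacent; keep one of each run
    return [e for i, e in enumerate(norm) if i == 0 or e != norm[i - 1]]
```

On the pointer machine the buckets are linked lists hung off the vertices, so no array indexing is needed.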
\begin{frame}{RAM Techniques}
We can use RAM as a vector machine:
\begin{example}[parallel search]
\def\sep{\;{\color{brown}0}}
\def\seq{\;{\color{brown}1}}
\def\sez{\;{\color{red}0}}
We can encode the vector $(1,5,3,0)$ with 3-bit fields as:
\sep001\sep101\sep011\sep000
And then search for 3 by:
&\seq001\seq101\seq011\seq000 & $(1,5,3,0)$ \\
{\sc xor} &\sep011\sep011\sep011\sep011 & $(3,3,3,3)$ \\
&\seq010\seq110\seq000\seq011 \\
$-$ &\sep001\sep001\sep001\sep001 & $(1,1,1,1)$ \\
&\seq001\seq101\sez111\seq010 \\
{\sc and} &\seq000\seq000\seq000\seq000 \\
&\seq000\seq000\sez000\seq000 \\
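The same trick reads as follows in Python, where big integers play the role of machine words (names are mine). Each field carries a guard bit; XOR zeroes the matching fields, and subtracting $(1,\ldots,1)$ borrows from the guard bit exactly in those fields:

```python
def parallel_search(xs, b, key):
    """Search all b-bit fields of a vector for `key` using O(1)
    word operations (the XOR / subtract / AND trick).
    Returns a list of booleans, True where xs[i] == key."""
    f = b + 1                                    # field width incl. guard bit
    n = len(xs)
    L = sum(1 << (i * f) for i in range(n))      # (1, 1, ..., 1)
    H = L << b                                   # guard bit of every field
    w = H | sum(x << (i * f) for i, x in enumerate(xs))  # guards set to 1
    x = w ^ (key * L)            # matching fields become 0 (guard stays 1)
    y = (x - L) & H              # the borrow clears the guard iff field == 0
    return [((y >> (i * f + b)) & 1) == 0 for i in range(n)]
```

Running it on the slide's example, `parallel_search([1, 5, 3, 0], 3, 3)` flags exactly the third field.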
\begin{frame}{RAM Data Structures}
We can translate vector operations to $\O(1)$ RAM instructions
\dots\ as long as the vector fits in $\O(1)$ words.
We can build ``small'' data structures operating in $\O(1)$ time:
\item Ordered sets with ranking
\item ``Small'' heaps of ``large'' integers \[Fredman \& Willard 1990]
\begin{frame}{Minimum Spanning Trees}
Algorithms for Minimum Spanning Trees:
\item Classical algorithms \[Borůvka, Jarník--Prim, Kruskal]
\item Contractive: $\O(m\log n)$ using flattening on the PM \\
\item Iterated: $\O(m\,\beta(m,n))$ \[Fredman \& Tarjan~1987] \\
where $\beta(m,n) = \min\{ k: \log_2^{(k)} n \le m/n \}$
\item Even better: $\O(m\,\alpha(m,n))$ using {\it soft heaps}\hfil\break\[Chazelle 1998, Pettie 1999]
\item MST verification: $\O(m)$ on RAM \[King 1997, M. 2008]
\item Randomized: $\O(m)$ expected on RAM \[Karger et al.~1995]
\begin{frame}{MST -- Special cases}
Cases for which we have an $\O(m)$ algorithm:
Special graph structure:
\item Planar graphs \[Tarjan 1976, Matsui 1995, M. 2004] (PM)
\item Minor-closed classes \[Tarjan 1983, M. 2004] (PM)
\item Dense graphs (by many of the general PM algorithms)
Or we can assume more about weights:
\item $\O(1)$ different weights \[folklore] (PM)
\item Integer weights \[Fredman \& Willard 1990] (RAM)
\item Sorted weights (RAM)
\begin{frame}{MST -- Optimality}
There is a~provably optimal comparison-based algorithm \\\[Pettie \& Ramachandran 2002]
However, there is a catch\alt<2->{:}{ \dots} \pause Nobody knows its complexity.
We know that it is $\O({\cal T}(m,n))$ where ${\cal T}(m,n)$ is the depth
of the optimum MST decision tree. Any other algorithm provides an upper bound.
It runs on the PM, so we know that if there is a~linear-time algorithm,
it does not need any special RAM data structures. (They can however
\begin{frame}{MST -- Dynamic algorithms}
Sometimes, we need to find the MST of a~changing graph. \\
We insert/delete edges; the structure responds with $\O(1)$
modifications of the MST.
\item Unweighted cases, similar to dynamic connectivity:
\item Incremental: $\O(\alpha(n))$ \[Tarjan 1975]
\item Fully dynamic: $\O(\log^2 n)$ \[Holm et al.~2001]
\item Weighted cases are harder:
\item Decremental: $\O(\log^2 n)$ \[Holm et al.~2001]
\item Fully dynamic: $\O(\log^4 n)$ \[Holm et al.~2001]
\item Only~$C$ weights: $\O(C\log^2 n)$ \[M. 2008]
\item $K$ smallest spanning trees:
\item Simple: $\O(T_{MST} + Km)$ \[Katoh et al.~1981, M.~2008]
\item Small~$K$: $\O(T_{MST} + \min(K^2, Km + K\log K))$ \[Eppst.~1992]
\item Faster: $\O(T_{MST} + \min(K^{3/2}, Km^{1/2}))$ \[Frederickson 1997]
\begin{frame}{Back to Ranking}
Ranking of permutations on the RAM: \[M. \& Straka 2007]
\item We need a DS for the subsets of $\{1,\ldots,n\}$ with ranking
\item The result can be as large as $n!$ $\Rightarrow$ word size is $\Omega(n\log n)$ bits
\item We can represent the subsets as RAM vectors
\item This gives us an~$\O(n)$ time algorithm for (un)ranking
Easily extendable to $k$-permutations, also in $\O(n)$
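In miniature, the scheme looks as follows (a sketch, not the thesis's implementation; Python big integers stand in for the $\Omega(n\log n)$-bit machine words, on which rank and delete become $\O(1)$ vector operations):

```python
from math import factorial

class RankedSet:
    """Subset of {1,...,n} kept as a bit vector.  With word size
    Omega(n log n) bits, the mask-and-popcount below is a constant
    number of word operations."""
    def __init__(self, n):
        self.bits = (1 << n) - 1              # start with the full set
    def rank(self, x):
        # number of present elements smaller than x
        return bin(self.bits & ((1 << (x - 1)) - 1)).count("1")
    def delete(self, x):
        self.bits &= ~(1 << (x - 1))

def rank_perm(perm):
    """Lexicographic rank of a permutation of {1,...,n}: read off
    the factorial-number-system digits with the ranked set."""
    n = len(perm)
    s, r = RankedSet(n), 0
    for i, p in enumerate(perm):
        r += s.rank(p) * factorial(n - 1 - i)
        s.delete(p)
    return r
```

With $\O(1)$-time rank and delete, the loop runs in $\O(n)$ total.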
\begin{frame}{Restricted permutations}
For restricted permutations (e.g., derangements): \[M. 2008]
\item Describe restrictions by a~bipartite graph
\item Existence of a~permutation reduces to network flows
\item The ranking function can be used to calculate permanents,\\
so it is $\#\rm P$-complete
\item However, this is the only obstacle: calculating $\O(n)$
sub-permanents is sufficient.
\item For derangements, we have achieved $\O(n)$ time after $\O(n^2)$ preprocessing
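To see the connection to permanents: encode the restriction as a 0/1 matrix $M$ with $M_{ij}=1$ iff value $j$ may appear at position $i$; the number of allowed permutations is exactly $\mathop{\rm per} M$. A toy exponential-time evaluator, for illustration only (not the thesis's method):

```python
def permanent(m):
    """Permanent of a 0/1 matrix by expansion along the first row.
    Exponential time -- illustration only.  per(M) counts the
    permutations allowed by the restriction matrix M."""
    if not m:
        return 1
    return sum(permanent([row[:j] + row[j + 1:] for row in m[1:]])
               for j, x in enumerate(m[0]) if x)
```

For derangements the matrix is all-ones with a zero diagonal; for $n=3$ the permanent is 2, the number of derangements of three elements.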
\begin{frame}{Summary}
\item Low-level algorithmic techniques on RAM and PM
\item Generalized pointer-based sorting and RAM vectors
\item Applied to a~variety of problems:
\item A~short linear-time tree isomorphism algorithm
\item A~linear-time algorithm for MST on minor-closed classes
\item Corrected and simplified MST verification
\item Dynamic MST with small weights
\item {\it Ranking and unranking of permutations}
\item A~lower bound for the Contractive Borůvka's algorithm
\item Simplified soft heaps
\begin{frame}{Good Bye}
\centerline{\sc\huge The End}
\pgfdeclareimage[width=0.3\hsize]{brum}{brum2.png}