\setbeamertemplate{navigation symbols}{}
\setbeamerfont{title page}{family=\rmfamily}
\def\[#1]{\hskip0.3em{\color{violet} [#1]}}
\def\O{{\cal O}}
\begin{frame}
\end{itemize}
~
\pause
Many variants exist; we will use the {\bf Word-RAM}:
\end{itemize}
~
\pause
\begin{beamerboxesrounded}[upper=block title example,shadow=true]{Key differences}
\begin{itemize}
\item PM has no arrays; we can emulate them in $\O(\log n)$ time.
\item PM has no arithmetic.
\end{itemize}
\end{beamerboxesrounded}
~
\begin{example}[parallel search]
\def\sep{\;{\color{brown}0}}
\def\seq{\;{\color{brown}1}}
\def\sez{\;{\color{red}0}}
And then search for 3 by:
\begin{center}
\begin{tabular}{rcl}
 &\seq001\seq101\seq011\seq000 & $(1,5,3,0)$ \\
{\sc xor} &\sep011\sep011\sep011\sep011 & $(3,3,3,3)$ \\
\hline
&\seq010\seq110\seq000\seq011 \\
$-$ &\sep001\sep001\sep001\sep001 & $(1,1,1,1)$ \\
\hline
&\seq001\seq101\sez111\seq010 \\
{\sc and} &\seq000\seq000\seq000\seq000 \\
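As a sanity check (my addition, not part of the slides), the same word-parallel search can be sketched in Python; the field width, packing order, and helper names (`pack`, `matches`) are mine.

```python
# Word-parallel search: find which fields of a packed word equal a key.
# Layout per field, as on the slide: one separator bit, then B value bits.

B = 3                 # value bits per field
STEP = B + 1          # bits per field including the separator
FIELDS = 4

ones = sum(1 << (i * STEP) for i in range(FIELDS))   # 0001 in every field
seps = ones << B                                     # 1000 in every field

def pack(values):
    """Pack small integers into one word, separator bit set in each field."""
    word = 0
    for i, v in enumerate(values):
        word |= ((1 << B) | v) << (i * STEP)
    return word

def matches(word, key):
    """Indices of fields equal to key: XOR zeroes the matching fields,
    subtracting 1 then borrows from exactly those separators."""
    x = word ^ (ones * key)        # replicate key, fields equal to it become 0
    y = (x - ones) & seps          # separator survives iff the field was nonzero
    return [i for i in range(FIELDS) if not (y >> (i * STEP + B)) & 1]

print(matches(pack([1, 5, 3, 0]), 3))   # the slide's example: finds the field holding 3
```

The three lines of `matches` correspond exactly to the {\sc xor}, $-$, and {\sc and} rows of the table above.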
\begin{itemize}
\item Classical algorithms \[Borůvka, Jarník--Prim, Kruskal]
\item Contractive: $\O(m\log n)$ using flattening on the PM \\
      (lower bound \[M.])
\item Iterated: $\O(m\,\beta(m,n))$ \[Fredman \& Tarjan~1987] \\
where $\beta(m,n) = \min\{ k: \log_2^{(k)} n \le m/n \}$
\item Even better: $\O(m\,\alpha(m,n))$ using {\it soft heaps}\hfil\break\[Chazelle 1998, Pettie 1999]
\item MST verification: $\O(m)$ on RAM \[King 1997, M. 2008]
\item Randomized: $\O(m)$ expected on RAM \[Karger et al.~1995]
\end{itemize}
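For intuition (my addition, using only standard consequences of the definition of $\beta$): the Fredman--Tarjan bound interpolates between the sparse and dense regimes,

```latex
\beta(n,n) = \log^\ast n \pm \O(1),
\qquad
\beta(m,n) = 1 \quad\hbox{whenever } m \ge n\log_2 n,
```

so $\O(m\,\beta(m,n))$ means $\O(m\log^\ast n)$ on the sparsest graphs and $\O(m)$ once $m \ge n\log n$.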
\end{frame}
\begin{frame}{MST -- Special cases}
Cases for which we have an $\O(m)$ algorithm:

~

Special graph structure:
\begin{itemize}
\item Planar graphs \[Tarjan 1976, Matsui 1995, M. 2004] (PM)
\item Minor-closed classes \[Tarjan 1983, M. 2004] (PM)
\item Dense graphs (by many of the general PM algorithms)
\end{itemize}

~
\pause

Or we can assume more about weights:

\begin{itemize}
\item $\O(1)$ different weights \[folklore] (PM)
\item Integer weights \[Fredman \& Willard 1990] (RAM)
\item Sorted weights (RAM)
\end{itemize}
\end{frame}
~
However, there is a catch\alt<2->{:}{ \dots} \pause Nobody knows its complexity.
~
We know that it is $\O({\cal T}(m,n))$ where ${\cal T}(m,n)$ is the depth
of the optimum MST decision tree. Any other algorithm provides an upper bound.
\pause
~
modifications of the MST.
\begin{itemize}
\item Unweighted cases, similar to dynamic connectivity:
\begin{itemize}
\item Incremental: $\O(\alpha(n))$ \[Tarjan 1975]
\item Fully dynamic: $\O(\log^2 n)$ \[Holm et al.~2001]
\end{itemize}
\pause
\item Weighted cases are harder:
\begin{itemize}
\item Decremental: $\O(\log^2 n)$ \[Holm et al.~2001]
\item Fully dynamic: $\O(\log^4 n)$ \[Holm et al.~2001]
  \item Only~$C$ weights: $\O(C\log^2 n)$ \[M. 2008]
\end{itemize}
\pause
\item $K$ smallest spanning trees:
\begin{itemize}
\item Simple: $\O(T_{MST} + Km)$ \[Katoh et al.~1981, M.~2008]
  \item Small~$K$: $\O(T_{MST} + \min(K^2, Km + K\log K))$ \[Eppst.~1992]
  \item Faster: $\O(T_{MST} + \min(K^{3/2}, Km^{1/2}))$ \[Frederickson 1997]
\end{itemize}
\end{itemize}
Ranking of permutations on the RAM: \[M. \& Straka 2007]
\begin{itemize}
\item We need a data structure for subsets of $\{1,\ldots,n\}$ with ranking
\item The result can be $n!$ $\Rightarrow$ word size is $\Omega(n\log n)$ bits
\item We can represent the subsets as RAM vectors
\item This gives us an~$\O(n)$ time algorithm for (un)ranking
\end{itemize}
For restricted permutations (e.g., derangements): \[M. 2008]
\begin{itemize}
\item Describe restrictions by a~bipartite graph
\item Existence of permutation reduces to network flows
\item The ranking function can be used to calculate permanents,\\
so it is $\#\rm P$-complete
\item However, this is the only obstacle. Calculating $\O(n)$
sub-permanents is sufficient.
\end{itemize}
\end{frame}
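The existence test can be sketched in Python (my illustration; the function and parameter names are mine): restrictions form a bipartite graph between positions and values, and an admissible permutation exists iff the allowed pairs contain a perfect matching. Kuhn's augmenting-path search below replaces the slide's equivalent network-flow formulation; derangements serve as the sample restriction.

```python
def has_permutation(n, forbidden):
    """Is there a permutation p of 0..n-1 with (i, p[i]) never forbidden?

    Perfect matching in the bipartite graph of allowed (position, value)
    pairs, found by Kuhn's augmenting paths (O(n^3) in this naive form).
    """
    allowed = [[v for v in range(n) if (i, v) not in forbidden]
               for i in range(n)]
    match = [-1] * n                  # match[v] = position currently using value v

    def augment(i, seen):
        for v in allowed[i]:
            if v not in seen:
                seen.add(v)
                # value v is free, or its holder can be rematched elsewhere
                if match[v] < 0 or augment(match[v], seen):
                    match[v] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(n))

# Derangements: forbid every fixed point (i, i).
print(has_permutation(4, {(i, i) for i in range(4)}))   # derangements of 4 exist
```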
\begin{frame}{Summary}

\begin{itemize}
\item Low-level algorithmic techniques on RAM and PM
\item Generalized pointer-based sorting and RAM vectors
\item Applied to a~variety of problems:
  \begin{itemize}
  \item A~short linear-time tree isomorphism algorithm
  \item A~linear-time algorithm for MST on minor-closed classes
  \item Corrected and simplified MST verification
  \item Dynamic MST with small weights
  \item {\it Ranking and unranking of permutations}
  \end{itemize}
\item Also:
  \begin{itemize}
  \item A~lower bound for the Contractive Borůvka's algorithm
  \item Simplified soft heaps
  \end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Good Bye}

\bigskip

\centerline{\sc\huge The End}

\bigskip

\begin{figure}
\pgfdeclareimage[width=0.3\hsize]{brum}{brum2.png}
\pgfuseimage{brum}
\end{figure}

\end{frame}

\end{document}