From 5712fe2c0737d308cca007bae72c2a6bca767b34 Mon Sep 17 00:00:00 2001 From: Martin Mares Date: Wed, 30 Jan 2008 15:22:33 +0100 Subject: [PATCH] Reorganization: added Advanced MST Algorithms chapter. --- Makefile | 2 +- PLAN | 72 ++++++--- adv.tex | 429 ++++++++++++++++++++++++++++++++++++++++++++++++++ macros.tex | 1 + mst.tex | 450 +---------------------------------------------------- ram.tex | 2 +- saga.tex | 1 + 7 files changed, 484 insertions(+), 473 deletions(-) create mode 100644 adv.tex diff --git a/Makefile b/Makefile index 6b931d1..10c98f0 100644 --- a/Makefile +++ b/Makefile @@ -1,6 +1,6 @@ all: saga.ps -CHAPTERS=cover mst notation ram +CHAPTERS=cover mst notation ram adv %.dvi: %.tex macros.tex biblio.bib tex $< && if grep -q citation $*.aux ; then bibtex $* && tex $< && tex $< ; fi diff --git a/PLAN b/PLAN index 6ef6e4f..065eec9 100644 --- a/PLAN +++ b/PLAN @@ -1,36 +1,60 @@ -* Minimum spanning trees +* Minimum Spanning Trees + o The Problem o Basic properties o Red/Blue meta-algorithm o Classical algorithms - o Fredman-Tarjan algorithm - o ?? Chazelle ?? - o ?? Pettie ?? - o Minor-closed classes - o MST verification - o Randomized algorithms + o Contractive algorithms + +* Fine Details of Computation -* Integer data structures + o Models and machines + . Bit tricks + . Ranking sets + . Bitwise B-trees + . Q-Heaps - o Models of computation - o Bit tricks - o Ranking sets - o Bitwise B-trees - o Q-Heaps +* Advanced MST Algorithms + + o Minor-closed classes + o Fredman-Tarjan algorithm + . ?? Chazelle ?? + . ?? Pettie ?? + . MST verification + . Randomized algorithms * Ranking combinatorial objects - o Ranking of permutations: history - o Linear-time algorithm - o k-permutations - o Permutations with no fixed point - o ?? other objects ?? + . Ranking of permutations: history + . Linear-time algorithm + . k-permutations + . Permutations with no fixed point + . ?? other objects ?? * Dynamic MST algorithms - o (Semi-)dynamic algorithms - o Sleator-Tarjan trees - o ET-trees - o Fully dynamic connectivity - o Semi-dynamic MST - o Fully dynamic MST + . (Semi-)dynamic algorithms + . Sleator-Tarjan trees + . ET-trees + . Fully dynamic connectivity + . Semi-dynamic MST + . Fully dynamic MST + +TODO: + +- cite Eisner's tutorial \cite{eisner:tutorial} +- \cite{pettie:onlineverify} online lower bound +- mention Steiner trees +- mention matroids +- sorted weights +- mention disconnected graphs +- Euclidean MST +- Some algorithms (most notably Fredman-Tarjan) do not need flattening + +Notation: + +- \O(...) as a set? +- G has to be connected, so m=O(n) +- impedance mismatch in terminology: contraction of G along e vs. contraction of e. +- use \delta(X) notation +- unify use of n(G) vs. n diff --git a/adv.tex b/adv.tex new file mode 100644 index 0000000..dfee85d --- /dev/null +++ b/adv.tex @@ -0,0 +1,429 @@ +\ifx\endpart\undefined +\input macros.tex +\fi + +\chapter{Advanced MST Algorithms} + +\section{Minor-closed graph classes} + +The contractive algorithm given in section~\ref{contalg} has been found to perform +well on planar graphs, but in the general case its time complexity was not linear. +Can we find any broader class of graphs where the algorithm is still efficient? +The right context turns out to be the minor-closed graph classes, which are +closed under contractions and have bounded density. + +\defn +A~graph~$H$ is a \df{minor} of a~graph~$G$ iff it can be obtained +from a subgraph of~$G$ by a sequence of simple graph contractions (see \ref{simpcont}). 
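+
+\para
+To make the preceding definition concrete, the following sketch (an editorial
+illustration, not part of the cited results) shows one simple contraction followed
+by the flattening of the affected edges, written in Python and kept in a~source
+comment; the representation and all identifiers are chosen ad hoc for the example.
+
+% Illustrative sketch only (assumed representation: dict of dicts with edge weights).
+% def contract_and_flatten(graph, u, v):
+%     """Contract the edge (u, v): merge v into u, drop the loop that appears and
+%     keep only the lighter copy of any parallel edges created by the merge.
+%     `graph` maps each vertex to a dict {neighbour: weight}."""
+%     for w, wt in list(graph[v].items()):
+%         if w == u:
+%             continue                      # the contracted edge becomes a loop; drop it
+%         if w not in graph[u] or wt < graph[u][w]:
+%             graph[u][w] = wt              # re-attach vw as uw, keeping the lighter copy
+%             graph[w][u] = wt
+%         del graph[w][v]                   # the neighbour no longer sees v
+%     del graph[u][v]
+%     del graph[v]                          # v disappears; the result is a minor of the input
+%     return graph
+%
+% # e.g. contracting one edge of a weighted triangle leaves two vertices joined
+% # by the lighter of the two parallel edges that the merge creates.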
+ +\defn +A~class~$\cal C$ of graphs is \df{minor-closed}, when for every $G\in\cal C$ and +its every minor~$H$, the graph~$H$ lies in~$\cal C$ as well. A~class~$\cal C$ is called +\df{non-trivial} if at least one graph lies in~$\cal C$ and at least one lies outside~$\cal C$. + +\example +Non-trivial minor-closed classes include planar graphs and more generally graphs +embeddable in any fixed surface. Many nice properties of planar graphs extend +to these classes, too, most notably the linearity of the number of edges. + +\defn\id{density}% +Let $\cal C$ be a class of graphs. We define its \df{edge density} $\varrho(\cal C)$ +to be the infimum of all~$\varrho$'s such that $m(G) \le \varrho\cdot n(G)$ +holds for every $G\in\cal C$. + +\thmn{Density of minor-closed classes} +A~minor-closed class of graphs has finite edge density if and only if it is +a non-trivial class. + +\proof +See Theorem 6.1 in \cite{nesetril:minors}, which also lists some other equivalent conditions. +\qed + +\thmn{MST on minor-closed classes \cite{mm:mst}}\id{mstmcc}% +For any fixed non-trivial minor-closed class~$\cal C$ of graphs, Algorithm \ref{contbor} finds +the MST of any graph in this class in time $\O(n)$. (The constant hidden in the~$\O$ +depends on the class.) + +\proof +Following the proof for planar graphs (\ref{planarbor}), we denote the graph considered +by the algorithm at the beginning of the $i$-th iteration by~$G_i$ and its number of vertices +and edges by $n_i$ and $m_i$ respectively. Again the $i$-th phase runs in time $\O(m_i)$ +and $n_i \le n/2^i$, so it remains to show a linear bound for the $m_i$'s. + +Since each $G_i$ is produced from~$G_{i-1}$ by a sequence of edge contractions, +all $G_i$'s are minors of~$G$.\foot{Technically, these are multigraph contractions, +but followed by flattening, so they are equivalent to contractions on simple graphs.} +So they also belong to~$\cal C$ and by the previous theorem $m_i\le \varrho({\cal C})\cdot n_i$. +\qed + +\rem\id{nobatch}% +The contractive algorithm uses ``batch processing'' to perform many contractions +in a single step. It is also possible to perform contractions one edge at a~time, +batching only the flattenings. A~contraction of an edge~$uv$ can be done +in time~$\O(\deg(u))$ by removing all edges incident with~$u$ and inserting them back +with $u$ replaced by~$v$. Therefore we need to find a lot of vertices with small +degrees. The following lemma shows that this is always the case in minor-closed +classes. + +\lemman{Low-degree vertices}\id{lowdeg}% +Let $\cal C$ be a graph class with density~$\varrho$ and $G\in\cal C$ a~graph +with $n$~vertices. Then at least $n/2$ vertices of~$G$ have degree at most~$4\varrho$. + +\proof +Assume the contrary: Let there be at least $n/2$ vertices with degree +greater than~$4\varrho$. Then $\sum_v \deg(v) > n/2 +\cdot 4\varrho = 2\varrho n$, which is in contradiction with the number +of edges being at most $\varrho n$. +\qed + +\rem +The proof can be also viewed +probabilistically: let $X$ be the degree of a vertex of~$G$ chosen uniformly at +random. Then ${\bb E}X \le 2\varrho$, hence by the Markov's inequality +${\rm Pr}[X > 4\varrho] < 1/2$, so for at least $n/2$ vertices~$v$ we have +$\deg(v)\le 4\varrho$. + +\algn{Local Bor\o{u}vka's Algorithm \cite{mm:mst}}% +\algo +\algin A~graph~$G$ with an edge comparison oracle and a~parameter~$t\in{\bb N}$. +\:$T\=\emptyset$. +\:$\ell(e)\=e$ for all edges~$e$. 
+\:While $n(G)>1$:
+\::While there exists a~vertex~$v$ such that $\deg(v)\le t$:
+\:::Select the lightest edge~$e$ incident with~$v$.
+\:::Contract~$G$ along~$e$.
+\:::$T\=T + \ell(e)$.
+\::Flatten $G$, removing parallel edges and loops.
+\algout Minimum spanning tree~$T$.
+\endalgo
+
+\thm
+When $\cal C$ is a minor-closed class of graphs with density~$\varrho$, the
+Local Bor\o{u}vka's Algorithm with the parameter~$t$ set to~$4\varrho$
+finds the MST of any graph from this class in time $\O(n)$. (The constant
+in the~$\O$ depends on~the class.)
+
+\proof
+Let us denote by $G_i$, $n_i$ and $m_i$ the graph considered by the
+algorithm at the beginning of the $i$-th iteration of the outer loop,
+and the number of its vertices and edges respectively. As in the proof
+of the previous algorithm (\ref{mstmcc}), we observe that all the $G_i$'s
+are minors of the graph~$G$ given as the input.
+
+For the choice $t=4\varrho$, the Lemma on low-degree vertices (\ref{lowdeg})
+guarantees that at the beginning of the $i$-th iteration, at least $n_i/2$ vertices
+of~$G_i$ have degree at most~$t$. Contractions do not change the degrees of the
+vertices which do not participate in them, so each of these vertices takes part
+in some contraction before the inner loop ends. Hence at least a~half of the
+vertices of~$G_i$ participate in contractions and, as every contraction removes
+one vertex and involves two of them, $n_{i+1}\le 3/4\cdot n_i$. Therefore
+$n_i\le n\cdot (3/4)^i$ and the algorithm terminates after $\O(\log n)$ iterations.
+
+Each selected edge belongs to $\mst(G)$, because it is the lightest edge of
+the trivial cut $\delta(v)$ (see the Blue Rule in \ref{rbma}).
+The steps 6 and~7 therefore correspond to the operation
+described by the Lemma on contraction of MST edges (\ref{contlemma}) and when
+the algorithm stops, $T$~is indeed the minimum spanning tree.
+
+It remains to analyse the time complexity of the algorithm. Since $G_i\in{\cal C}$, we have
+$m_i\le \varrho n_i \le \varrho n\cdot (3/4)^i$.
+We will show that the $i$-th iteration is carried out in time $\O(m_i)$.
+Steps 5 and~6 run in time $\O(\deg(v))=\O(t)$ for each~$v$, so summed
+over all $v$'s they take $\O(tn_i)$, which is linear for a fixed class~$\cal C$.
+Flattening takes $\O(m_i)$, as already noted in the analysis of the Contracting
+Bor\o{u}vka's Algorithm (see \ref{contiter}).
+
+The whole algorithm therefore runs in time $\O(\sum_i m_i) = \O(\sum_i n\cdot (3/4)^i) = \O(n)$.
+\qed
+
+\rem
+For planar graphs, we can get a sharper version of the low-degree lemma,
+showing that the algorithm works with $t=8$ as well (we had $t=12$ as
+$\varrho=3$). While this does not change the asymptotic time complexity
+of the algorithm, the constant-factor speedup can still delight the hearts of
+its practical users.
+
+\lemman{Low-degree vertices in planar graphs}%
+Let $G$ be a planar graph with $n$~vertices. Then at least $n/2$ vertices of~$G$
+have degree at most~8.
+
+\proof
+It suffices to show that the lemma holds for triangulations (if there
+are any edges missing, the situation can only get better) with at
+least 3 vertices. Since $G$ is planar, $\sum_v \deg(v) < 6n$.
+The numbers $d(v):=\deg(v)-3$ are non-negative and $\sum_v d(v) < 3n$,
+so by the same argument as in the proof of the general lemma, for at least $n/2$
+vertices~$v$ it holds that $d(v) < 6$, hence $\deg(v) \le 8$.
+\qed
+
+\rem\id{hexa}%
+The constant~8 in the previous lemma is the best we can have.
+Consider a $k\times k$ triangular grid. It has $n=k^2$ vertices, $\O(k)$ of them
+lie on the outer face and have degrees at most~6, the remaining $n-\O(k)$ interior
+vertices have degree exactly~6. Therefore the number of faces~$f$ is $6/3\cdot n=2n$,
+ignoring terms of order $\O(k)$. All interior triangles can be properly colored with
+two colors, black and white. Now add a~new vertex inside each white face and connect
+it to all three vertices on the boundary of that face. This adds $f/2 \approx n$
+vertices of degree~3 and it increases the degrees of the original $\approx n$ interior
+vertices to~9, therefore about a half of the vertices of the new planar graph
+have degree~9.
+
+\figure{hexangle.eps}{\epsfxsize}{The construction from Remark~\ref{hexa}}
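+
+\para
+Before moving on, here is an~editorial sketch (not taken from \cite{mm:mst}) of the
+Local Bor\o{u}vka's Algorithm above, written in Python inside a~source comment. For
+brevity it flattens immediately during every contraction instead of batching the
+flattenings, which does not affect correctness; it assumes a~connected input graph
+from a~class of density~$\varrho$ with $t\ge4\varrho$, so that a~vertex of small
+degree always exists. All identifiers are invented for the example.
+
+% Illustrative sketch only; edges are stored as (weight, original_edge) pairs.
+% def contract(graph, u, v):
+%     # Merge v into u, drop the loop and keep only the lighter parallel edges.
+%     for w, e in list(graph[v].items()):
+%         if w == u:
+%             continue
+%         if w not in graph[u] or e[0] < graph[u][w][0]:
+%             graph[u][w] = e
+%             graph[w][u] = e
+%         del graph[w][v]
+%     del graph[u][v]
+%     del graph[v]
+%
+% def local_boruvka(graph, t):
+%     tree = set()                          # original edges selected into the MST
+%     while len(graph) > 1:
+%         for v in [x for x in graph if len(graph[x]) <= t]:
+%             if len(graph) == 1:
+%                 break
+%             if v not in graph:
+%                 continue                  # v was already merged into another vertex
+%             u, e = min(graph[v].items(), key=lambda it: it[1][0])
+%             tree.add(e[1])                # lightest edge at v goes to the MST (Blue rule)
+%             contract(graph, u, v)
+%     return tree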
+
+%--------------------------------------------------------------------------------
+
+\section{Using Fibonacci heaps}
+\id{fibonacci}
+
+We have seen that the Jarn\'\i{}k's Algorithm \ref{jarnik} runs in $\O(m\log n)$ time
+(and this bound can be easily shown to be tight). Fredman and Tarjan have shown a~faster
+implementation in~\cite{ft:fibonacci} using their Fibonacci heaps. In this section,
+we convey their results and we show several interesting consequences.
+
+The previous implementation of the algorithm used a binary heap to store all neighboring
+edges of the cut~$\delta(T)$. Instead of that, we will remember the vertices adjacent
+to~$T$ and for each such vertex~$v$ we will keep the lightest edge~$uv$ such that $u$~lies
+in~$T$. We will call these edges \df{active edges} and keep them in a~heap, ordered by weight.
+
+When we want to extend~$T$ by the lightest edge of~$\delta(T)$, it is sufficient to
+find the lightest active edge~$uv$ and add this edge to~$T$ together with a new vertex~$v$.
+Then we have to update the active edges as follows. The edge~$uv$ has just ceased to
+be active. We scan all neighbors~$w$ of the vertex~$v$. When $w$~is in~$T$, no action
+is needed. If $w$~is outside~$T$ and it was not adjacent to~$T$ (there is no active edge
+remembered for it so far), we set the edge~$vw$ as active. Otherwise we check the existing
+active edge for~$w$ and replace it by~$vw$ if the new edge is lighter.
+
+The following algorithm shows how these operations translate to insertions, decreases
+and deletions on the heap.
+
+\algn{Jarn\'\i{}k with active edges; Fredman and Tarjan \cite{ft:fibonacci}}\id{jarniktwo}%
+\algo
+\algin A~graph~$G$ with an edge comparison oracle.
+\:$v_0\=$ an~arbitrary vertex of~$G$.
+\:$T\=$ a tree containing just the vertex~$v_0$.
+\:$H\=$ a~heap of active edges stored as pairs $(u,v)$ where $u\in T,v\not\in T$, ordered by the weights $w(uv)$, initially empty.
+\:$A\=$ an~auxiliary array mapping vertices outside~$T$ to their active edges in the heap; initially all elements undefined.
+\:\<Insert> all edges incident with~$v_0$ to~$H$ and update~$A$ accordingly.
+\:While $H$ is not empty:
+\::$(u,v)\=\<DeleteMin>(H)$.
+\::$T\=T+uv$.
+\::For all edges $vw$ such that $w\not\in T$:
+\:::If there exists an~active edge~$A(w)$:
+\::::If $vw$ is lighter than~$A(w)$, \<Decrease> $A(w)$ to~$(v,w)$ in~$H$.
+\:::If there is no such edge, then \<Insert> $(v,w)$ to~$H$ and set~$A(w)$.
+\algout Minimum spanning tree~$T$.
+\endalgo
+
+\thmn{Fibonacci heaps} The~Fibonacci heap performs the following operations
+with the indicated amortized time complexities:
+\itemize\ibull
+\:\<Insert> (insertion of a~new element) in $\O(1)$,
+\:\<Decrease> (decreasing value of an~existing element) in $\O(1)$,
+\:\<Merge> (merging of two heaps into one) in $\O(1)$,
+\:\<DeleteMin> (deletion of the minimal element) in $\O(\log n)$,
+\:\<Delete> (deletion of an~arbitrary element) in $\O(\log n)$,
+\endlist
+\>where $n$ is the maximum number of elements present in the heap at the time of
+the operation.
+
+\proof
+See Fredman and Tarjan \cite{ft:fibonacci} for both the description of the Fibonacci
+heap and the proof of this theorem.
+\qed
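+
+\para
+As an~editorial aside (not taken from \cite{ft:fibonacci}), the active-edge
+bookkeeping of Algorithm \ref{jarniktwo} can be prototyped with Python's {\tt heapq}
+module, which however offers no \<Decrease>; the sketch below (kept in a~source
+comment) therefore re-inserts a~cheaper pair and skips outdated entries when they are
+popped, so it only mimics, and does not achieve, the Fibonacci-heap bounds. All
+identifiers are chosen ad hoc for the example.
+
+% Illustrative sketch only (assumed input: dict {vertex: {neighbour: weight}}, connected).
+% import heapq
+%
+% def jarnik_active_edges(graph, v0):
+%     in_tree = {v0}
+%     active = {}                  # A: vertex outside T -> (weight, u) of its active edge
+%     heap = []                    # entries (weight, u, v); some may be outdated
+%     tree = set()
+%     for w, wt in graph[v0].items():          # Insert all edges incident with v0
+%         active[w] = (wt, v0)
+%         heapq.heappush(heap, (wt, v0, w))
+%     while heap:
+%         wt, u, v = heapq.heappop(heap)       # DeleteMin
+%         if v in in_tree or active.get(v) != (wt, u):
+%             continue                         # outdated entry, ignore it
+%         in_tree.add(v)
+%         del active[v]
+%         tree.add(frozenset((u, v)))
+%         for w, wt2 in graph[v].items():      # update the active edges around v
+%             if w not in in_tree and (w not in active or wt2 < active[w][0]):
+%                 active[w] = (wt2, v)         # Decrease emulated by re-inserting
+%                 heapq.heappush(heap, (wt2, v, w))
+%     return tree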
+
+\thm
+Algorithm~\ref{jarniktwo} with a~Fibonacci heap finds the MST of the input graph in time~$\O(m+n\log n)$.
+
+\proof
+The algorithm always stops, because every edge enters the heap~$H$ at most once.
+As it selects exactly the same edges as the original Jarn\'\i{}k's algorithm,
+it gives the correct answer.
+
+The time complexity is $\O(m)$ plus the cost of the heap operations. The algorithm
+performs at most one \<Insert> or \<Decrease> per edge and exactly one \<DeleteMin>
+per vertex and there are at most $n$ elements in the heap at any given time,
+so by the previous theorem the operations take $\O(m+n\log n)$ time in total.
+\qed
+
+\cor
+For graphs with edge density at least $\log n$, this algorithm runs in linear time.
+
+\rem
+We can consider using other kinds of heaps which have the property that inserts
+and decreases are faster than deletes. Of course, the Fibonacci heaps are asymptotically
+optimal (by the standard $\Omega(n\log n)$ lower bound on sorting by comparisons, see
+for example \cite{clrs}), so the other data structures can improve only
+multiplicative constants or offer an~easier implementation.
+
+A~nice example is a~\df{$d$-regular heap} --- a~variant of the usual binary heap
+in the form of a~complete $d$-regular tree. \<Insert>, \<Decrease> and other operations
+involving bubbling the values up spend $\O(1)$ time at a~single level, so they run
+in~$\O(\log_d n)$ time. \<Delete> and \<DeleteMin> require bubbling down, which incurs
+comparison with all~$d$ sons at every level, so they run in~$\O(d\log_d n)$.
+With this structure, the time complexity of the whole algorithm
+is $\O(nd\log_d n + m\log_d n)$, which suggests setting $d=m/n$, giving $\O(m\log_{m/n}n)$.
+This is still linear for graphs with density at~least~$n^{\varepsilon}$.
+
+Another possibility is to use the 2-3-heaps \cite{takaoka:twothree} or Trinomial
+heaps \cite{takaoka:trinomial}. Both have the same asymptotic complexity as Fibonacci
+heaps (the latter even in worst case, but it does not matter here) and their
+authors claim implementation advantages.
+
+\FIXME{Mention Thorup's Fibonacci-like heaps for integers?}
+
+\para
+As we already noted, the improved Jarn\'\i{}k's algorithm runs in linear time
+for sufficiently dense graphs. In some cases, it is useful to combine it with
+another MST algorithm, which identifies a~part of the MST edges and contracts
+the graph to increase its density. For example, we can perform several
+iterations of the Contractive Bor\o{u}vka's algorithm and find the rest of the
+MST by the above version of Jarn\'\i{}k's algorithm.
+
+\algn{Mixed Bor\o{u}vka-Jarn\'\i{}k}
+\algo
+\algin A~graph~$G$ with an edge comparison oracle.
+\:Run $\log\log n$ iterations of the Contractive Bor\o{u}vka's algorithm (\ref{contbor}),
+  getting a~MST~$T_1$.
+\:Run the Jarn\'\i{}k's algorithm with active edges (\ref{jarniktwo}) on the resulting
+  graph, getting a~MST~$T_2$.
+\:Combine $T_1$ and~$T_2$ to~$T$ as in the Contraction lemma (\ref{contlemma}).
+\algout Minimum spanning tree~$T$.
+\endalgo
+
+\thm
+The Mixed Bor\o{u}vka-Jarn\'\i{}k algorithm finds the MST of the input graph in time $\O(m\log\log n)$.
+
+\proof
+Correctness follows from the Contraction lemma and from the proofs of correctness of the respective algorithms.
+As~for the time complexity: the first step takes $\O(m\log\log n)$ time
+(by Lemma~\ref{contiter}) and it gradually contracts~$G$ to a~graph~$G'$ of size
+$m'\le m$ and $n'\le n/\log n$. The second step then runs in time $\O(m'+n'\log n') = \O(m)$
+and both trees can be combined in linear time, too.
+\qed
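+
+\para
+The first step of the Mixed algorithm can be prototyped as follows (an editorial
+sketch, not taken from the cited papers, kept in a~source comment). Instead of
+contracting the graph explicitly, it keeps a~union-find structure over the vertices,
+which selects the same edges; distinct edge weights are assumed and all identifiers
+are invented for the example. Running $\lceil\log\log n\rceil$ such phases and then
+switching to Algorithm \ref{jarniktwo} on the contracted graph gives the combination
+analysed above.
+
+% Illustrative sketch only (Python); edges are triples (weight, u, v), weights distinct.
+% def boruvka_phases(vertices, edges, phases):
+%     parent = {v: v for v in vertices}        # union-find; components = contracted vertices
+%     def find(v):
+%         while parent[v] != v:
+%             parent[v] = parent[parent[v]]    # path halving
+%             v = parent[v]
+%         return v
+%     tree = set()
+%     for _ in range(phases):
+%         lightest = {}                        # component -> its lightest outgoing edge
+%         for wt, u, v in edges:
+%             cu, cv = find(u), find(v)
+%             if cu == cv:
+%                 continue                     # already contracted, a loop
+%             for c in (cu, cv):
+%                 if c not in lightest or wt < lightest[c][0]:
+%                     lightest[c] = (wt, u, v)
+%         if not lightest:
+%             break                            # a single component remains
+%         for wt, u, v in set(lightest.values()):
+%             cu, cv = find(u), find(v)
+%             if cu != cv:
+%                 parent[cu] = cv              # contract: merge the two components
+%                 tree.add((wt, u, v))
+%     return tree, parent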
+
+\para
+Actually, there is a~much better choice of the algorithms to combine: use the
+improved Jarn\'\i{}k's algorithm multiple times, each time stopping after a~while.
+A~good choice of the stopping condition is to place a~limit on the size of the heap.
+Start with an~arbitrary vertex, grow the tree as usual and once the heap gets too large,
+conserve the current tree and start with a~different vertex and an~empty heap. When this
+process runs out of vertices, it has identified a~sub-forest of the MST, so we can
+contract the graph along the edges of~this forest and iterate.
+
+\algn{Iterated Jarn\'\i{}k; Fredman and Tarjan \cite{ft:fibonacci}}
+\algo
+\algin A~graph~$G$ with an edge comparison oracle.
+\:$T\=\emptyset$. \cmt{edges of the MST}
+\:$\ell(e)\=e$ for all edges~$e$. \cmt{edge labels as usual}
+\:$m_0\=m$.
+\:While $n>1$: \cmt{We will call iterations of this loop \df{phases}.}
+\::$F\=\emptyset$. \cmt{forest built in the current phase}
+\::$t\=2^{2m_0/n}$. \cmt{the limit on heap size}
+\::While there is a~vertex $v_0\not\in F$:
+\:::Run the improved Jarn\'\i{}k's algorithm (\ref{jarniktwo}) from~$v_0$, stop when:
+\::::all vertices have been processed, or
+\::::a~vertex of~$F$ has been added to the tree, or
+\::::the heap had more than~$t$ elements.
+\:::Denote the resulting tree~$R$.
+\:::$F\=F\cup R$.
+\::$T\=T\cup \ell[F]$. \cmt{Remember MST edges found in this phase.}
+\::Contract~$G$ along all edges of~$F$ and flatten it.
+\algout Minimum spanning tree~$T$.
+\endalgo
+
+\nota
+For the analysis of the algorithm, let us denote the graph entering the $i$-th
+phase by~$G_i$ and likewise with the other parameters. The trees from which
+$F_i$~has been constructed will be called $R_i^1, \ldots, R_i^{z_i}$. The
+non-indexed $G$, $m$ and~$n$ will correspond to the graph given as~input.
+
+\para
+However mysterious the choice of the parameter~$t$ may seem, the following
+lemma makes the reason clear:
+
+\lemma\id{ijphase}%
+The $i$-th phase of the Iterated Jarn\'\i{}k's algorithm runs in time~$\O(m)$.
+
+\proof
+During the phase, the heap always contains at most~$t_i$ elements, so it takes
+time~$\O(\log t_i)=\O(m/n_i)$ to delete an~element from the heap. The trees~$R_i^j$
+are disjoint, so there are at most~$n_i$ \<DeleteMin>'s over the course of the phase.
+Each edge is considered at most twice (once per its endpoint), so the number
+of the other heap operations is~$\O(m_i)$. Together, it equals $\O(m_i + n_i\log t_i) = \O(m_i+m) = \O(m)$.
+\qed
+
+\lemma
+Unless the $i$-th phase is final, the forest~$F_i$ consists of at most $2m_i/t_i$ trees.
+
+\proof
+As every edge of~$G_i$ is incident with at most two trees of~$F_i$, it is sufficient
+to establish that there are at least~$t_i$ edges incident with the vertices of every
+such tree~(*).
+
+The forest~$F_i$ evolves by additions of the trees~$R_i^j$. Let us consider the ways
+in which the algorithm could have stopped growing the tree~$R_i^j$:
+\itemize\ibull
+\:the heap had more than~$t_i$ elements (step~10): since the elements stored in the heap correspond
+  to some of the edges incident with vertices of~$R_i^j$, the condition~(*) is fulfilled;
+\:the algorithm just added a~vertex of~$F_i$ to~$R_i^j$ (step~9): in this case, an~existing
+  tree of~$F_i$ is extended, so the number of edges incident with it cannot decrease;\foot{%
+  To make this true, we counted the edges incident with the \em{vertices} of the tree
+  instead of edges incident with the tree itself, because we needed the tree edges
+  to be counted as well.}
+\:all vertices have been processed (step~8): this can happen only in the final phase.
+\qeditem
+\endlist
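+
+\para
+The following small editorial helpers (not part of the original analysis) evaluate the
+recurrences just derived numerically, in Python inside a~source comment: the heap limits
+$t_i$ explode as a~tower of exponentials, so already for graphs with a~million edges only
+a~handful of phases occur, matching the function $\beta(m,n)$ of the next theorem. The
+identifiers are invented for the example.
+
+% Illustrative sketch only (Python).
+% import math
+%
+% def beta(m, n):
+%     """beta(m, n) = min { i : log^(i) n <= m/n }, the iterated-logarithm bound."""
+%     i, x = 0, float(n)
+%     while x > m / n:
+%         x = math.log2(x)
+%         i += 1
+%     return i
+%
+% def phase_count(m, n):
+%     """Counts the phases by following the pessimistic recurrence of the proof:
+%     t_i = 2^(2m/n_i), n_{i+1} <= 2m/t_i, and a phase with t_i >= n is final,
+%     because its heap can hold the whole graph."""
+%     phases, ni = 0, float(n)
+%     while ni > 1:
+%         phases += 1
+%         exponent = 2 * m / ni                # t_i = 2 ** exponent
+%         if exponent >= math.log2(n):         # t_i >= n: final phase
+%             break
+%         ni = 2 * m / 2 ** exponent
+%     return phases
+%
+% # For example, phase_count(10**6, 10**6) == 4 and beta(10**6, 10**6) == 5.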
+
+\thm\id{itjarthm}%
+The Iterated Jarn\'\i{}k's algorithm finds the MST of the input graph in time
+$\O(m\times\beta(m,n))$, where $\beta(m,n):=\min\{ i: \log^{(i)}n \le m/n \}$.
+
+\proof
+Phases are finite and in every phase at least one edge is contracted, so the outer
+loop is eventually terminated. The resulting subgraph~$T$ is equal to $\mst(G)$, because each $F_i$ is
+a~subgraph of~$\mst(G_i)$ and the $F_i$'s are glued together according to the Contraction
+lemma (\ref{contlemma}).
+
+Let us bound the sizes of the graphs processed in individual phases. As the vertices
+of~$G_{i+1}$ correspond to the components of~$F_i$, by the previous lemma $n_{i+1}\le
+2m_i/t_i$. Then $t_{i+1} = 2^{2m/n_{i+1}} \ge 2^{2m/(2m_i/t_i)} = 2^{(m/m_i)\cdot t_i} \ge 2^{t_i}$,
+therefore:
+$$
+\left. \vcenter{\hbox{$\displaystyle t_i \ge 2^{2^{\scriptstyle 2^{\scriptstyle\vdots^{\scriptstyle m/n}}}} $}}\;\right\}
+\,\hbox{a~tower of~$i$ exponentials.}
+$$
+As soon as~$t_i\ge n$, the $i$-th phase must be final, because at that time
+there is enough space in the heap to process the whole graph. So~there are
+at most~$\beta(m,n)$ phases and we already know (Lemma~\ref{ijphase}) that each
+phase runs in linear time.
+\qed
+
+\cor
+The Iterated Jarn\'\i{}k's algorithm runs in time $\O(m\log^* n)$.
+
+\proof
+$\beta(m,n) \le \beta(1,n) = \log^* n$.
+\qed
+
+\cor
+When we use the Iterated Jarn\'\i{}k's algorithm on graphs with edge density
+at least~$\log^{(k)} n$ for some $k\in{\bb N}^+$, it runs in time~$\O(km)$.
+
+\proof
+If $m/n \ge \log^{(k)} n$, then $\beta(m,n)\le k$.
+\qed
+
+\rem
+Gabow et al.~\cite{gabow:mst} have shown how to speed this algorithm up to~$\O(m\log\beta(m,n))$.
+They split the adjacency lists of the vertices into small buckets, keep each bucket
+sorted and consider only the lightest edge in each bucket until it is removed.
+The mechanics of the algorithm are complex and there are a~lot of technical details
+which need careful handling, so we omit the description of this algorithm.
+
+\FIXME{Reference to Chazelle.}
+
+\FIXME{Reference to Q-Heaps.}
+
+%--------------------------------------------------------------------------------
+
+\section{Verification of minimality}
+
+
+\endpart
diff --git a/macros.tex b/macros.tex
index 2a99ea1..f04fa00 100644
--- a/macros.tex
+++ b/macros.tex
@@ -284,6 +284,7 @@
 \advance\chapcount by 1
 \seccount=0
 \thmcount=0
+\footcnt=0
 \edef\currentid{\the\chapcount}
 \leftline{\chapfont\currentid. #1}
 \bigskip
diff --git a/mst.tex b/mst.tex
index 7014cfa..10d89de 100644
--- a/mst.tex
+++ b/mst.tex
@@ -53,7 +53,7 @@ also presents several new ones.
%-------------------------------------------------------------------------------- -\section{Basic Properties} +\section{Basic properties} In this section, we will examine the basic properties of spanning trees and prove several important theorems to base the algorithms upon. We will follow the theory @@ -217,7 +217,7 @@ is the MST of~$G_1$ if and only if $\pi[T]$ is the MST of~$G_2$. %-------------------------------------------------------------------------------- -\section{The Red-Blue Meta-Algorithm} +\section{The Red-Blue meta-algorithm} Most MST algorithms can be described as special cases of the following procedure (again following \cite{tarjan:dsna}): @@ -473,7 +473,7 @@ Follows from the above analysis. %-------------------------------------------------------------------------------- -\section{Contractive algorithms} +\section{Contractive algorithms}\id{contalg}% While the classical algorithms are based on growing suitable trees, they can be also reformulated in terms of edge contraction. Instead of keeping @@ -679,448 +679,4 @@ to finish on the remaining complete graph. Each iteration runs on a graph with $ edges as every $H_{a,k}$ contains a complete graph on~$a$ vertices. \qed -%-------------------------------------------------------------------------------- - -\section{Minor-closed graph classes} - -The contracting algorithm given in the previous section has been found to perform -well on planar graphs, but in the general case its time complexity was not linear. -Can we find any broader class of graphs where the algorithm is still efficient? -The right context turns out to be the minor-closed graph classes, which are -closed under contractions and have bounded density. - -\defn -A~graph~$H$ is a \df{minor} of a~graph~$G$ iff it can be obtained -from a subgraph of~$G$ by a sequence of simple graph contractions (see \ref{simpcont}). - -\defn -A~class~$\cal C$ of graphs is \df{minor-closed}, when for every $G\in\cal C$ and -its every minor~$H$, the graph~$H$ lies in~$\cal C$ as well. A~class~$\cal C$ is called -\df{non-trivial} if at least one graph lies in~$\cal C$ and at least one lies outside~$\cal C$. - -\example -Non-trivial minor-closed classes include planar graphs and more generally graphs -embeddable in any fixed surface. Many nice properties of planar graphs extend -to these classes, too, most notably the linearity of the number of edges. - -\defn\id{density}% -Let $\cal C$ be a class of graphs. We define its \df{edge density} $\varrho(\cal C)$ -to be the infimum of all~$\varrho$'s such that $m(G) \le \varrho\cdot n(G)$ -holds for every $G\in\cal C$. - -\thmn{Density of minor-closed classes} -A~minor-closed class of graphs has finite edge density if and only if it is -a non-trivial class. - -\proof -See Theorem 6.1 in \cite{nesetril:minors}, which also lists some other equivalent conditions. -\qed - -\thmn{MST on minor-closed classes \cite{mm:mst}}\id{mstmcc}% -For any fixed non-trivial minor-closed class~$\cal C$ of graphs, Algorithm \ref{contbor} finds -the MST of any graph in this class in time $\O(n)$. (The constant hidden in the~$\O$ -depends on the class.) - -\proof -Following the proof for planar graphs (\ref{planarbor}), we denote the graph considered -by the algorithm at the beginning of the $i$-th iteration by~$G_i$ and its number of vertices -and edges by $n_i$ and $m_i$ respectively. Again the $i$-th phase runs in time $\O(m_i)$ -and $n_i \le n/2^i$, so it remains to show a linear bound for the $m_i$'s. 
- -Since each $G_i$ is produced from~$G_{i-1}$ by a sequence of edge contractions, -all $G_i$'s are minors of~$G$.\foot{Technically, these are multigraph contractions, -but followed by flattening, so they are equivalent to contractions on simple graphs.} -So they also belong to~$\cal C$ and by the previous theorem $m_i\le \varrho({\cal C})\cdot n_i$. -\qed - -\rem\id{nobatch}% -The contractive algorithm uses ``batch processing'' to perform many contractions -in a single step. It is also possible to perform contractions one edge at a~time, -batching only the flattenings. A~contraction of an edge~$uv$ can be done -in time~$\O(\deg(u))$ by removing all edges incident with~$u$ and inserting them back -with $u$ replaced by~$v$. Therefore we need to find a lot of vertices with small -degrees. The following lemma shows that this is always the case in minor-closed -classes. - -\lemman{Low-degree vertices}\id{lowdeg}% -Let $\cal C$ be a graph class with density~$\varrho$ and $G\in\cal C$ a~graph -with $n$~vertices. Then at least $n/2$ vertices of~$G$ have degree at most~$4\varrho$. - -\proof -Assume the contrary: Let there be at least $n/2$ vertices with degree -greater than~$4\varrho$. Then $\sum_v \deg(v) > n/2 -\cdot 4\varrho = 2\varrho n$, which is in contradiction with the number -of edges being at most $\varrho n$. -\qed - -\rem -The proof can be also viewed -probabilistically: let $X$ be the degree of a vertex of~$G$ chosen uniformly at -random. Then ${\bb E}X \le 2\varrho$, hence by the Markov's inequality -${\rm Pr}[X > 4\varrho] < 1/2$, so for at least $n/2$ vertices~$v$ we have -$\deg(v)\le 4\varrho$. - -\algn{Local Bor\o{u}vka's Algorithm \cite{mm:mst}}% -\algo -\algin A~graph~$G$ with an edge comparison oracle and a~parameter~$t\in{\bb N}$. -\:$T\=\emptyset$. -\:$\ell(e)\=e$ for all edges~$e$. -\:While $n(G)>1$: -\::While there exists a~vertex~$v$ such that $\deg(v)\le t$: -\:::Select the lightest edge~$e$ incident with~$v$. -\:::Contract~$G$ along~$e$. -\:::$T\=T + \ell(e)$. -\::Flatten $G$, removing parallel edges and loops. -\algout Minimum spanning tree~$T$. -\endalgo - -\thm -When $\cal C$ is a minor-closed class of graphs with density~$\varrho$, the -Local Bor\o{u}vka's Algorithm with the parameter~$t$ set to~$4\varrho$ -finds the MST of any graph from this class in time $\O(n)$. (The constant -in the~$\O$ depends on~the class.) - -\proof -Let us denote by $G_i$, $n_i$ and $m_i$ the graph considered by the -algorithm at the beginning of the $i$-th iteration of the outer loop, -and the number of its vertices and edges respectively. As in the proof -of the previous algorithm (\ref{mstmcc}), we observe that all the $G_i$'s -are minors of the graph~$G$ given as the input. - -For the choice $t=4\varrho$, the Lemma on low-degree vertices (\ref{lowdeg}) -guarantees that at least $n_i/2$ edges get selected in the $i$-th iteration. -Hence at least a half of the vertices participates in contractions, so -$n_i\le 3/4\cdot n_{i-1}$. Therefore $n_i\le n\cdot (3/4)^i$ and the algorithm terminates -after $\O(\log n)$ iterations. - -Each selected edge belongs to $\mst(G)$, because it is the lightest edge of -the trivial cut $\delta(v)$ (see the Blue Rule in \ref{rbma}). -The steps 6 and~7 therefore correspond to the operation -described by the Lemma on contraction of MST edges (\ref{contlemma}) and when -the algorithm stops, $T$~is indeed the minimum spanning tree. - -It remains to analyse the time complexity of the algorithm. 
Since $G_i\in{\cal C}$, we have -$m_i\le \varrho n_i \le \varrho n/2^i$. -We will show that the $i$-th iteration is carried out in time $\O(m_i)$. -Steps 5 and~6 run in time $\O(\deg(v))=\O(t)$ for each~$v$, so summed -over all $v$'s they take $\O(tn_i)$, which is linear for a fixed class~$\cal C$. -Flattening takes $\O(m_i)$, as already noted in the analysis of the Contracting -Bor\o{u}vka's Algorithm (see \ref{contiter}). - -The whole algorithm therefore runs in time $\O(\sum_i m_i) = \O(\sum_i n/2^i) = \O(n)$. -\qed - -\rem -For planar graphs, we can get a sharper version of the low-degree lemma, -showing that the algorithm works with $t=8$ as well (we had $t=12$ as -$\varrho=3$). While this does not change the asymptotic time complexity -of the algorithm, the constant-factor speedup can still delight the hearts of -its practical users. - -\lemman{Low-degree vertices in planar graphs}% -Let $G$ be a planar graph with $n$~vertices. Then at least $n/2$ vertices of~$v$ -have degree at most~8. - -\proof -It suffices to show that the lemma holds for triangulations (if there -are any edges missing, the situation can only get better) with at -least 3 vertices. Since $G$ is planar, $\sum_v \deg(v) < 6n$. -The numbers $d(v):=\deg(v)-3$ are non-negative and $\sum_v d(v) < 3n$, -so by the same argument as in the proof of the general lemma, for at least $n/2$ -vertices~$v$ it holds that $d(v) < 6$, hence $\deg(v) \le 8$. -\qed - -\rem\id{hexa}% -The constant~8 in the previous lemma is the best we can have. -Consider a $k\times k$ triangular grid. It has $n=k^2$ vertices, $\O(k)$ of them -lie on the outer face and have degrees at most~6, the remaining $n-\O(k)$ interior -vertices have degree exactly~6. Therefore the number of faces~$f$ is $6/3\cdot n=2n$, -ignoring terms of order $\O(k)$. All interior triangles can be properly colored with -two colors, black and white. Now add a~new vertex inside each white face and connect -it to all three vertices on the boundary of that face. This adds $f/2 \approx n$ -vertices of degree~3 and it increases the degrees of the original $\approx n$ interior -vertices to~9, therefore about a half of the vertices of the new planar graph -has degree~9. - -\figure{hexangle.eps}{\epsfxsize}{The construction from Remark~\ref{hexa}} - -%-------------------------------------------------------------------------------- - -\section{Using Fibonacci heaps} -\id{fibonacci} - -We have seen that the Jarn\'\i{}k's Algorithm \ref{jarnik} runs in $\O(m\log n)$ time -(and this bound can be easily shown to be tight). Fredman and Tarjan have shown a~faster -implementation in~\cite{ft:fibonacci} using their Fibonacci heaps. In this section, -we convey their results and we show several interesting consequences. - -The previous implementation of the algorithm used a binary heap to store all neighboring -edges of the cut~$\delta(T)$. Instead of that, we will remember the vertices adjacent -to~$T$ and for each such vertex~$v$ we will keep the lightest edge~$uv$ such that $u$~lies -in~$T$. We will call these edges \df{active edges} and keep them in a~heap, ordered by weight. - -When we want to extend~$T$ by the lightest edge of~$\delta(T)$, it is sufficient to -find the lightest active edge~$uv$ and add this edge to~$T$ together with a new vertex~$v$. -Then we have to update the active edges as follows. The edge~$uv$ has just ceased to -be active. We scan all neighbors~$w$ of the vertex~$v$. When $w$~is in~$T$, no action -is needed. 
If $w$~is outside~$T$ and it was not adjacent to~$T$ (there is no active edge -remembered for it so far), we set the edge~$vw$ as active. Otherwise we check the existing -active edge for~$w$ and replace it by~$vw$ if the new edge is lighter. - -The following algorithm shows how these operations translate to insertions, decreases -and deletions on the heap. - -\algn{Jarn\'\i{}k with active edges; Fredman and Tarjan \cite{ft:fibonacci}}\id{jarniktwo}% -\algo -\algin A~graph~$G$ with an edge comparison oracle. -\:$v_0\=$ an~arbitrary vertex of~$G$. -\:$T\=$ a tree containing just the vertex~$v_0$. -\:$H\=$ a~heap of active edges stored as pairs $(u,v)$ where $u\in T,v\not\in T$, ordered by the weights $w(vw)$, initially empty. -\:$A\=$ an~auxiliary array mapping vertices outside~$T$ to their active edges in the heap; initially all elements undefined. -\:\ all edges incident with~$v_0$ to~$H$ and update~$A$ accordingly. -\:While $H$ is not empty: -\::$(u,v)\=\(H)$. -\::$T\=T+uv$. -\::For all edges $vw$ such that $w\not\in T$: -\:::If there exists an~active edge~$A(w)$: -\::::If $vw$ is lighter than~$A(w)$, \ $A(w)$ to~$(v,w)$ in~$H$. -\:::If there is no such edge, then \ $(v,w)$ to~$H$ and set~$A(w)$. -\algout Minimum spanning tree~$T$. -\endalgo - -\thmn{Fibonacci heaps} The~Fibonacci heap performs the following operations -with the indicated amortized time complexities: -\itemize\ibull -\:\ (insertion of a~new element) in $\O(1)$, -\:\ (decreasing value of an~existing element) in $\O(1)$, -\:\ (merging of two heaps into one) in $\O(1)$, -\:\ (deletion of the minimal element) in $\O(\log n)$, -\:\ (deletion of an~arbitrary element) in $\O(\log n)$, -\endlist -\>where $n$ is the maximum number of elements present in the heap at the time of -the operation. - -\proof -See Fredman and Tarjan \cite{ft:fibonacci} for both the description of the Fibonacci -heap and the proof of this theorem. -\qed - -\thm -Algorithm~\ref{jarniktwo} with a~Fibonacci heap finds the MST of the input graph in time~$\O(m+n\log n)$. - -\proof -The algorithm always stops, because every edge enters the heap~$H$ at most once. -As it selects exactly the same edges as the original Jarn\'\i{}k's algorithm, -it gives the correct answer. - -The time complexity is $\O(m)$ plus the cost of the heap operations. The algorithm -performs at most one \ or \ per edge and exactly one \ -per vertex and there are at most $n$ elements in the heap at any given time, -so by the previous theorem the operations take $\O(m+n\log n)$ time in total. -\qed - -\cor -For graphs with edge density at least $\log n$, this algorithm runs in linear time. - -\rem -We can consider using other kinds of heaps which have the property that inserts -and decreases are faster than deletes. Of course, the Fibonacci heaps are asymptotically -optimal (by the standard $\Omega(n\log n)$ lower bound on sorting by comparisons, see -for example \cite{clrs}), so the other data structures can improve only -multiplicative constants or offer an~easier implementation. - -A~nice example is a~\df{$d$-regular heap} --- a~variant of the usual binary heap -in the form of a~complete $d$-regular tree. \, \ and other operations -involving bubbling the values up spend $\O(1)$ time at a~single level, so they run -in~$\O(\log_d n)$ time. \ and \ require bubbling down, which incurs -comparison with all~$d$ sons at every level, so they run in~$\O(d\log_d n)$. 
-With this structure, the time complexity of the whole algorithm -is $\O(nd\log_d n + m\log_d n)$, which suggests setting $d=m/n$, giving $\O(m\log_{m/n}n)$. -This is still linear for graphs with density at~least~$n^{1+\varepsilon}$. - -Another possibility is to use the 2-3-heaps \cite{takaoka:twothree} or Trinomial -heaps \cite{takaoka:trinomial}. Both have the same asymptotic complexity as Fibonacci -heaps (the latter even in worst case, but it does not matter here) and their -authors claim implementation advantages. - -\FIXME{Mention Thorup's Fibonacci-like heaps for integers?} - -\para -As we already noted, the improved Jarn\'\i{}k's algorithm runs in linear time -for sufficiently dense graphs. In some cases, it is useful to combine it with -another MST algorithm, which identifies a~part of the MST edges and contracts -the graph to increase its density. For example, we can perform several -iterations of the Contractive Bor\o{u}vka's algorithm and find the rest of the -MST by the above version of Jarn\'\i{}k's algorithm. - -\algn{Mixed Bor\o{u}vka-Jarn\'\i{}k} -\algo -\algin A~graph~$G$ with an edge comparison oracle. -\:Run $\log\log n$ iterations of the Contractive Bor\o{u}vka's algorithm (\ref{contbor}), - getting a~MST~$T_1$. -\:Run the Jarn\'\i{}k's algorithm with active edges (\ref{jarniktwo}) on the resulting - graph, getting a~MST~$T_2$. -\:Combine $T_1$ and~$T_2$ to~$T$ as in the Contraction lemma (\ref{contlemma}). -\algout Minimum spanning tree~$T$. -\endalgo - -\thm -The Mixed Bor\o{u}vka-Jarn\'\i{}k algorithm finds the MST of the input graph in time $\O(m\log\log n)$. - -\proof -Correctness follows from the Contraction lemma and from the proofs of correctness of the respective algorithms. -As~for time complexity: The first step takes $\O(m\log\log n)$ time -(by Lemma~\ref{contiter}) and it gradually contracts~$G$ to a~graph~$G'$ of size -$m'\le m$ and $n'\le n/\log n$. The second step then runs in time $\O(m'+n'\log n') = \O(m)$ -and both trees can be combined in linear time, too. -\qed - -\para -Actually, there is a~much better choice of the algorithms to combine: use the -improved Jarn\'\i{}k's algorithm multiple times, each time stopping after a~while. -The good choice of the stopping condition is to place a~limit on the size of the heap. -Start with an~arbitrary vertex, grow the tree as usually and once the heap gets too large, -conserve the current tree and start with a~different vertex and an~empty heap. When this -process runs out of vertices, it has identified a~sub-forest of the MST, so we can -contract the graph along the edges of~this forest and iterate. - -\algn{Iterated Jarn\'\i{}k; Fredman and Tarjan \cite{ft:fibonacci}} -\algo -\algin A~graph~$G$ with an edge comparison oracle. -\:$T\=\emptyset$. \cmt{edges of the MST} -\:$\ell(e)\=e$ for all edges~$e$. \cmt{edge labels as usually} -\:$m_0\=m$. -\:While $n>1$: \cmt{We will call iterations of this loop \df{phases}.} -\::$F\=\emptyset$. \cmt{forest built in the current phase} -\::$t\=2^{2m_0/n}$. \cmt{the limit on heap size} -\::While there is a~vertex $v_0\not\in F$: -\:::Run the improved Jarn\'\i{}k's algorithm (\ref{jarniktwo}) from~$v_0$, stop when: -\::::all vertices have been processed, or -\::::a~vertex of~$F$ has been added to the tree, or -\::::the heap had more than~$t$ elements. -\:::Denote the resulting tree~$R$. -\:::$F\=F\cup R$. -\::$T\=T\cup \ell[F]$. \cmt{Remember MST edges found in this phase.} -\::Contract~$G$ along all edges of~$F$ and flatten it. -\algout Minimum spanning tree~$T$. 
-\endalgo - -\nota -For analysis of the algorithm, let us denote the graph entering the $i$-th -phase by~$G_i$ and likewise with the other parameters. The trees from which -$F_i$~has been constructed will be called $R_i^1, \ldots, R_i^{z_i}$. The -non-indexed $G$, $m$ and~$n$ will correspond to the graph given as~input. - -\para -However the choice of the parameter~$t$ can seem mysterious, the following -lemma makes the reason clear: - -\lemma\id{ijphase}% -The $i$-th phase of the Iterated Jarn\'\i{}k's algorithm runs in time~$\O(m)$. - -\proof -During the phase, the heap always contains at most~$t_i$ elements, so it takes -time~$\O(\log t_i)=\O(m/n_i)$ to delete an~element from the heap. The trees~$R_i^j$ -are disjoint, so there are at most~$n_i$ \'s over the course of the phase. -Each edge is considered at most twice (once per its endpoint), so the number -of the other heap operations is~$\O(m_i)$. Together, it equals $\O(m_i + n_i\log t_i) = \O(m_i+m) = \O(m)$. -\qed - -\lemma -Unless the $i$-th phase is final, the forest~$F_i$ consists of at most $2m_i/t_i$ trees. - -\proof -As every edge of~$G_i$ is incident with at most two trees of~$F_i$, it is sufficient -to establish that there are at least~$t_i$ edges incident with the vertices of every -such tree~(*). - -The forest~$F_i$ evolves by additions of the trees~$R_i^j$. Let us consider the possibilities -how the algorithm could have stopped growing the tree~$R_i^j$: -\itemize\ibull -\:the heap had more than~$t_i$ elements (step~10): since the elements stored in the heap correspond - to some of the edges incident with vertices of~$R_i^j$, the condition~(*) is fulfilled; -\:the algorithm just added a~vertex of~$F_i$ to~$R_i^j$ (step~9): in this case, an~existing - tree of~$F_i$ is extended, so the number of edges incident with it cannot decrease;\foot{% - To make this true, we counted the edges incident with the \em{vertices} of the tree - instead of edges incident with the tree itself, because we needed the tree edges - to be counted as well.} -\:all vertices have been processed (step~8): this can happen only in the final phase. -\qeditem -\endlist - -\thm\id{itjarthm}% -The Iterated Jarn\'\i{}k's algorithm finds the MST of the input graph in time -$\O(m\timesbeta(m,n))$, where $\beta(m,n):=\min\{ i: \log^{(i)}n < m/n \}$. - -\proof -Phases are finite and in every phase at least one edge is contracted, so the outer -loop is eventually terminated. The resulting subgraph~$T$ is equal to $\mst(G)$, because each $F_i$ is -a~subgraph of~$\mst(G_i)$ and the $F_i$'s are glued together according to the Contraction -lemma (\ref{contlemma}). - -Let us bound the sizes of the graphs processed in individual phases. As the vertices -of~$G_{i+1}$ correspond to the components of~$F_i$, by the previous lemma $n_{i+1}\le -2m_i/t_i$. Then $t_{i+1} = 2^{2m/n_{i+1}} \ge 2^{2m/(2m_i/t_i)} = 2^{(m/m_i)\cdot t_i} \ge 2^{t_i}$, -therefore: -$$ -\left. \vcenter{\hbox{$\displaystyle t_i \ge 2^{2^{\scriptstyle 2^{\scriptstyle\vdots^{\scriptstyle m/n}}}} $}}\;\right\} -\,\hbox{a~tower of~$i$ exponentials.} -$$ -As soon as~$t_i\ge n$, the $i$-th phase must be final, because at that time -there is enough space in the heap to process the whole graph. So~there are -at most~$\beta(m,n)$ phases and we already know (Lemma~\ref{ijphase}) that each -phase runs in linear time. -\qed - -\cor -The Iterated Jarn\'\i{}k's algorithm runs in time $\O(m\log^* n)$. - -\proof -$\beta(m,n) \le \beta(1,n) = \log^* n$. 
-\qed - -\cor -When we use the Iterated Jarn\'\i{}k's algorithm on graphs with edge density -at least~$\log^{(k)} n$ for some $k\in{\bb N}^+$, it runs in time~$\O(km)$. - -\proof -If $m/n \ge \log^{(k)} n$, then $\beta(m,n)\le k$. -\qed - -\rem -Gabow et al.~\cite{gabow:mst} have shown how to speed this algorithm up to~$\O(m\log\beta(m,n))$. -They split the adjacency lists of the vertices to small buckets, keep each bucket -sorted and consider only the lightest edge in each bucket until it is removed. -The mechanics of the algorithm is complex and there is a~lot of technical details -which need careful handling, so we omit the description of this algorithm. - -\FIXME{Reference to Chazelle.} - -\FIXME{Reference to Q-Heaps.} - -%-------------------------------------------------------------------------------- - -\section{Verification of minimality} - - -\section{What we ought to cite} - -Eisner's tutorial \cite{eisner:tutorial} - -\cite{pettie:onlineverify} online lower bound - -% use \para -% G has to be connected, so m=O(n) -% mention Steiner trees -% mention matroids -% sorted weights -% \O(...) as a set? -% impedance mismatch in terminology: contraction of G along e vs. contraction of e. -% use \delta(X) notation -% mention disconnected graphs -% unify use of n(G) vs. n -% Euclidean MST -% Some algorithms (most notably Fredman-Tarjan) do not need flattening -% more references to RAM - \endpart diff --git a/ram.tex b/ram.tex index c798933..987b961 100644 --- a/ram.tex +++ b/ram.tex @@ -196,7 +196,7 @@ to by the nodes of the tree. Both direct and indirect accesses to the memory can therefore be done in~$\O(W)$ time. Use standard algorithms for arithmetic on big numbers: $\O(W)$ per operation except for multiplication, division and remainders which take $\O(W^2)$.\foot{We could use more efficient arithmetic -algorithms, of course, but the quadratic bound it good enough for our purposes.} +algorithms, but the quadratic bound it good enough for our purposes.} \qed \FIXME{Add references.} diff --git a/saga.tex b/saga.tex index 752d01e..1661e9c 100644 --- a/saga.tex +++ b/saga.tex @@ -4,6 +4,7 @@ \input cover.tex \input mst.tex \input ram.tex +\input adv.tex \input notation.tex -- 2.39.2