From: Martin Mares
Date: Sun, 4 May 2008 12:29:24 +0000 (+0200)
Subject: Final typesetting: Chapters 1 and 2.
X-Git-Tag: printed~9
X-Git-Url: http://mj.ucw.cz/gitweb/?a=commitdiff_plain;h=c5b57b993ede5af08f88c3dd033021e73ede7e3c;p=saga.git

Final typesetting: Chapters 1 and 2.
---

diff --git a/mst.tex b/mst.tex
index e36ce9d..9729a06 100644
--- a/mst.tex
+++ b/mst.tex
@@ -147,7 +147,7 @@
 Let us consider an edge $f\in T'\setminus T^*$. We want to show that $f$ is not
 $T^*$-light, i.e., that it is heavier than all edges on $T^*[f]$. The path $T^*[f]$ is
 either identical to the original path $T[f]$ (if $e\not\in T[f]$) or to $T[f] \symdiff C$,
 where $C$ is the cycle $T[e']+e'$. The former case is trivial, in the latter we have
-$w(f)\ge w(e')$ due to the choice of $e'$ and all other edges on~$C$ are lighter
+$w(f)\ge w(e')$ due to the choice of~$e'$ and all other edges on~$C$ are lighter
 than~$e'$ as $e'$ was not $T$-light.
 \qed
@@ -183,6 +183,7 @@
 edge exchanges going from $T_1$ to $T_2$. As all edge weights are distinct,
 these edge exchanges must be in fact strictly increasing. On the other hand,
 we know that $w(T_1)=w(T_2)$, so the exchange sequence must be empty and indeed
 $T_1$ and $T_2$ must be identical.
+\looseness=1 %%HACK%%
 \qed

 \nota\id{mstnota}%
@@ -404,13 +405,13 @@
 application of the Blue rule. We stop when the blue subgraph is connected, so we
 do not need the Red rule to explicitly exclude edges.
 It remains to show that adding the edges simultaneously does not
-produce a cycle. Consider the first iteration of the algorithm where $T$ contains a~cycle~$C$. Without
+produce a~cycle. Consider the first iteration of the algorithm where $T$ contains a~cycle~$C$. Without
 loss of generality we can assume that:
 $$C=T_1[u_1,v_1]\,v_1u_2\,T_2[u_2,v_2]\,v_2u_3\,T_3[u_3,v_3]\, \ldots \,T_k[u_k,v_k]\,v_ku_1.$$
 Each component $T_i$ has chosen its lightest incident edge~$e_i$ as either the edge $v_iu_{i+1}$
 or $v_{i-1}u_i$ (indexing cyclically). Suppose that $e_1=v_1u_2$ (otherwise we reverse the orientation
 of the cycle). Then $e_2=v_2u_3$ and $w(e_2)<w(e_1)$, because $T_2$ chose~$e_2$ even though
 $e_1$ is also incident with it. Repeating this argument around the cycle, we keep
-getting $w(e_1)>w(e_2)>\ldots>w(e_k)>w(e_1)$, which is a contradiction.
+getting $w(e_1)>w(e_2)>\ldots>w(e_k)>w(e_1)$, which is a~contradiction.
 (Note that distinctness of edge weights was crucial here.)
 \qed
@@ -686,7 +687,7 @@
 algorithms as well. The following lemma shows the gist:

 \lemman{Contraction of MST edges}\id{contlemma}%
 Let $G$ be a weighted graph, $e$~an arbitrary edge of~$\mst(G)$, $G/e$ the multigraph
 produced by contracting~$e$ in~$G$, and $\pi$ the bijection between edges of~$G-e$ and
-their counterparts in~$G/e$. Then: $$\mst(G) = \pi^{-1}[\mst(G/e)] + e.$$
+their counterparts in~$G/e$. Then $\mst(G) = \pi^{-1}[\mst(G/e)] + e.$

 \proof
 % We seem not to need this lemma for multigraphs...
@@ -740,7 +741,7 @@
 have arbitrary weights that are heavier than the edges of all the distractors.

 \figure{hedgehog.eps}{\epsfxsize}{A~hedgehog $H_{5,2}$ (quills bent to fit in the picture)}

 \lemma
-A~single iteration of the contractive algorithm reduces~$H_{a,k}$ to a graph isomorphic with $H_{a,k-1}$.
+A~single iteration of the contractive algorithm reduces~$H_{a,k}$ to a~graph isomorphic with $H_{a,k-1}$.

 \proof
 Each vertex is incident with an edge of some distractor, so the algorithm does not select
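
The simultaneous edge selection analysed in the mst.tex hunk around line 405 above can be tried out in code. The following is a minimal, self-contained C sketch, not taken from the thesis: the names boruvka_step and find_root are ours, and a union-find forest stands in for the explicit contractions. Each component picks its lightest incident edge and all picked edges are added at once; with distinct weights this never closes a cycle, which is what the proof argues.

/* A minimal sketch (not from the thesis) of one round of the contractive
 * (Boruvka-style) algorithm discussed above: every component selects its
 * lightest incident edge and all selected edges are added simultaneously. */
#include <stdio.h>

typedef struct { int u, v; double w; } Edge;

#define MAXV 64

static int parent[MAXV];               /* union-find forest over the vertices */

static int find_root(int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];  /* path halving */
        x = parent[x];
    }
    return x;
}

/* One contraction round: returns the number of edges added to the forest and
 * stores their indices in chosen[]. */
static int boruvka_step(int n, int m, const Edge *e, int *chosen) {
    int best[MAXV];                     /* lightest incident edge per component root */
    for (int v = 0; v < n; v++) best[v] = -1;

    for (int i = 0; i < m; i++) {
        int ru = find_root(e[i].u), rv = find_root(e[i].v);
        if (ru == rv) continue;         /* edge inside one component: ignore */
        if (best[ru] < 0 || e[i].w < e[best[ru]].w) best[ru] = i;
        if (best[rv] < 0 || e[i].w < e[best[rv]].w) best[rv] = i;
    }

    int cnt = 0;
    for (int v = 0; v < n; v++) {
        int i = best[v];
        if (i < 0) continue;
        int ru = find_root(e[i].u), rv = find_root(e[i].v);
        if (ru != rv) {                 /* the same edge may be picked by both ends */
            parent[ru] = rv;            /* contract: merge the two components */
            chosen[cnt++] = i;
        }
    }
    return cnt;
}

int main(void) {
    /* A small example graph with distinct weights. */
    Edge e[] = { {0,1,1.0}, {1,2,4.0}, {2,3,2.0}, {3,0,5.0}, {1,3,3.0} };
    int chosen[MAXV];
    for (int v = 0; v < 4; v++) parent[v] = v;
    int cnt = boruvka_step(4, 5, e, chosen);
    for (int i = 0; i < cnt; i++)
        printf("added edge %d-%d (weight %.1f)\n",
               e[chosen[i]].u, e[chosen[i]].v, e[chosen[i]].w);
    return 0;
}

On the small graph in main(), the first round adds the edges 0-1 and 2-3, which together form the set of lightest incident edges of the four initial single-vertex components.
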
diff --git a/ram.tex b/ram.tex
index 815aefc..c3fcf1f 100644
--- a/ram.tex
+++ b/ram.tex
@@ -367,7 +367,7 @@
 be performed on the Pointer Machine in time $\O(n + \sum_i \vert S_i \vert)$.
 The first linear-time algorithm that partitions all subtrees to isomorphism
 equivalence classes is probably due to Zemlayachenko \cite{zemlay:treeiso}, but it
 lacks many details. Dinitz et al.~\cite{dinitz:treeiso} have recast this algorithm in modern
-terminology and filled the gaps. Our algorithm is easier to formulate,
+terminology and filled the gaps. Our algorithm is easier to formulate than those,
 because it replaces the need for auxiliary data structures by more elaborate
 bucket sorting.
@@ -420,7 +420,8 @@
 of representing the edge weights as labels, unless there is only a~fixed amount
 of possible values.
 As in the case of tree decompositions, we would like to identify the equivalent subgraphs
-and process only a~single instance from each equivalence class. The obstacle is that
+and process only a~single instance from each equivalence class. We need to be careful
+with the definition of the equivalence classes, because
 graph isomorphism is known to be computationally hard (it is one of the few
 problems that are neither known to lie in~$\rm P$ nor to be $\rm NP$-complete;
 see Arvind and Kurur \cite{arvind:isomorph} for recent results on its complexity).
@@ -492,6 +493,7 @@
 redirect them to the actual input vertices, which we can do by associating the output
 vertices that refer to an~input vertex with the corresponding places in the encoding
 of the input graph. This way, the whole output can be generated in time
 $\O(\Vert\C\Vert + \Vert{\cal G}\Vert)$.
+\looseness=1 %%HACK%%
 \qed

 \rem
@@ -600,8 +602,8 @@
 we will use $x$ and $\(x)$ interchangeably to avoid outbreak of symbols.

 \defn
 The \df{bitwise encoding} of a~vector ${\bf x}=(x_0,\ldots,x_{d-1})$ of~$b$-bit numbers
-is an~integer~$x$ such that $\(x)=\(x_{d-1})_b\0\(x_{d-2})_b\0\ldots\0\(x_0)_b$, i.e.,
-$x = \sum_i 2^{(b+1)i}\cdot x_i$. (We have interspersed the elements with \df{separator bits.})
+is an~integer~$x$ such that $\(x)=\(x_{d-1})_b\0\(x_{d-2})_b\0\ldots\0\(x_0)_b$. In other
+words, $x = \sum_i 2^{(b+1)i}\cdot x_i$. (We have interspersed the elements with \df{separator bits.})

 \notan{Vectors}\id{vecnota}%
 We will use boldface letters for vectors and the same letters in normal type
@@ -676,7 +678,7 @@ If we want to avoid division, we can use double-precision multiplication instead
 \[r_{d-1}] \dd \[r_2] \[r_1]
 \[s_d] \dd \[s_3] \[s_2] \[s_1] \cr
 }
-This way, we also get the vector of all partial sums:
+This way, we also get all partial sums:
 $s_k=\sum_{i=0}^{k-1}x_i$, $r_k=\sum_{i=k}^{d-1}x_i$.

 \:$\(x,y)$ --- Compare vectors ${\bf x}$ and~${\bf y}$ element-wise,
@@ -817,7 +819,7 @@
 both calls to \ we have a $\sqrt{w}$-bit number in a~$w$-bit word, so we can use
 the previous algorithm. The same trick of course applies to finding the \, too.
-The following algorithm shows the details.
+The following algorithm shows the details:

 \algn{LSB in linear workspace}
@@ -1214,7 +1216,7 @@
 time using the results of section \ref{bitsect} and Lemma \ref{qhxtract}.

 \paran{Combining Q-heaps}%
 We can also use the Q-heaps as building blocks of more complex structures like
 Atomic heaps and AF-heaps (see once again \cite{fw:transdich}). We will
-show a~simpler, but often sufficient construction, sometimes called the \df{Q-heap tree.}
+show a~simpler, but often sufficient construction, sometimes called the \df{\hbox{Q-heap} tree.}
 Suppose we have a~Q-heap of capacity~$k$ and a~parameter $d\in{\bb N}^+$.
 We can build a~balanced $k$-ary tree of depth~$d$ such that its leaves contain a~given set and every internal vertex keeps the minimum value in the subtree
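
The bitwise encoding defined in the ram.tex hunk around line 600 above, $x = \sum_i 2^{(b+1)i}\cdot x_i$ with a zero separator bit above each element, is easy to experiment with. Below is a small C sketch under assumed names (pack, separators and cmp_ge are ours, not operations from the thesis): it packs $b$-bit elements into one word and then uses the separator bits to compare two packed vectors element-wise in a constant number of word operations, one standard way such encodings are exploited.

/* A small experiment (not from ram.tex; the names are illustrative) with the
 * bitwise encoding of vectors: d elements of b bits each are packed into one
 * word as x = sum_i 2^((b+1)*i) * x_i, i.e. every element is followed by one
 * zero separator bit. */
#include <stdio.h>
#include <stdint.h>

#define B 7   /* bits per element */
#define D 8   /* number of elements; (B+1)*D must fit in 64 bits */

/* Pack x[0..D-1] (each < 2^B) into a single word. */
static uint64_t pack(const unsigned *x) {
    uint64_t r = 0;
    for (int i = 0; i < D; i++)
        r |= (uint64_t)x[i] << ((B + 1) * i);
    return r;
}

/* Mask with a one at every separator-bit position. */
static uint64_t separators(void) {
    uint64_t m = 0;
    for (int i = 0; i < D; i++)
        m |= (uint64_t)1 << ((B + 1) * i + B);
    return m;
}

/* Element-wise comparison: bit (B+1)*i of the result is 1 iff x_i >= y_i.
 * Setting the separator bit of every field of x and subtracting y cannot
 * borrow across fields, so each separator bit of the difference records
 * whether the field of x was at least as large as the field of y. */
static uint64_t cmp_ge(uint64_t x, uint64_t y) {
    uint64_t m = separators();
    return (((x | m) - y) & m) >> B;
}

int main(void) {
    unsigned a[D] = {5, 17, 99, 3, 42, 0, 127, 64};
    unsigned b[D] = {5, 20, 98, 4, 41, 1, 126, 64};
    uint64_t ge = cmp_ge(pack(a), pack(b));
    for (int i = 0; i < D; i++)
        printf("a[%d] >= b[%d]: %s\n", i, i,
               (ge >> ((B + 1) * i)) & 1 ? "yes" : "no");
    return 0;
}

With B = 7 and D = 8 the whole vector occupies exactly one 64-bit word; on a general $w$-bit word RAM one can pack up to $\lfloor w/(b+1)\rfloor$ elements per word in this way.
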