From: Martin Mares
Date: Mon, 21 Apr 2008 14:07:36 +0000 (+0200)
Subject: Spelling checker strikes again.
X-Git-Tag: printed~53
X-Git-Url: http://mj.ucw.cz/gitweb/?a=commitdiff_plain;h=36012b9518adaee9da67e653cbc01f541a1273bf;p=saga.git

Spelling checker strikes again.
---

diff --git a/adv.tex b/adv.tex
index d614c25..299da97 100644
--- a/adv.tex
+++ b/adv.tex
@@ -548,7 +548,7 @@ MST algorithm in Section~\ref{randmst}.
 
 MST verification has been studied by Koml\'os \cite{komlos:verify}, who has
 proven that $\O(m)$ edge comparisons are sufficient, but his algorithm needed
-superlinear time to find the edges to compare. Dixon, Rauch and Tarjan
+super-linear time to find the edges to compare. Dixon, Rauch and Tarjan
 have later shown in \cite{dixon:verify} that the overhead can be reduced to
 linear time on the RAM using preprocessing and table lookup on small subtrees.
 Later, King has given a~simpler algorithm in \cite{king:verifytwo}.
@@ -1069,7 +1069,7 @@ good, but it will soon turn out that when we take~$T$ as the MST of a~randomly s
 subgraph, only a~small expected number of edges remains.
 
 Selecting a~subgraph at random will unavoidably produce disconnected subgraphs
-at occassion, so we will drop the implicit assumption that all graphs are
+at occasion, so we will drop the implicit assumption that all graphs are
 connected for this section and we will always search for the minimum spanning forest.
 As we already noted (\ref{disconn}), with a~little bit of care our
 algorithms and theorems keep working.
@@ -1090,7 +1090,7 @@ Let us observe that we can obtain the forest~$F$ by running the Kruskal's algori
 by their weights and we start with an~empty forest~$F$. For each edge, we first
 flip a~biased coin (which gives heads with probability~$p$) and if it comes up
 tails, we discard the edge. Otherwise we perform a~single step of the Kruskal's
-algoritm: We check whether $F+e$ contains a~cycle. If it does, we discard~$e$, otherwise
+algorithm: We check whether $F+e$ contains a~cycle. If it does, we discard~$e$, otherwise
 we add~$e$ to~$F$. At the end, we have produced the subgraph~$H$ and its MSF~$F$.
 
 When we exchange the check for cycles with flipping the coin, we get an~equivalent
@@ -1239,7 +1239,7 @@ again follows the recursion tree and it involves applying the Chernoff bound
 
 We could also use a~slightly different formulation of the sampling lemma
 suggested by Chan \cite{chan:backward}. It changes the selection of the subgraph~$H$
 to choosing an~$mp$-edge subset of~$E(G)$ uniformly at random. The proof is then
-a~straightforward application of the backward analysis method. We however prefered
+a~straightforward application of the backward analysis method. We however preferred
 the Karger's original version, because generating a~random subset of a~given size
 requires an~unbounded number of random bits in the worst case.

diff --git a/dyn.tex b/dyn.tex
index c0d97e6..629b2b8 100644
--- a/dyn.tex
+++ b/dyn.tex
@@ -410,7 +410,7 @@ anyway.)
 }
 
 At the beginning, the graph contains no edges, so both invariants are trivially
-satistifed. Newly inserted edges can enter level~0, which cannot break I1 nor~I2.
+satisfied. Newly inserted edges can enter level~0, which cannot break I1 nor~I2.
 
 When we delete a~tree edge at level~$\ell$, we split a~tree~$T$ of~$F_\ell$ to two
 trees $T_1$ and~$T_2$. Without loss of generality, let us assume that $T_1$ is the

diff --git a/mst.tex b/mst.tex
index 63bd080..5db0463 100644
--- a/mst.tex
+++ b/mst.tex
@@ -313,7 +313,7 @@ the minimum spanning tree of the input graph.
 To prove that the procedure stops, let us notice that no edge is ever recolored,
 so we must run out of black edges after at most~$m$ steps.
 Recoloring to the same color is avoided by the conditions built in the rules, recoloring to
-a different color would mean that the an edge would be both inside and outside~$T_{min}$
+a different color would mean that the edge would be both inside and outside~$T_{min}$
 due to our Red and Blue lemmata.
 
 When no further rules can be applied, the Black lemma guarantees that all edges
@@ -397,7 +397,7 @@ We assign a label to each tree and we keep a mapping from vertices to the labels
 of the trees they belong to. We scan all edges, map their endpoints to the
 particular trees and for each tree we maintain the lightest incident edge so
 far encountered. Instead of merging the trees one by one (which would be too
-slow), we build an auxilliary graph whose vertices are the labels of the original
+slow), we build an auxiliary graph whose vertices are the labels of the original
 trees and edges correspond to the chosen lightest inter-tree edges. We find
 connected components of this graph, these determine how are the original labels
 translated to the new labels.
@@ -571,7 +571,7 @@ Then $G'$~has the same MST as~$G$.
 \proof
 Every spanning tree of~$G'$ is a spanning tree of~$G$. In the other direction:
 Loops can be never contained in a spanning tree. If there is a spanning tree~$T$
-containing a~removed edge~$e$ parallel to an edge~$e'\in G'$, exchaning $e'$
+containing a~removed edge~$e$ parallel to an edge~$e'\in G'$, exchanging $e'$
 for~$e$ makes~$T$ lighter. (This is indeed the multigraph version of the Red
 lemma applied to a~two-edge cycle, as we will see in \ref{multimst}.)
 \qed
@@ -601,7 +601,7 @@ out in time~$\O(m_i)$.
 
 \proof
 The only non-trivial parts are steps 6 and~7. Contractions can be handled
 similarly to the unions in the original Bor\o{u}vka's algorithm (see \ref{boruvkaiter}):
-We build an auxillary graph containing only the selected edges~$e_k$, find
+We build an auxiliary graph containing only the selected edges~$e_k$, find
 connected components of this graph and renumber vertices in each component to
 the identifier of the component. This takes $\O(m_i)$ time.

diff --git a/opt.tex b/opt.tex
index bfcfdcf..86402c6 100644
--- a/opt.tex
+++ b/opt.tex
@@ -804,7 +804,7 @@ computation results in the real MSF of~$G$ with the particular weights.
 The \df{time complexity} of a~decision tree is defined as its depth. It therefore
 bounds the number of comparisons spent on every path. (It need not be equal since
 some paths need not correspond to an~actual computation --- the sequence of outcomes
-on the path could be unsatifisfiable.)
+on the path could be unsatisfiable.)
 
 A~decision tree is called \df{optimal} if it is correct and its depth is minimum
 possible among the correct decision trees for the given graph.

diff --git a/ram.tex b/ram.tex
index 165e66d..9cd8244 100644
--- a/ram.tex
+++ b/ram.tex
@@ -426,7 +426,7 @@ We will therefore manage with a~weaker form of equivalence, based on some sort
 of graph encodings:
 
 \defn
-A~\df{canonical encoding} of a~given labeled graph represented by adjancency lists
+A~\df{canonical encoding} of a~given labeled graph represented by adjacency lists
 is obtained by running the depth-first search on the graph and recording its
 traces. We start with an~empty encoding. When we enter a~vertex, we assign
 an~identifier to it (again using a~yardstick to represent numbers)
@@ -528,7 +528,7 @@ and expected $\O(n\sqrt{\log\log n})$ for randomized algorithms~\cite{hanthor:ra
 both in linear space.
 The Fusion trees themselves have very limited use in graph algorithms, but the
-principles behind them are ubiquitious in many other data structures and these
+principles behind them are ubiquitous in many other data structures and these
 will serve us well and often. We are going to build the theory of Q-heaps in
 Section \ref{qheaps}, which will later lead to a~linear-time MST algorithm
 for arbitrary integer weights in Section \ref{iteralg}. Other such structures
@@ -607,7 +607,7 @@ for their encodings.
 The elements of a~vector~${\bf x}$ will be written as $x_0,\ldots,x_{d-1}$.
 
 \para
-If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisty
+If we want to fit the whole vector in a~single word, the parameters $b$ and~$d$ must satisfy
 the condition $(b+1)d\le W$. By using multiple-precision arithmetics, we can encode
 all vectors satisfying $bd=\O(W)$. We will now describe how to translate simple
 vector manipulations to sequences of $\O(1)$ RAM operations
@@ -923,8 +923,8 @@ which match a~string of~$S$.
 
 A~\df{compressed trie} is obtained from the trie by removing the vertices of outdegree~1
 except for the root and marked vertices.
-Whereever is a~directed path whose internal vertices have outdegree~1 and they carry
-no mark, we replace this path by a~single edge labeled with the contatenation
+Wherever is a~directed path whose internal vertices have outdegree~1 and they carry
+no mark, we replace this path by a~single edge labeled with the concatenation
 of the original edges' labels.
 
 In both kinds of tries, we order the outgoing edges of every vertex by their labels

diff --git a/rank.tex b/rank.tex
index bcf7657..aa6379c 100644
--- a/rank.tex
+++ b/rank.tex
@@ -148,7 +148,7 @@ Then lexicographic ranking and unranking of permutations can be performed in tim
 
 \proof
 Let us analyse the above algorithms. The depth of the recursion is~$n$ and in each
-nested invokation of the recursive procedure we perform a~constant number of operations.
+nested invocation of the recursive procedure we perform a~constant number of operations.
 All of them are either trivial, or calculations of factorials (which can be
 precomputed in~$\O(n)$ time), or operations on the data structure.
 \qed
@@ -195,7 +195,7 @@ on the data structure to allow order of elements dependent on the history of the
 structure (i.e., on the sequence of deletes performed so far). We can observe
 that although the algorithm no longer gives the lexicographic ranks, the unranking
 function is still an~inverse of the ranking function, because the sequence of deletes
-from~$A$ is the same when both ranking and unraking.
+from~$A$ is the same when both ranking and unranking.
 
 The implementation of the relaxed structure is straightforward. We store the set~$A$
 in an~array~$\alpha$ and use the order of the elements in~$\alpha$ determine the
@@ -365,7 +365,7 @@ each of them which stands on a~hole with an~element of~$[x]$.
 The right-hand side counts the same: We can obtain any such configuration by
 placing $k$~rooks on~$H$ first, labeling them with elements of~$\{2,\ldots,x\}$,
 placing additional $n-k$ rooks on the remaining rows and columns (there are $(n-k)!$ ways
-how to do this) and labeling those of the the new rooks standing on a~hole with~1.
+how to do this) and labeling those of the new rooks standing on a~hole with~1.
 \qed
 
 \cor
@@ -422,7 +422,7 @@ See observations \ref{rooksobs} and~\ref{matchobs}.
 The diversity of the characterizations of restricted permutations brings
 both good and bad news. The good news is that we can use the plethora of
 known results on bipartite matchings. Most importantly, we can efficiently
-determine whether there exists at least one permutation satistying a~given set of restrictions:
+determine whether there exists at least one permutation satisfying a~given set of restrictions:
 
 \thm
 There is an~algorithm which decides in time $\O(n^{1/2}\cdot m)$ whether there exists
@@ -741,6 +741,4 @@ as described above (\ref{rrankmod}). Each of the $n$~levels of recursion will th
 be looked up in a~table precalculated in quadratic time as shown in
 Corollary~\ref{nzeroprecalc}.
 \qed
-
-
 \endpart