- parallel algorithms: p243-cole (are there others?)
- bounded expansion classes?
- restricted cases and arborescences
-- verify: mention our simplifications
+- mention randomized algorithms (see remarks in Karger)
Models:
\defn
For a~class~$\cal H$ of graphs we define $\Forb({\cal H})$ as the class
-of graphs which do not contain any of the graphs in~$\cal H$ as a~minor.
+of graphs that do not contain any of the graphs in~$\cal H$ as a~minor.
We will call $\cal H$ the set of \df{forbidden (or excluded) minors} for this class.
We will often abbreviate $\Forb(\{M_1,\ldots,M_n\})$ to $\Forb(M_1,\ldots,M_n)$.
For graphs with edge density at least $\log n$, this algorithm runs in linear time.
\rem
-We can consider using other kinds of heaps which have the property that inserts
+We can consider using other kinds of heaps that have the property that inserts
and decreases are faster than deletes. Of course, the Fibonacci heaps are asymptotically
optimal (by the standard $\Omega(n\log n)$ lower bound on sorting by comparisons, see
for example \cite{clrs}), so the other data structures can improve only
We will simplify the problem even further: For an~arbitrary tree~$T$, we split each
query path $T[x,y]$ into two half-paths $T[x,a]$ and $T[a,y]$ where~$a$ is the
\df{lowest common ancestor} of~$x$ and~$y$ in~$T$. It is therefore sufficient to
-consider only paths which connect a~vertex with one of its ancestors.
+consider only paths that connect a~vertex with one of its ancestors.
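\rem
A~minimal sketch of this reduction in Python, assuming that an~LCA routine for the tree is already available (any standard method will do):
\verbatim
def split_queries(queries, lca):
    # Replace every path query (x, y) by at most two half-path queries
    # (bottom, ancestor); if x or y is the LCA itself, one half-path suffices.
    half_paths = []
    for x, y in queries:
        a = lca(x, y)              # lowest common ancestor of x and y
        if x != a:
            half_paths.append((x, a))
        if y != a:
            half_paths.append((y, a))
    return half_paths
\endverbatim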
When we combine the two transforms, we get:
\:For every son~$v$ of~$u$, process the edge $e=uv$:
\::Construct the array of tops~$T_e$ for the edge~$e$: Start with~$T_p$, remove
- the tops of the paths which do not contain~$e$ and add the vertex~$u$ itself
+ the tops of the paths that do not contain~$e$ and add the vertex~$u$ itself
if there is a~query path which has~$u$ as its top and which has its bottom somewhere
in the subtree rooted at~$v$.
\::Prepare the array of the peaks~$P_e$: Start with~$P_p$, remove the entries
- corresponding to the tops which are no longer active. If $u$ became an~active
+ corresponding to the tops that are no longer active. If $u$ became an~active
top, append~$e$ to the array.
\::Finish~$P_e$:
\lemman{Precomputation of tables}
When~$f$ is a~function of two arguments computable in polynomial time, we can
-precompute a~table of the values of~$f$ for all values of arguments which fit
+precompute a~table of the values of~$f$ for all values of arguments that fit
in a~single slot. The precomputation takes $\O(n)$ time.
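\rem
A~sketch of such a~precomputation in Python. It assumes, as elsewhere, that the slot width~$b$ is chosen so small (say $b\le{1\over4}\log n$) that the $2^{2b}$ entries, each computed in time polynomial in~$b$, amount to $\O(n)$ work in total:
\verbatim
def precompute_table(f, b):
    # Tabulate f(x, y) over all pairs of b-bit arguments.  For
    # b <= (log n)/4 the table has at most sqrt(n) entries, so even with
    # a polynomial (in b) cost per entry the total work stays in O(n).
    size = 1 << b
    return [[f(x, y) for y in range(size)] for x in range(size)]

# Afterwards, table[x][y] answers any query in constant time.
\endverbatim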
\proof
Run depth-first search on the tree, assign the depth of a~vertex when entering
it and construct its top mask when leaving it. The top mask can be obtained
by $\bor$-ing the masks of its sons, excluding the level of the sons and
-including the tops of all query paths which have their bottoms at the current vertex
+including the tops of all query paths that have their bottoms at the current vertex
(the depths of the tops are already assigned).
\qed
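\rem
A~rough Python rendering of this traversal, under simplifying assumptions not made in the text: the tree is given by children lists, the query paths by a~mapping from each bottom vertex to the depths of its tops, and a~top mask is an~integer whose $i$-th bit records that some query path with its top at depth~$i$ passes through the vertex:
\verbatim
def top_masks(root, children, tops_at_bottom):
    # children[v]       -- list of the sons of v (empty list for leaves)
    # tops_at_bottom[v] -- depths of the tops of query paths whose bottom is v
    depth, mask = {}, {}

    def dfs(v, d):
        depth[v] = d                        # depth assigned when entering v
        m = 0
        for w in children[v]:
            dfs(w, d + 1)
            m |= mask[w] & ~(1 << (d + 1))  # or the son's mask, excluding the son's level
        for t in tops_at_bottom.get(v, ()):
            m |= 1 << t                     # tops of paths with bottom at v
        mask[v] = m                         # top mask constructed when leaving v

    dfs(root, 0)
    return depth, mask
\endverbatim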
\itemize\ibull
\:\em{Small from small:} We use $\<Select>(T_e,T_p)$ to find the fields of~$P_p$
-which shall be deleted by a~subsequent call to \<SubList>. Pointers
+that shall be deleted by a~subsequent call to \<SubList>. Pointers
can be retained as they still refer to the same ancestor list.
\:\em{Big from big:} We can copy the whole~$P_p$, since the layout of the
We can also define it in terms of graphs:
\defn A~\df{Steiner tree} of a~weighted graph~$(G,w)$ with a~set~$M\subseteq V$
-of \df{mandatory notes} is a~tree~$T\subseteq G$ which contains all the mandatory
+of \df{mandatory vertices} is a~tree~$T\subseteq G$ that contains all the mandatory
vertices and whose weight is the minimum possible.
For $M=V$ the Steiner tree is identical to the MST, but if we allow an~arbitrary
For a given graph~$G$ with weights $w:E(G)\rightarrow {\bb R}$:
\itemize\ibull
\:A~subgraph $H\subseteq G$ is called a \df{spanning subgraph} if $V(H)=V(G)$.
-\:A~\df{spanning tree} of $G$ is any its spanning subgraph which is a tree.
+\:A~\df{spanning tree} of $G$ is any of its spanning subgraphs that is a tree.
\:For any subgraph $H\subseteq G$ we define its \df{weight} $w(H):=\sum_{e\in E(H)} w(e)$.
When comparing two weights, we will use the terms \df{lighter} and \df{heavier} in the
obvious sense.
\lemman{Exchange property for trees}\id{xchglemma}%
Let $T$ and $T'$ be spanning trees of a common graph. Then there exists
-a sequence of edge exchanges which transforms $T$ to~$T'$. More formally,
+a sequence of edge exchanges that transforms $T$ to~$T'$. More formally,
there exists a sequence of spanning trees $T=T_0,T_1,\ldots,T_k=T'$ such that
$T_{i+1}=T_i - e_i + e_i^\prime$ where $e_i\in T_i$ and $e_i^\prime\in T'$.
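\rem
A~Python sketch of one way to find such a~sequence, assuming both trees are given as sets of edges (2-element frozensets): repeatedly pick an edge of~$T'$ missing from the current tree and exchange it for an edge of the cycle it closes that lies outside~$T'$ (such an edge exists, since $T'$ is acyclic):
\verbatim
from collections import defaultdict

def exchange_sequence(T, Tprime):
    # T, Tprime: spanning trees of the same graph, as sets of frozenset({u, v}).
    # Returns the intermediate trees T = T_0, T_1, ..., T_k = Tprime.
    cur, steps = set(T), [frozenset(T)]
    while cur != set(Tprime):
        e_new = next(iter(set(Tprime) - cur))      # an edge of T' not yet in the tree
        u, v = tuple(e_new)
        cycle = tree_path(cur, u, v)               # adding e_new closes this cycle
        e_old = next(e for e in cycle if e not in Tprime)  # cycle edge outside T'
        cur = (cur - {e_old}) | {e_new}
        steps.append(frozenset(cur))
    return steps

def tree_path(edges, u, v):
    # Edges of the unique u-v path in the tree given by `edges`.
    adj = defaultdict(list)
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    parent, stack = {u: None}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path, x = [], v
    while parent[x] is not None:
        path.append(frozenset({x, parent[x]}))
        x = parent[x]
    return path
\endverbatim
Each exchange adds one edge of~$T'$ and removes one edge outside it, so the sequence has exactly $|T'\setminus T|$ steps.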
\proof
By contradiction. Let $e$ be an edge painted blue as the lightest edge of a cut~$C$.
-If $e\not\in T_{min}$, then there must exist an edge $e'\in T_{min}$ which is
+If $e\not\in T_{min}$, then there must exist an edge $e'\in T_{min}$ that is
contained in~$C$ (take any pair of vertices separated by~$C$, the path
in~$T_{min}$ joining these vertices must cross~$C$ at least once). Exchanging
$e$ for $e'$ in $T_{min}$ yields an even lighter spanning tree since
We can do much better by using a binary
heap to hold all neighboring edges. In each iteration, we find and delete the
minimum edge from the heap and once we expand the tree, we insert the newly discovered
-neighboring edges to the heap while deleting the neighboring edges which become
+neighboring edges to the heap while deleting the neighboring edges that become
internal to the new tree. Since there are always at most~$m$ edges in the heap,
each heap operation takes $\O(\log m)=\O(\log n)$ time. For every edge, we perform
at most one insertion and at most one deletion, so we spend $\O(m\log n)$ time in total.
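\rem
A~short Python sketch of this heap-based variant, using the standard heapq module as the binary heap; instead of deleting the edges that became internal, the sketch simply skips them when they are popped (lazy deletion), which keeps the same $\O(m\log n)$ bound. The graph is assumed to be given as an adjacency list of (weight, neighbour) pairs:
\verbatim
import heapq

def jarnik_prim_mst(adj, root=0):
    # adj[v] -- list of (weight, u) pairs for every vertex v = 0, ..., n-1.
    # Returns the MST edges as (u, v, weight) triples; the graph is connected.
    n = len(adj)
    in_tree = {root}
    heap = [(w, root, u) for (w, u) in adj[root]]   # neighbouring edges of the tree
    heapq.heapify(heap)
    mst = []
    while heap and len(in_tree) < n:
        w, v, u = heapq.heappop(heap)   # find and delete the minimum edge
        if u in in_tree:
            continue                    # the edge became internal; skip it
        in_tree.add(u)
        mst.append((v, u, w))
        for (w2, x) in adj[u]:          # insert the newly discovered edges
            if x not in in_tree:
                heapq.heappush(heap, (w2, u, x))
    return mst
\endverbatim
Every edge enters the heap at most once (when its first endpoint joins the tree) and is popped at most once, so the total time is $\O(m\log n)$ as claimed.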
in which these costs are carefully balanced, leading for example to
a linear-time algorithm for MST in planar graphs.
-There are two definitions of edge contraction which differ when an edge of
+There are two definitions of edge contraction that differ when an edge of
a~triangle is contracted. Either we unify the other two edges to a single edge
or we keep them as two parallel edges, leaving us with a~multigraph. We will
use the multigraph version and we will show that we can easily reduce the multigraph
On the other hand, we are interested in polynomial-time algorithms only, so $\Theta(\log N)$-bit
numbers should be sufficient. In practice, we pick~$W$ to be the larger of
$\Theta(\log N)$ and the size of integers used in the algorithm's input and output.
- We will call an integer which fits in a~single memory cell a~\df{machine word.}
+ We will call an integer that fits in a~single memory cell a~\df{machine word.}
\endlist
Both restrictions easily avoid the problems of unbounded parallelism. The first
standard complexity classes, but the calculations of time and space complexity tend
to be somewhat tedious. What is more, when compared with the RAM with restricted
word size, the complexities are usually exactly $\Theta(W)$ times higher.
-This does not hold in general (consider a~program which uses many small numbers
+This does not hold in general (consider a~program that uses many small numbers
and $\O(1)$ large ones), but it is true for the algorithms we are interested in.
Therefore we will always assume that the operations have unit cost and we make
sure that all numbers are limited by the available word size.
i.e., the smallest~$i$ such that $\alpha[i]=1$.
By a~combination of subtraction with $\bxor$, we create a~number
-which contains ones exactly at the position of $\<LSB>(\alpha)$ and below:
+that contains ones exactly at the position of $\<LSB>(\alpha)$ and below:
\alik{
\alpha&= \9\9\9\9\9\1\0\0\0\0\cr
\endalgo
\rem
-We have used a~plenty of constants which depend on the format of the vectors.
+We have used plenty of constants that depend on the format of the vectors.
Either we can write non-uniform programs (see \ref{nonuniform}) and use native constants,
or we can observe that all such constants can be easily manufactured. For example,
$(\0^b\1)^d = \1^{(b+1)d} / \1^{b+1} = (2^{(b+1)d}-1)/(2^{b+1}-1)$. The only exceptions
where~$\(x)$ disagrees with a~label. Before this point, all edges not taken by
the search were leading either to subtrees containing elements all smaller than~$x$
or all larger than~$x$ and the only values not known yet are those in the subtree
-below the edge which we currently consider. Now if $x[b]=0$ (and therefore $x<x_i$),
+below the edge that we currently consider. Now if $x[b]=0$ (and therefore $x<x_i$),
all values in that subtree have $x_j[b]=1$ and thus they are larger than~$x$. In the other
case, $x[b]=1$ and $x_j[b]=0$, so they are smaller.
\qed
for a~newly inserted element in constant time. However, the set is too large
to fit in a~vector and we cannot insert into a~sorted array in
constant time. This can be worked around by keeping the set in an~unsorted
-array together with a~vector containing the permutation which sorts the array.
+array together with a~vector containing the permutation that sorts the array.
We can then insert a~new element at an~arbitrary place in the array and just
update the permutation to reflect the correct order.
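\rem
The bookkeeping can be illustrated by the following Python sketch; unlike the structure described in the text, which packs both vectors into machine words and updates them by constant-time vector operations, the sketch uses plain lists and serves only to show what is maintained:
\verbatim
import bisect

class PermutedArray:
    # `items` is the unsorted array (new elements are appended at the end),
    # `perm` is the permutation that sorts it:
    #     items[perm[0]] <= items[perm[1]] <= ...
    def __init__(self):
        self.items = []
        self.perm = []

    def insert(self, x):
        sorted_view = [self.items[i] for i in self.perm]
        pos = bisect.bisect_left(sorted_view, x)     # where x belongs in sorted order
        self.items.append(x)                         # arbitrary place: the end
        self.perm.insert(pos, len(self.items) - 1)   # update the sorting permutation

    def in_order(self):
        return [self.items[i] for i in self.perm]
\endverbatim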
(a~vector of $\O(n\log k)$ bits; we will write $x_i$ for $X[\varrho(i)]$),
\:$B$ --- a~set of ``interesting'' bit positions
(a~sorted vector of~$\O(n\log W)$ bits),
-\:$G$ --- the function which maps the guides to the bit positions in~$B$
+\:$G$ --- the function that maps the guides to the bit positions in~$B$
(a~vector of~$\O(n\log k)$ bits),
\:precomputed tables of various functions.
\endlist
It remains to show how to translate the operations on~$A$ to operations on~$H$,
again stored as a~sorted vector~${\bf h}$. Insertion into~$A$ corresponds to
deletion from~$H$ and vice versa. The rank of any~$x\in[n]$ in~$A$ is $x$ minus
-the number of holes which are smaller than~$x$, therefore $R_A(x)=x-R_H(x)$.
+the number of holes that are smaller than~$x$, therefore $R_A(x)=x-R_H(x)$.
To calculate $R_H(x)$, we can again use the vector operation \<Rank> from Algorithm \ref{vecops},
this time on the vector~$\bf h$.
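\rem
A~tiny Python illustration of the identity, with the bisect module standing in for the constant-time vector operation \<Rank> and with $[n]$ taken to be $\{0,\ldots,n-1\}$, so that the rank of~$x$ counts the elements smaller than~$x$ (an assumption about the conventions, made only for this sketch):
\verbatim
import bisect

def rank_in_A(x, holes):
    # holes -- sorted list of the elements of [n] missing from A.
    # The elements of [n] smaller than x split into members of A and holes,
    # hence R_A(x) = x - R_H(x).
    return x - bisect.bisect_left(holes, x)
\endverbatim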
\section{Restricted permutations}
-Another interesting class of combinatorial objects which can be counted and
+Another interesting class of combinatorial objects that can be counted and
ranked are restricted permutations. Archetypal members of this class are
permutations without a~fixed point, i.e., permutations~$\pi$ such that $\pi(i)\ne i$
for all~$i$. These are also called \df{derangements} or \df{hatcheck permutations.}\foot{%
\nota\id{hatrank}%
As we already know, the hatcheck permutations correspond to restriction
-matrices which contain zeroes only on the main diagonal and graphs which are
+matrices that contain zeroes only on the main diagonal and graphs that are
complete bipartite with the matching $\{(i,i) : i\in[n]\}$ deleted. For
a~given order~$n$, we will call this matrix~$D_n$ and the graph~$G_n$ and
we will show that the submatrices of~$D_n$ share several nice properties:
First, there are $n_0(z-1,d)$ such permutations. On the other hand, we can divide
them into two types depending on whether $\pi[1]=1$. Those having $\pi[1]\ne 1$
are exactly the $n_0(z,d)$ permutations satisfying~$M_z$. The others correspond to
-permutations $(\pi[2],\ldots,\pi[d])$ on $\{2,\ldots,d\}$ which satisfy~$M_z^{1,1}$,
+permutations $(\pi[2],\ldots,\pi[d])$ on $\{2,\ldots,d\}$ that satisfy~$M_z^{1,1}$,
so there are $n_0(z-1,d-1)$ of them.
\qed
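\rem
The recurrence just derived, $n_0(z,d)=n_0(z-1,d)-n_0(z-1,d-1)$, together with the evident base case $n_0(0,d)=d!$ (no zeroes means no restrictions), is easy to evaluate; a~brief Python sketch:
\verbatim
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def n0(z, d):
    # Number of permutations of order d satisfying a matrix M_z with z zeroes
    # (z <= d), via n0(z,d) = n0(z-1,d) - n0(z-1,d-1) and n0(0,d) = d!.
    if z == 0:
        return factorial(d)
    return n0(z - 1, d) - n0(z - 1, d - 1)

# n0(d, d) is the number of hatcheck permutations (derangements) of order d:
# [n0(d, d) for d in range(6)] == [1, 0, 1, 2, 9, 44]
\endverbatim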
The algorithm uses the matrix~$M$ only for computing~$N_0$ of its submatrices
and we have shown that this value depends only on the order of the matrix and
the number of zeroes in it. We will therefore replace maintenance of the matrix
-by remember the number~$z$ of its zeroes and the set~$Z$ which contains the elements
+by remembering the number~$z$ of its zeroes and the set~$Z$ that contains the elements
$x\in A$ whose locations are restricted (there is a~zero anywhere in the $(R_A(x)+1)$-th
column of~$M$). In other words, every $x\in Z$ can appear at all positions in the
permutation except one (and these forbidden positions are different for different~$x$'s),