\section{Ranking and unranking}\id{ranksect}%
The techniques for building efficient data structures on the RAM, which we have described
in Chapter~\ref{ramchap}, can also be used for a~variety of problems related
to ranking of combinatorial structures. Generally, the problems are stated
in the following way:
\para
In this chapter, we will investigate how to compute the ranking and unranking
functions for different sets efficiently. Usually, we will observe
that the ranks (and hence the input and output of our algorithm) are large
numbers, so we can use integers of a~similar magnitude to represent non-trivial
data structures.
This is frequently used to create arrays indexed by permutations: for example in Ruskey's algorithm
for finding Hamilton cycles in Cayley graphs (see~\cite{ruskey:ham} and \cite{ruskey:hce})
or when exploring state spaces of combinatorial puzzles like Loyd's Fifteen \cite{ss:fifteen}.
Many other applications are surveyed by Critani et al.~\cite{critani:rau} and in
most cases, the time complexity of the whole algorithm is limited by the efficiency
of the (un)ranking functions.
A~straightforward implementation takes $\Theta(n^2)$ time in the
worst case \cite{reingold:catp} and even on average~\cite{liehe:raulow}.
This can be easily improved to $O(n\log n)$ by using either a binary search
tree to calculate inversions, or by a divide-and-conquer technique, or by clever
use of modular arithmetic (all three algorithms are described in Knuth
\cite{knuth:sas}). Myrvold and Ruskey \cite{myrvold:rank} mention further
improvements to $O(n\log n/\log \log n)$ by using the RAM data structures of Dietz
\cite{dietz:oal}.
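As an illustration, here is a minimal Python sketch of the $O(n\log n)$ approach; it counts the smaller still-unused elements with a Fenwick tree, a close relative of the binary search tree mentioned above (counting operations on the factorials as constant). The function name and structure are ours, not taken from the cited works.

```python
from math import factorial

def rank_fenwick(pi):
    """Lexicographic rank (0-based) of a permutation pi of range(n),
    in O(n log n): a Fenwick tree counts, for each position, how many
    smaller values are still unused."""
    n = len(pi)
    tree = [0] * (n + 1)        # Fenwick tree over values 1..n

    def add(i, delta):
        while i <= n:
            tree[i] += delta
            i += i & -i

    def prefix(i):              # sum of tree[1..i]
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    for v in range(1, n + 1):   # initially every value is unused
        add(v, 1)
    r = 0
    for i, x in enumerate(pi):
        r += prefix(x) * factorial(n - i - 1)  # unused values below x
        add(x + 1, -1)                         # x is now used
    return r
```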
We will view permutations on a~finite set $A\subseteq {\bb N}$ as ordered $\vert A\vert$-tuples
(in other words, arrays) containing every element of~$A$ exactly once. We will
use square brackets to index these tuples: $\pi=(\pi[1],\ldots,\pi[\vert A\vert])$,
and sub-tuples: $\pi[i\ldots j] = (\pi[i],\pi[i+1],\ldots,\pi[j])$.
The lexicographic ranking and unranking functions for the permutations on~$A$
will be denoted by~$L(\pi,A)$ and $L^{-1}(i,A)$ respectively.
The lexicographic order of two permutations $\pi$ and~$\pi'$ on the original set is determined
by $\pi[1]$ and $\pi'[1]$; only when these elements are equal is the order decided
by the lexicographic comparison of the permutations $\pi[2\ldots n]$ and $\pi'[2\ldots n]$.
Moreover, when we fix $\pi[1]$, all permutations on the smaller set occur exactly
once, so the rank of $\pi$ is $(\pi[1]-1)\cdot (n-1)!$ plus the rank of
$\pi[2\ldots n]$.
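The recurrence above translates directly into code. The following Python sketch (names ours) keeps the set as a sorted list, so each step costs $O(n)$; it is meant to illustrate the recursion, not the efficient RAM implementation developed below. Ranks start at~0.

```python
from math import factorial

def rank(pi):
    """Lexicographic rank of a permutation pi of a finite set A:
    (rank of pi[0] within A) * (n-1)!  plus the rank of the rest."""
    a = sorted(pi)              # the ground set A in increasing order
    r = 0
    for i, x in enumerate(pi):
        j = a.index(x)          # rank of x among the remaining elements
        r += j * factorial(len(pi) - i - 1)
        a.remove(x)             # delete x from A and recurse on the rest
    return r

def unrank(r, a):
    """Inverse of rank: the permutation of the set a with rank r."""
    a = sorted(a)
    pi = []
    for i in range(len(a), 0, -1):
        f = factorial(i - 1)
        j, r = divmod(r, f)     # j-th smallest remaining element leads
        pi.append(a.pop(j))
    return pi
```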
\>We can call $\<Unrank>(j,1,n,[n])$ for the unranking problem on~$[n]$, i.e., to calculate $L^{-1}(j,[n])$.
\paran{Representation of sets}%
The most time-consuming parts of the above algorithms are of course operations
on the set~$A$. If we store~$A$ in a~data structure of a~known time complexity, the complexity
of the whole algorithm is easy to calculate:
We have $n^n \ge n! \ge \lfloor n/2\rfloor^{\lfloor n/2\rfloor}$, so $n\log n \ge \log n! \ge \lfloor n/2\rfloor\cdot\log \lfloor n/2\rfloor$.
\qed
\>Thus we get the following theorem:

\thmn{Lexicographic ranking of permutations}
When we order the permutations on the set~$[n]$ lexicographically, both ranking
and unranking can be performed on the RAM in time~$\O(n)$.
\rem
We can also easily derive the non-lexicographic linear-time algorithm of Myrvold
and Ruskey~\cite{myrvold:rank} from our algorithm. We will relax the requirements
on the data structure to allow the order of elements to depend on the history of the
structure (i.e., on the sequence of deletes performed so far). We can observe that
although the algorithm no longer gives the lexicographic ranks, the unranking function
is still an~inverse of the ranking function, because the sequence of deletes
from~$A$ is the same during both ranking and unranking.
The implementation of the relaxed structure is straightforward. We store the set~$A$
in an~array~$\alpha$ and use the order of the elements in~$\alpha$ to determine the
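A Python sketch of the resulting Myrvold--Ruskey style (un)ranking may clarify the relaxed structure: the array of elements and its inverse play the roles of~$\alpha$ and its index, and deletion is a swap with the last cell. All names are ours.

```python
def mr_rank(pi):
    """Myrvold--Ruskey rank (non-lexicographic) of a permutation of
    range(n). Each step records where the largest remaining value sits
    and deletes it by swapping -- the history-dependent delete above."""
    pi = list(pi)
    n0 = len(pi)
    inv = [0] * n0
    for i, x in enumerate(pi):
        inv[x] = i
    digits = []
    for n in range(n0, 1, -1):
        s = pi[n - 1]
        digits.append(s)
        i = inv[n - 1]
        pi[i], pi[n - 1] = pi[n - 1], pi[i]    # delete by swapping
        inv[s], inv[n - 1] = inv[n - 1], inv[s]
    r = 0
    for n, s in zip(range(2, n0 + 1), reversed(digits)):
        r = s + n * r                          # mixed-radix digits
    return r

def mr_unrank(n, r):
    """Inverse of mr_rank: performs the same sequence of swaps."""
    pi = list(range(n))
    for m in range(n, 0, -1):
        j = r % m
        pi[m - 1], pi[j] = pi[j], pi[m - 1]
        r //= m
    return pi
```

The unranking function is an inverse of the ranking function exactly because the swaps happen in the same order in both directions, even though the resulting order is not lexicographic.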
\section{Ranking of \iftoc $k$\else{\secitfont k\/}\fi-permutations}
\id{kpranksect}
The ideas from the previous section can also be generalized to lexicographic ranking of
\df{$k$-permutations,} that is of ordered $k$-tuples of distinct elements drawn from the set~$[n]$.
There are $n^{\underline k} = n\cdot(n-1)\cdot\ldots\cdot(n-k+1)$
such $k$-permutations and they have a~recursive structure similar to the one of
the permutations. We will therefore use the same recursive scheme as before
to stop after the first~$k$ iterations. We will also replace the number $(n-i)!$
of permutations on the remaining elements by the number of $(k-i)$-permutations on the same elements,
i.e., by $(n-i)^{\underline{k-i}}$. As $(n-i)^{\underline{k-i}} = (n-i) \cdot (n-i-1)^{\underline{k-i-1}}$,
we can precalculate all these values in linear time.
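A short Python sketch of this modification (names ours; the set is again kept as a sorted list, so each step costs $O(n)$ instead of the $O(1)$ achieved with the RAM structures):

```python
def falling(n, k):
    """The falling factorial n * (n-1) * ... * (n-k+1)."""
    f = 1
    for i in range(k):
        f *= n - i
    return f

def rank_kperm(pi, n):
    """Lexicographic rank (0-based) of a k-permutation pi of range(n):
    the same recursion as for permutations, stopped after k steps,
    with (n-i)! replaced by the falling factorial."""
    a = list(range(n))
    k = len(pi)
    r = 0
    for i, x in enumerate(pi):
        j = a.index(x)                       # rank of x among unused
        r += j * falling(n - i - 1, k - i - 1)
        a.remove(x)
    return r
```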
Unfortunately, the ranks of $k$-permutations can be much smaller, so we can no
longer rely on the same data structure fitting in a constant number of word-sized integers.
We take a~minor sidestep by remembering the complement of~$A$ instead, that is
the set of the at most~$k$ elements we have already seen. We will call this set~$H$
(because it describes the ``holes'' in~$A$). Let us prove that $\Omega(k\log n)$ bits
are needed to represent the rank, so the vector representation of~$H$ certainly fits in
a~constant number of words.
\lemma
To calculate $R_H(x)$, we can again use the vector operation \<Rank> from Algorithm \ref{vecops},
this time on the vector~$\bf h$.
The only operation that we cannot translate directly is unranking in~$A$. We will
therefore define an~auxiliary vector~$\bf r$ of the same size as~$\bf h$,
containing the ranks of the holes: $r_i=R_A(h_i)=h_i-R_H(h_i)=h_i-i$.
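This step can be sketched in Python as follows; the binary search below stands in for the constant-time vector \<Rank> operation, and the 0-based indexing and the function name are our choices.

```python
def unrank_in_A(j, holes):
    """The j-th smallest element (0-based) of A = {0, 1, ...} minus the
    sorted list `holes`. The value holes[i] - i is the rank of the i-th
    hole within A; the answer is j shifted up by the number of holes
    whose rank is at most j, which binary search locates because the
    sequence holes[i] - i is non-decreasing."""
    lo, hi = 0, len(holes)
    while lo < hi:
        mid = (lo + hi) // 2
        if holes[mid] - mid <= j:
            lo = mid + 1
        else:
            hi = mid
    return j + lo               # lo holes lie below the answer
```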
To find the $j$-th smallest element of~$A$, we locate the interval between
holes to which this element belongs: the interval is bordered from below by
we have just proven the following theorem, which brings this section to
a~happy ending:
\thmn{Lexicographic ranking of $k$-permutations}
When we order the $k$-per\-mu\-ta\-tions on the set~$[n]$ lexicographically, both
ranking and unranking can be performed on the RAM in time~$\O(k)$.
permutations without a~fixed point, i.e., permutations~$\pi$ such that $\pi(i)\ne i$
for all~$i$. These are also called \df{derangements} or \df{hatcheck permutations.}\foot{%
As the story in~\cite{matnes:idm} goes, once upon a~time there was a~hatcheck lady who
was so confused that she was giving out the hats completely at random. What is
the probability that none of the gentlemen receives his own hat?} We will present
a~general (un)ranking method for any class of restricted permutations and
derive a~linear-time algorithm for the derangements from it.
of the set~$[n]$. A~permutation $\pi\in{\cal P}$ satisfies the restrictions
if $(i,\pi(i))$ is an~edge of~$G$ for every~$i$.
\paran{Boards and rooks}%
We will follow the path unthreaded by Kaplansky and Riordan
\cite{kaplansky:rooks} and charted by Stanley in \cite{stanley:econe}.
We will relate restricted permutations to placements of non-attacking rooks on a~chessboard.
\defn
\itemize\ibull
\:A~\df{board} is the grid $B=[n]\times [n]$. It consists of $n^2$ \df{squares.}
\:A~\df{trace} of a~permutation $\pi\in{\cal P}$ is the set of squares $T(\pi)=\{ (i,\pi(i)) ; i\in[n] \}$.
\endlist
\obs\id{rooksobs}%
Let~$H\subseteq B$ be any set of holes in the board. Then:
\itemize\ibull
\:$N_j$ denotes the number of placements of $n$~rooks on the board such that exactly~$j$ of the rooks
stand on holes. That is:
$$N_j := \bigl\vert\bigl\{ \pi\in{\cal P} \bigm\vert \vert H\cap T(\pi)\vert = j \bigr\}\bigr\vert.$$
\:$r_k$ is the number of ways how to place $k$~rooks on the holes. In other words,
this is the number of $k$-element subsets of~$H$ such that no two elements share
a~common row or column.
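Both quantities can be checked by brute force for small boards. The following Python sketch assumes the unrestricted case, where ${\cal P}$ contains all permutations of~$[n]$; the function names are ours.

```python
from itertools import combinations, permutations

def N(j, n, holes):
    """Number of permutations pi of range(n) whose trace
    {(i, pi[i])} meets the set `holes` in exactly j squares."""
    return sum(
        1
        for pi in permutations(range(n))
        if sum((i, pi[i]) in holes for i in range(n)) == j
    )

def rook_placements(k, holes):
    """Number of ways to place k non-attacking rooks on the holes:
    k-subsets of `holes` with pairwise distinct rows and columns."""
    return sum(
        1
        for c in combinations(holes, k)
        if len({row for row, col in c}) == k
        and len({col for row, col in c}) == k
    )
```

For the diagonal holes $H=\{(i,i)\}$ on a~$4\times 4$ board, $N_0$ counts the derangements of four elements.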