\chapter{Ranking Combinatorial Structures}

\section{Ranking and unranking}\id{ranksect}%

The techniques for building efficient data structures on the RAM, which we have described
in Chapter~\ref{ramchap}, can also be used for a~variety of problems related
to ranking of combinatorial structures. Generally, the problems are stated
as follows:

Let~$C$ be a~set of objects and~$\prec$ a~linear order on~$C$. The \df{rank}
$R_{C,\prec}(x)$ of an~element $x\in C$ is the number of elements $y\in C$ such that $y\prec x$.
We will call the function $R_{C,\prec}$ the \df{ranking function} for $C$ ordered by~$\prec$
and its inverse $R^{-1}_{C,\prec}$ the \df{unranking function} for $C$ and~$\prec$. When the set
and the order are clear from the context, we will use plain~$R(x)$ and $R^{-1}(x)$.
Also, when $\prec$ is defined on a~superset~$C'$ of~$C$, we naturally extend $R_C(x)$
to elements $x\in C'\setminus C$.

Let us consider the set $C_k=\{\0,\1\}^k$ of all binary strings of length~$k$ ordered
lexicographically. Then $R^{-1}(i)$ is the $i$-th smallest element of this set, that is,
the number~$i$ written in binary and padded to~$k$ digits (i.e., $\(i)_k$ in the
notation of Section~\ref{bitsect}). Conversely, $R(x)$ is the integer whose binary
representation is the string~$x$.
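
For a~concrete instance of this correspondence (a~routine check of the definitions), take $k=4$:
$$R(\0\1\1\0) = 6, \qquad R^{-1}(11) = \1\0\1\1,$$
since $\0\1\1\0$ is the number~6 written in binary and $\(11)_4=\1\0\1\1$.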

In this chapter, we will investigate how to compute the ranking and unranking
functions for different sets efficiently. Usually, we will observe
that the ranks (and hence the input and output of our algorithm) are large
numbers, so we can use the integers of a~similar magnitude to represent non-trivial
data structures.

Until the end of the chapter, we will always assume that our model of computation
is the Random Access Machine (more specifically, the Word-RAM).

%--------------------------------------------------------------------------------

\section{Ranking of permutations}

One of the most common ranking problems is the ranking of permutations on the set~$[n]=\{1,2,\ldots,n\}$.
This is frequently used to create arrays indexed by permutations: for example in Ruskey's algorithm
for finding Hamilton cycles in Cayley graphs (see~\cite{ruskey:ham} and \cite{ruskey:hce})
or when exploring state spaces of combinatorial puzzles like Loyd's Fifteen \cite{ss:fifteen}.
Many other applications are surveyed by Critani et al.~\cite{critani:rau}; in
most cases, the time complexity of the whole algorithm is limited by the efficiency
of the (un)ranking functions.

The permutations are usually ranked according to their lexicographic order.
In fact, an~arbitrary order is often sufficient if the ranks are used solely
for indexing of arrays. The lexicographic order, however, has the additional advantage
of a~nice structure, which allows various operations on permutations to be
performed directly on their ranks.

Na\"\i{}ve algorithms for lexicographic ranking require time $\Theta(n^2)$ in the
worst case \cite{reingold:catp} and even on average~\cite{liehe:raulow}.
This can be easily improved to $\O(n\log n)$ by using either a~binary search
tree to calculate inversions, or a~divide-and-conquer technique, or clever
use of modular arithmetic (all three algorithms are described in Knuth
\cite{knuth:sas}). Myrvold and Ruskey \cite{myrvold:rank} mention further
improvements to $\O(n\log n/\log \log n)$ by using the RAM data structures of Dietz
\cite{dietz:oal}.

Linear time complexity was reached by Myrvold and Ruskey \cite{myrvold:rank}
for a~non-lexicographic order, which is defined locally by the history of the
data structure --- in fact, they introduce a~linear-time unranking algorithm
first and then derive the inverse ranking algorithm from it without describing the order
explicitly. However, they leave the problem of lexicographic ranking open.

We will describe a~general procedure which, when combined with suitable
RAM data structures, yields a~linear-time algorithm for lexicographic
(un)ranking.

We will view permutations on a~finite set $A\subseteq {\bb N}$ as ordered $\vert A\vert$-tuples
(in other words, arrays) containing every element of~$A$ exactly once. We will
use square brackets to index these tuples: $\pi=(\pi[1],\ldots,\pi[\vert A\vert])$,
and sub-tuples: $\pi[i\ldots j] = (\pi[i],\pi[i+1],\ldots,\pi[j])$.
The lexicographic ranking and unranking functions for the permutations on~$A$
will be denoted by~$L(\pi,A)$ and $L^{-1}(i,A)$ respectively.

Let us first observe that permutations have a~simple recursive structure.
If we fix the first element $\pi[1]$ of a~permutation~$\pi$ on the set~$[n]$, the
elements $\pi[2], \ldots, \pi[n]$ form a~permutation on $[n]-\{\pi[1]\} = \{1,\ldots,\pi[1]-1,\pi[1]+1,\ldots,n\}$.
The lexicographic order of two permutations $\pi$ and~$\pi'$ on the original set is then determined
by $\pi[1]$ and $\pi'[1]$; only if these elements are equal is the order decided
by the lexicographic comparison of the permutations $\pi[2\ldots n]$ and $\pi'[2\ldots n]$.
Moreover, when we fix $\pi[1]$, all permutations on the smaller set occur exactly
once, so the rank of $\pi$ is $(\pi[1]-1)\cdot (n-1)!$ plus the rank of
$\pi[2\ldots n]$.
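
To see the formula in action on a~tiny case, consider the permutation $\pi=(2,3,1)$ on~$[3]$:
its rank is $(2-1)\cdot 2! + 1 = 3$, because the remaining pair $(3,1)$ has rank~1 among the
permutations on the set $\{1,3\}$. Indeed, $(2,3,1)$ is preceded exactly by
$(1,2,3)$, $(1,3,2)$ and $(2,1,3)$ in the lexicographic order.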

This gives us a~reduction from (un)ranking of permutations on $[n]$ to (un)rank\-ing
of permutations on an~$(n-1)$-element set, which suggests a~straightforward
algorithm, but unfortunately this set is different from $[n-1]$ and it even
depends on the value of~$\pi[1]$. We could renumber the elements to get $[n-1]$,
but it would require linear time per iteration. To avoid this, we generalize the
problem to permutations on subsets of $[n]$. For a~permutation $\pi$ on a~set
$A\subseteq [n]$ of size~$m$, similar reasoning gives a~simple formula:
$$
L((\pi[1],\ldots,\pi[m]),A) = R_A(\pi[1]) \cdot (m-1)! +
L((\pi[2],\ldots,\pi[m]), A\setminus\{\pi[1]\}),
$$
which uses the ranking function~$R_A$ for~$A$. This recursive formula immediately
translates to the following recursive algorithms for both ranking and unranking
(described for example in \cite{knuth:sas}):

\alg $\<Rank>(\pi,i,n,A)$: Compute the rank of a~permutation $\pi[i\ldots n]$ on~$A$.
\id{rankalg}%

\:If $i\ge n$, return~0.
\:$a\=R_A(\pi[i])$.
\:$b\=\<Rank>(\pi,i+1,n,A \setminus \{\pi[i]\})$.
\:Return $a\cdot(n-i)! + b$.

\>We can call $\<Rank>(\pi,1,n,[n])$ for ranking on~$[n]$, i.e., to calculate
$L(\pi,[n])$.

\alg $\<Unrank>(j,i,n,A)$: Return an~array~$\pi$ such that $\pi[i\ldots n]$ is the $j$-th permutation on~$A$.
\id{unrankalg}%

\:If $i>n$, return $(0,\ldots,0)$.
\:$x\=R^{-1}_A(\lfloor j/(n-i)! \rfloor)$.
\:$\pi\=\<Unrank>(j\bmod (n-i)!,i+1,n,A\setminus \{x\})$.
\:$\pi[i]\=x$.
\:Return~$\pi$.

\>We can call $\<Unrank>(j,1,n,[n])$ for the unranking problem on~$[n]$, i.e., to calculate $L^{-1}(j,[n])$.
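
As a~worked example of both procedures, let us rank $\pi=(3,1,4,2)$ on~$[4]$.
The recursion computes
$$L(\pi,[4]) = 2\cdot 3! + 0\cdot 2! + 1\cdot 1! + 0 = 13,$$
the summands being $R_{[4]}(3)=2$, $R_{\{1,2,4\}}(1)=0$, $R_{\{2,4\}}(4)=1$ and the terminating zero.
Unranking 13 reverses the process: $\lfloor 13/3!\rfloor = 2$ selects the element~3,
the remainder~1 then yields $(1,4,2)$ on $\{1,2,4\}$, so $\<Unrank>(13,1,4,[4])=(3,1,4,2)$ as expected.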

\paran{Representation of sets}%
The most time-consuming parts of the above algorithms are of course the operations
on the set~$A$. If we store~$A$ in a~data structure with known time complexity, the complexity
of the whole algorithm is easy to calculate:

\lemma\id{ranklemma}%
Suppose that there is a~data structure maintaining a~subset of~$[n]$ under a~sequence
of deletions, which supports ranking and unranking of elements, and that
the time complexity of a~single operation is at most~$t(n)$.
Then lexicographic ranking and unranking of permutations can be performed in time $\O(n\cdot t(n))$.

Let us analyse the above algorithms. The depth of the recursion is~$n$ and in each
nested invocation of the recursive procedure we perform a~constant number of operations.
All of them are either trivial, or calculations of factorials (which can be precomputed in~$\O(n)$ time),
or operations on the data structure.
\qed

If we store~$A$ in an~ordinary array, we have insertion and deletion in constant time,
but ranking and unranking in~$\O(n)$, so $t(n)=\O(n)$ and the algorithm is quadratic.
Binary search trees give $t(n)=\O(\log n)$. The data structure of Dietz \cite{dietz:oal}
improves it to $t(n)=\O(\log n/\log \log n)$. In fact, all these variants are equivalent
to the classical algorithms based on inversion vectors, because at the time of processing~$\pi[i]$,
the value of $R_A(\pi[i])$ is exactly the number of elements forming inversions with~$\pi[i]$.

To obtain linear time complexity, we will make use of the representation of
vectors by integers on the RAM as developed in Section~\ref{bitsect}. But first
of all, let us observe that the ranks are large numbers, so the word size of the
machine has to be large as well:

$\log n! = \Theta(n\log n)$, therefore the word size must be~$\Omega(n\log n)$.

We have $n^n \ge n! \ge \lfloor n/2\rfloor^{\lfloor n/2\rfloor}$, so $n\log n \ge \log n! \ge \lfloor n/2\rfloor\cdot\log \lfloor n/2\rfloor$.
\qed
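
As a~rough illustration of the magnitude involved: already for $n=64$ we have
$\log 64! \approx 296$ (taking binary logarithms), so a~single rank spans several
64-bit machine words. It is exactly this amount of space that the vector
representation of the set~$A$ described below will exploit.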

\>Thus we get the following theorem:

\thmn{Lexicographic ranking of permutations}
When we order the permutations on the set~$[n]$ lexicographically, both ranking
and unranking can be performed on the RAM in time~$\O(n)$.

We will store the elements of the set~$A$ in a~sorted vector. Each element has
$\O(\log n)$ bits, so the whole vector takes $\O(n\log n)$ bits, which by the
above observation fits in a~constant number of machine words. We know from
Algorithm~\ref{vecops} that ranks can be calculated in constant time in such
vectors and that insertions and deletions can be translated to ranks and
masking. Unranking, that is indexing of the vector, is masking alone.
So we can apply Lemma~\ref{ranklemma} with $t(n)=\O(1)$.
\qed

We can also easily derive the non-lexicographic linear-time algorithm of Myrvold
and Ruskey~\cite{myrvold:rank} from our algorithm. We will relax the requirements
on the data structure to allow the order of elements to depend on the history of the
structure (i.e., on the sequence of deletions performed so far). We can observe that
although the algorithm no longer gives the lexicographic ranks, the unranking function
is still an~inverse of the ranking function, because the sequence of deletions
from~$A$ is the same during both ranking and unranking.

The implementation of the relaxed structure is straightforward. We store the set~$A$
in an~array~$\alpha$ and use the order of the elements in~$\alpha$ to determine the
order on~$A$. We will also maintain an~``inverse'' array $\alpha^{-1}$ such that
$\alpha[\alpha^{-1}[x]]=x$ for every~$x\in A$. Ranking and unranking can be performed
by a~simple lookup in these arrays: $R_A(x)=\alpha^{-1}[x]$, $R^{-1}(i)=\alpha[i]$.
When we want to delete an~element, we exchange it with the last element in the
array~$\alpha$ and update~$\alpha^{-1}$ accordingly.
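
For illustration (indexing the arrays from~0 to match the ranks), let $A=[4]$ be stored
as $\alpha=(1,2,3,4)$ with $\alpha^{-1}[x]=x-1$ for every~$x$. Deleting the element~2
exchanges it with the last element and shortens the array, giving $\alpha=(1,4,3)$ and
$\alpha^{-1}[4]=1$, $\alpha^{-1}[3]=2$. The history-dependent order on the remaining
set is now $1,4,3$, so for example $R_A(4)=1$ and $R^{-1}(2)=3$.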

%--------------------------------------------------------------------------------

\section{Ranking of \iftoc $k$\else{\secitfont k\/}\fi-permutations}

The ideas from the previous section can also be generalized to lexicographic ranking of
\df{$k$-permutations,} that is of ordered $k$-tuples of distinct elements drawn from the set~$[n]$.
There are $n^{\underline k} = n\cdot(n-1)\cdot\ldots\cdot(n-k+1)$
such $k$-permutations and they have a~recursive structure similar to the one of
the permutations. We will therefore use the same recursive scheme as before
(algorithms \ref{rankalg} and \ref{unrankalg}), but we will modify the first step of both algorithms
to stop after the first~$k$ iterations. We will also replace the number $(n-i)!$
of permutations on the remaining elements by the number of $(k-i)$-permutations on the same elements,
i.e., by $(n-i)^{\underline{k-i}}$. As $(n-i)^{\underline{k-i}} = (n-i) \cdot (n-i-1)^{\underline{k-i-1}}$,
we can precalculate all these values in linear time.
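
A~small example with $n=4$ and $k=2$: the rank of the $2$-permutation $(2,4)$ is
$$R_{[4]}(2)\cdot 3^{\underline{1}} + R_{\{1,3,4\}}(4)\cdot 2^{\underline{0}} = 1\cdot 3 + 2\cdot 1 = 5,$$
which agrees with the lexicographic enumeration $(1,2)$, $(1,3)$, $(1,4)$, $(2,1)$, $(2,3)$, $(2,4)$.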

Unfortunately, the ranks of $k$-permutations can be much smaller, so we can no
longer rely on the same data structure fitting in a~constant number of word-sized integers.
For example, if $k=1$, the ranks are $\O(\log n)$-bit numbers, but the data
structure still requires $\Theta(n\log n)$ bits.

We sidestep this problem by remembering the complement of~$A$ instead, that is
the set of the at most~$k$ elements we have already seen. We will call this set~$H$
(because it describes the ``holes'' in~$A$). Let us prove that $\Omega(k\log n)$ bits
are needed to represent the rank, so the vector representation of~$H$ certainly fits in
a~constant number of words.

\lemma
The number of $k$-permutations on~$[n]$ is $2^{\Omega(k\log n)}$.

We already know that there are $n^{\underline k}$ such $k$-permutations. If $k\le n/2$,
then every term in the product is $n/2$ or more, so $\log n^{\underline k} \ge
k\cdot (\log n - 1)$. If $k\ge n/2$, then $n^{\underline k} \ge n^{\underline{\smash{n/2}}}$
and $\log n^{\underline k} \ge (n/2)(\log n - 1) \ge (k/2)(\log n - 1)$.
\qed

It remains to show how to translate the operations on~$A$ to operations on~$H$,
again stored as a~sorted vector~${\bf h}$. Insertions to~$A$ correspond to
deletions from~$H$ and vice versa. The rank of any~$x\in[n]$ in~$A$ is the number
of elements of~$[n]$ smaller than~$x$ minus the number of holes smaller than~$x$,
therefore $R_A(x)=(x-1)-R_H(x)$.
To calculate $R_H(x)$, we can again use the vector operation \<Rank> from Algorithm \ref{vecops},
this time on the vector~$\bf h$.

The only operation which we cannot translate directly is unranking in~$A$. We will
therefore define an~auxiliary vector~$\bf r$ of the same size as~$\bf h$,
containing the ranks of the holes: $r_i=R_A(h_i)=(h_i-1)-R_H(h_i)=h_i-i-1$
(we index the fields of both vectors from~0, so $R_H(h_i)=i$).
To find the $j$-th smallest element of~$A$, we locate the interval between
holes to which this element belongs: the interval is bordered from below by
a~hole~$h_i$ such that $i$ is the largest index satisfying~$r_i \le j$.
In other words, $i=\<Rank>({\bf r},j+1)-1$. Finding the right element in the interval
is then easy: $R^{-1}_A(j) = h_i + 1 + j - r_i$.

\example
If $A=\{2,5,6\}$ and $n=8$, then ${\bf h}=(1,3,4,7,8)$ and ${\bf r}
= (0,1,1,3,3)$. When we want to calculate $R^{-1}_A(2)$, we find $i=2$ and
the wanted element is $h_2+1+2-r_2 = 4+1+2-1 = 6$.

The vector~$\bf r$ can be updated in constant time whenever an~element is
inserted to~$\bf h$. It is sufficient to shift the fields apart (we know
that the position of the new element in~$\bf r$ is the same as in~$\bf h$),
insert the new value using masking operations and decrease all higher fields
by one in parallel by using a~single subtraction. Updates after deletions
from~$\bf h$ are analogous.
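
Continuing the example above: deleting the element~5 from~$A$ inserts a~new hole
between $4$ and~$7$, so $\bf h$ becomes $(1,3,4,5,7,8)$. The new field of~$\bf r$
gets the value $5-3-1=1$ and the two fields behind it drop from~$3$ to~$2$ by the
parallel subtraction, giving ${\bf r}=(0,1,1,1,2,2)$.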

We have replaced all operations on~$A$ by the corresponding operations on the
modified data structure, each of which works again in constant time. Therefore
we have just proven the following theorem, which brings this section to
a~close:

\thmn{Lexicographic ranking of $k$-permutations}
When we order the $k$-per\-mu\-ta\-tions on the set~$[n]$ lexicographically, both
ranking and unranking can be performed on the RAM in time~$\O(k)$.

We modify algorithms \ref{rankalg} and \ref{unrankalg} for $k$-permutations as
shown at the beginning of this section. We use the vectors $\bf h$ and~$\bf r$
described above as an~implicit representation of the set~$A$. The modified
algorithm uses recursion $k$~levels deep and as each operation on~$A$ can be
performed in~$\O(1)$ time using $\bf h$ and~$\bf r$, every level takes only
constant time. The time bound follows. \qed

%--------------------------------------------------------------------------------

\section{Restricted permutations}

Another interesting class of combinatorial objects that can be counted and
ranked are restricted permutations. Archetypal members of this class are the
permutations without a~fixed point, i.e., permutations~$\pi$ such that $\pi(i)\ne i$
for all~$i$. These are also called \df{derangements} or \df{hatcheck permutations.}\foot{%
As the story in~\cite{matnes:idm} goes, once upon a~time there was a~hatcheck lady who
was so confused that she was giving out the hats completely at random. What is
the probability that none of the gentlemen receives his own hat?} We will present
a~general (un)ranking method for any class of restricted permutations and
derive a~linear-time algorithm for the derangements from it.

We will fix a~non-negative integer~$n$ and use ${\cal P}$ for the set of
all~permutations on~$[n]$.
A~\df{restriction graph} is a~bipartite graph~$G$ whose parts are two copies
of the set~$[n]$. A~permutation $\pi\in{\cal P}$ satisfies the restrictions
if $(i,\pi(i))$ is an~edge of~$G$ for every~$i$.

\paran{Boards and rooks}%
We will follow the path threaded by Kaplansky and Riordan
\cite{kaplansky:rooks} and charted by Stanley in \cite{stanley:econe}.
We will relate restricted permutations to placements of non-attacking
rooks on a~hollow chessboard.

\:A~\df{board} is the grid $B=[n]\times [n]$. It consists of $n^2$ \df{squares.}
\:A~\df{trace} of a~permutation $\pi\in{\cal P}$ is the set of squares \hbox{$T(\pi)=\{ (i,\pi(i)) ; i\in[n] \}$. \hskip-4em} %%HACK

The traces of permutations (and thus the permutations themselves) correspond
exactly to placements of $n$~rooks on the board in such a~way that the rooks do
not attack each other (i.e., there is at most one rook in every row and
likewise in every column; as there are $n$~rooks, there must be exactly one of them in
every row and column). When speaking about \df{rook placements,} we will always
mean non-attacking placements.

Restricted permutations then correspond to placements of rooks on a~board with
some of the squares removed. The \df{holes} (missing squares) correspond to the
non-edges of~$G$, so $\pi\in{\cal P}$ satisfies the restrictions iff
$T(\pi)$ avoids the holes.

Let~$H\subseteq B$ be any set of holes in the board. Then:
\:$N_j$ denotes the number of placements of $n$~rooks on the board such that exactly~$j$ of the rooks
stand on holes. That is:
$$N_j := \bigl\vert\bigl\{ \pi\in{\cal P} \mathbin{\bigl\vert} \vert H\cap T(\pi) \vert = j \bigr\}\bigr\vert.$$
\:$r_k$ is the number of ways to place $k$~rooks on the holes. In other words,
this is the number of $k$-element subsets of~$H$ such that no two elements share
a~common row or column.
\:$N$ is the generating function for the~$N_j$'s:
$$
N(x) = \sum_{j\ge 0} N_j x^j.
$$
As $N_j=0$ for $j>n$, this function is in fact a~finite polynomial.

\thmn{The number of restricted permutations, Stanley \cite{stanley:econe}}
The function~$N$ can be expressed in terms of the numbers~$r_k$ as:
$$
N(x) = \sum_{k=0}^n r_k \cdot (n-k)! \cdot (x-1)^k.
$$

If two polynomials of degree at most~$n$ coincide at more than~$n$ points, they
are identical, therefore it is sufficient to prove that the equality holds
for all $x\in{\bb N}^+$.
The left-hand side $N(x)$ counts the ways of placing~$n$ rooks on the board and labeling
each rook which stands on a~hole with an~element of~$[x]$. The right-hand
side counts the same: we can obtain any such configuration by placing $k$~rooks
on~$H$ first, labeling them with elements of~$\{2,\ldots,x\}$, placing
additional $n-k$ rooks on the remaining rows and columns (there are $(n-k)!$ ways
to do this) and labeling those of the new rooks standing on a~hole with~1.
\qed

\cor
When we substitute~$x=0$ into the above equality, we get a~formula for the
number of rook placements avoiding the holes altogether:
$$N_0 = N(0) = \sum_{k=0}^n (-1)^k \cdot (n-k)! \cdot r_k.$$

\example\id{hatcheck}%
Let us apply this theory to the hatcheck lady problem. The set~$H$ of holes is the main diagonal
of the board: $H=\{ (i,i) \mid i\in[n] \}$. When we want to place $k$~rooks on the holes,
we can do that in $r_k={n\choose k}$ ways. By the previous corollary, the number of
derangements is therefore:
$$
N_0 = \sum_{k=0}^n (-1)^k \cdot (n-k)! \cdot {n\choose k}
= \sum_{k=0}^n (-1)^k \cdot {n!\over k!}
= n! \cdot \sum_{k=0}^n {(-1)^k\over k!}.
$$
As the sum converges to~$1/e$ when $n$~approaches infinity, we know that the number
of derangements is asymptotically $n!/e$.
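
As a~quick sanity check of the formula, for $n=4$ it gives
$$N_0 = 24 - 24 + 12 - 4 + 1 = 9,$$
and indeed there are exactly nine derangements of~$[4]$; also $4!/e \approx 8.83$,
in accordance with the asymptotic estimate.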

\paran{Matchings and permanents}\id{matchper}%
Placements of~$n$ rooks (and therefore also restricted permutations) can
also be equated with perfect matchings in the restriction graph~$G$. The edges
of the matching correspond to the squares occupied by the rooks, the condition
that no two rooks share a~row or a~column translates to the edges not touching
each other, and the use of exactly~$n$ rooks is equivalent to the matching
being perfect.

There is also a~well-known correspondence between the perfect matchings
in a~bipartite graph and the non-zero summands in the formula for the permanent
of the bipartite adjacency matrix~$M$ of the graph. This holds because the
non-zero summands are in one-to-one correspondence with the placements
of~$n$ rooks on the corresponding board. The number $N_0$ is therefore
equal to the permanent of the matrix~$M$.
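
Recall that the permanent of an~$n\times n$ matrix~$M$ is given by the same formula
as the determinant, only without the signs:
$$\per M = \sum_{\pi\in{\cal P}} \prod_{i=1}^n M[i,\pi(i)],$$
so a~summand is non-zero exactly when the trace of~$\pi$ avoids the zero entries of~$M$,
i.e., when the corresponding rook placement avoids the holes.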

We summarize our observations in the following lemma:

\lemma\id{permchar}%
The following sets have the same cardinality:

\:permutations that obey a~given restriction graph~$G$,
\:non-attacking placements of rooks on an~$n\times n$ board avoiding holes
that correspond to non-edges of~$G$,
\:perfect matchings in the graph~$G$,
\:non-zero summands in the permanent of the bipartite adjacency matrix of~$G$.

Follows from \ref{rooksobs} and~\ref{matchper}.
\qed

The diversity of the characterizations of restricted permutations brings
both good and bad news. The good news is that we can use the
plethora of known results on bipartite matchings. Most importantly, we can efficiently
determine whether there exists at least one permutation satisfying a~given set of restrictions:

There is an~algorithm which decides in time $\O(n^{1/2}\cdot m)$ whether there exists
a~permutation satisfying a~given restriction graph, where $n$ and~$m$ denote the number
of vertices and edges of the restriction graph, respectively.

It is sufficient to verify that there exists a~perfect matching in the
given graph. By a~standard technique, this can be reduced in linear time to finding a~maximum
flow in a~suitable unit-capacity network. This flow can then be found using Dinic's
algorithm in time $\O(\sqrt{n}\cdot m)$.
(See Dinic \cite{dinic:flow} for the flow algorithm, Even and Tarjan \cite{even:dinic} for the time bound
and Schrijver \cite{schrijver} for more references on flows and matchings.)
\qed

The bad news is that computing the permanent is known to be~$\#\rm P$-complete even
for zero-one matrices (as proven by Valiant \cite{valiant:permanent}).
As a~ranking function for a~set of~matchings can be used to count all such
matchings, we obtain the following theorem:

If there is a~polynomial-time algorithm for lexicographic ranking of permutations with
a~set of restrictions which is a~part of the input, then $\rm P=\#P$.

We will show that a~polynomial-time ranking algorithm would imply a~po\-ly\-nom\-ial-time
algorithm for computing the permanent of an~arbitrary zero-one matrix, which
is a~$\#\rm P$-complete problem.

We know from Lemma \ref{permchar} that the non-zero
summands in the permanent of a~zero-one matrix~$M$ correspond to permutations restricted
by a~graph~$G$ whose bipartite adjacency matrix is~$M$. The permanent is
therefore equal to the number of such permutations, which is one more than the
rank of the lexicographically maximum such permutation (and zero if no such
permutation exists, which we can recognize by the previous existence test).
It therefore remains to show that we can find the lexicographically maximum
permutation permitted by~$G$ in polynomial time.
\looseness=1 %%HACK%%

We can determine $\pi[1]$ by trying all the possible values permitted by~$G$
in decreasing order and stopping as soon as we find a~value of $\pi[1]$ which can be
extended to a~complete permutation. This can be verified using the previous
theorem on~the graph of the remaining restrictions, i.e., on~$G$ with the vertex
1~on one side and the vertex~$\pi[1]$ on the other side removed.
Once we have~$\pi[1]$, we proceed by finding $\pi[2]$ in the same way, using the reduced
graph. This way we construct the whole maximum permutation~$\pi$
in~$\O(n^2)$ calls to the verification algorithm.
\qed

\paran{Recursive structure}%
However, the hardness of computing the permanent is the only obstacle.
We will show that whenever we are given a~set of restrictions for which
the counting problem is easy (and it remains easy for the subgraphs obtained
by deleting vertices), ranking is easy as well. The key will be once again
a~recursive structure, similar to the one we have seen in the case of plain
permutations in \ref{permrec}.
\looseness=1 %%HACK%%

As we will work with permutations on different sets simultaneously, we have
to extend our notation accordingly. For every finite set of elements $A\subset{\bb N}$,
we will consider the set ${\cal P}_A$ of all permutations on~$A$, as usual
viewed as ordered $\vert A\vert$-tuples. The restriction graph will be represented
by its adjacency matrix~$M\in \{0,1\}^{\vert A\vert\times \vert A\vert}$ and
a~permutation $\pi\in{\cal P}_A$ satisfies~$M$ (conforms to the restrictions)
iff $M[i,j]=1$ whenever $j=R_A(\pi[i])+1$.\foot{The $+1$ is added because
matrices are indexed from~1 while the lowest rank is~0.}
The set of all such~$\pi$ will be denoted by~${\cal P}_{A,M}$
and their number (which obviously does not depend on the choice of~$A$) by $N_0(M) = {\per M}$.

We will also frequently need to delete a~row and a~column simultaneously
from~$M$. This operation corresponds to the deletion of one vertex from each
part of the restriction graph. We will write $M^{i,j}$ for the matrix~$M$
with its $i$-th row and $j$-th column removed.

Let us consider a~permutation $\pi\in{\cal P}_A$ and $n=\vert A\vert$.
When we fix the value of the element $\pi[1]$, the remaining elements form
a~permutation $\pi'=\pi[2\ldots n]$ on the set~$A'=A\setminus\{\pi[1]\}$.
The permutation~$\pi$ satisfies the restriction matrix~$M$ if and only if
$M[1,a]=1$ for $a=R_A(\pi[1])+1$ and $\pi'$ satisfies the restriction matrix~$M'=M^{1,a}$.
This translates to the following counterparts of algorithms \ref{rankalg}
and \ref{unrankalg}:

\alg\id{rrankalg}%
$\<Rank>(\pi,i,n,A,M)$: Compute the lexicographic rank of a~permutation $\pi[i\ldots n]\in{\cal P}_{A,M}$.

\:If $i\ge n$, return 0.
\:$a\=R_A(\pi[i])$.
\:$b\=C_a=\sum_k N_0(M^{1,k})$ over all $k$ such that $1\le k\le a$ and \hbox{$M[1,k]=1$.\kern-3pt} %%HACK
\cmt{$C_a$ is the number of permutations in ${\cal P}_{A,M}$ whose first element lies
among the first $a$ elements of~$A$.}
\:Return $b + \<Rank>(\pi,i+1,n,A\setminus\{\pi[i]\},M^{1,a+1})$.

\>To calculate the rank of~$\pi\in{\cal P}_{A,M}$, we call $\<Rank>(\pi,1,\vert A\vert,A,M)$.

\alg\id{runrankalg}%
$\<Unrank>(j,i,n,A,M)$: Return an~array~$\pi$ such that $\pi[i\ldots n]$ is the $j$-th
permutation in~${\cal P}_{A,M}$.

\:If $i>n$, return $(0,\ldots,0)$.
\:Find the minimum $a$ such that $C_a > j$ (where $C_a$ is as in \<Rank> above).
\:$x\=R^{-1}_A(a-1)$.
\:$\pi\=\<Unrank>(j-C_{a-1}, i+1, n, A\setminus\{x\}, M^{1,a})$.
\:$\pi[i]\=x$.
\:Return~$\pi$.

\>To find the $j$-th permutation in~${\cal P}_{A,M}$, we call $\<Unrank>(j,1,\vert A\vert,A,M)$.
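
To illustrate the computation of the $C_a$'s, let us rank the derangement $\pi=(3,1,2)$
of~$[3]$, i.e., take $A=[3]$ and let~$M$ be the matrix with zeroes exactly on the main
diagonal. In the first level of the recursion we have $a=R_A(3)=2$ and
$C_2=N_0(M^{1,2})=1$, since the summand for $k=1$ is skipped ($M[1,1]=0$) and
$M^{1,2}$ has permanent~1. The deeper levels contribute nothing, so the rank is~1,
which agrees with $(2,3,1)$ and $(3,1,2)$ being the derangements of~$[3]$ in
lexicographic order.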

The time complexity of these algorithms will be dominated by the computation of
the numbers $C_a$, which requires a~linear number of calls to~$N_0$ on every
level of recursion, and by the matrix manipulations. Because of this,
we do not need any sophisticated data structure for the set~$A$; an~ordinary sorted array
will suffice. (Also, we cannot use the vector representation blindly, because
we have no guarantee that the word size is large enough.)

\thmn{Lexicographic ranking of restricted permutations}
Suppose that we have a~family of matrices ${\cal M}=\{M_1,M_2,\ldots\}$ such that $M_n\in \{0,1\}^{n\times n}$
and it is possible to calculate the permanent of~$M'$ in time $\O(t(n))$ for every matrix $M'$
obtained by deletion of rows and columns from~$M_n$. Then there exist algorithms
for ranking and unranking in ${\cal P}_{A,M_n}$ in time $\O(n^4 + n^2\cdot t(n))$
if $M_n$ and an~$n$-element set~$A$ are given as a~part of the input.

We combine the algorithms \ref{rrankalg} and \ref{runrankalg} with the supplied
function for computing the permanent. All matrices constructed by the algorithm
are submatrices of~$M_n$ of the required type, so each computation of the function~$N_0$
can be performed in time $\O(t(n))$.

The recursion is $n$~levels deep. Every level involves
a~constant number of (un)ranking operations on~$A$ and the computation of at most~$n$
of the $C_a$'s. Each such $C_a$ can be derived from~$C_{a-1}$ by constructing
a~submatrix of~$M$ (which takes $\O(n^2)$ time) and computing its $N_0$. We therefore
spend $\O(n^2)$ time on operations with the set~$A$, $\O(n^4)$ on matrix manipulations
and $\O(n^2\cdot t(n))$ on the computations of the~$N_0$'s.
\qed

\paran{Approximation}%
In cases where efficient evaluation of the permanent is out of our reach,
we can consider using the fully-polynomial randomized approximation scheme
for the permanent described by Jerrum, Sinclair and Vigoda \cite{jerrum:permanent}.
They have described a~randomized algorithm which, for every $\varepsilon>0$,
approximates the value of the permanent of an~$n\times n$ matrix with non-negative
entries. The output is within relative error~$\varepsilon$ of the correct value with
probability at least~$1/2$ and the algorithm runs in time polynomial in~$n$ and~$1/\varepsilon$.
From this, we can derive a~similar approximation scheme for the ranks.

\paran{Special restriction graphs}%
There are also deterministic algorithms for computing the number of perfect matchings
in various special graph families (which imply polynomial-time ranking algorithms for
the corresponding families of permutations). If the graph is planar, we can
use Kasteleyn's algorithm \cite{kasteleyn:crystals} based on Pfaffian
orientations, which runs in time $\O(n^3)$.
It has been recently extended to arbitrary surfaces by Yuster and Zwick
\cite{yuster:matching} and sped up to $\O(n^{2.19})$. The counting problem
for arbitrary minor-closed classes (cf.~Section \ref{minorclosed}) is still
open.

%--------------------------------------------------------------------------------

\section{Hatcheck lady and other derangements}

The time bound for ranking of general restricted permutations shown in the previous
section is obviously very coarse. Its main purpose was to demonstrate that
many special cases of the ranking problem can indeed be computed in polynomial time.
For most families of restriction matrices, we can do much better. One possible improvement
is to replace the matrix~$M$ by the corresponding restriction graph; instead of
copying the matrix at every level of recursion, we can then perform local operations on the graph
and undo them later. Another useful trick is to calculate the $N_0$'s of the smaller
matrices using information already computed for the larger matrices.

These speedups are hard to state formally in general (they depend on the
structure of the matrices), so we will concentrate on a~specific example
instead. We will show that for the derangements one can achieve linear time complexity.

As we already know, the hatcheck permutations correspond to restriction
matrices that contain zeroes only on the main diagonal, and to graphs that are
complete bipartite with the matching $\{(i,i) \mid i\in[n]\}$ deleted. For
a~given order~$n$, we will call this matrix~$D_n$ and the graph~$G_n$ and
we will show that the submatrices of~$D_n$ share several nice properties:

\lemma\id{submatrixlemma}%
Let $D$ be a~submatrix of~$D_n$ obtained by deletion of rows and columns.
Then the value of the permanent of~$D$ depends only on the size of~$D$
and on the number of zero entries in~$D$.

We know from Lemma~\ref{permchar} that the permanent counts matchings in the
graph~$G$ obtained from~$G_n$ by removing the vertices corresponding to the
deleted rows and columns of~$D_n$. Therefore we can prove the lemma for
the number of matchings instead.

As~$G_n$ is a~complete bipartite graph without the edges of a~single perfect matching,
the graph~$G$ must also be a~complete bipartite graph with some non-touching edges
missing. Two such graphs $G$ and~$G'$ are therefore isomorphic if and only if they have the
same number of vertices and also the same number of missing edges. As the
number of matchings is an~isomorphism invariant, the lemma follows.
\qed

There is a~clear combinatorial intuition behind this lemma: if we are
to count permutations with restrictions placed on~$z$ elements and these
restrictions are independent, it does not matter what exactly they look like.

Let $n_0(z,d)$ be the permanent shared by all submatrices as described
by the above lemma which are of size $d\times d$ and contain exactly~$z$ zeroes.

\lemma
The function~$n_0$ satisfies the following recurrence:
$$\eqalign{
n_0(0,d) &= d!, \cr
n_0(d,d) &= d! \cdot \sum\nolimits_{k=0}^d {(-1)^k \over k!}, \cr
n_0(z,d) &= z\cdot n_0(z-1,d-1) + (d-z)\cdot n_0(z,d-1) \quad\hbox{for $z<d$.} \cr
}\eqno{(*)}$$

The base cases of the recurrence are straightforward: $n_0(0,d)$ counts the
unrestricted permutations on~$[d]$, and $n_0(d,d)$ is equal to the number of derangements
on~$[d]$, which we have already computed in Example \ref{hatcheck}. Let us
prove the third formula.

We will count the permutations~$\pi$ restricted by a~matrix~$M$ with the given parameters
$z$ and~$d$. As $z<d$, there is at least one position in the permutation on which
no restriction is placed, and by Lemma~\ref{submatrixlemma} we can assume without
loss of generality that it is the first position.

If we select $\pi[1]$ from the~$z$ restricted elements, the rest of~$\pi$ is a~permutation
on the remaining elements with one restriction less and there are $n_0(z-1,d-1)$ such
permutations. On the other hand, if we use an~unrestricted element, all restrictions
stay in effect, so there are~$n_0(z,d-1)$ ways to complete the permutation.
\qed

\lemma
The function~$n_0$ also satisfies the following recurrence:
$$
n_0(z-1,d) = n_0(z,d) + n_0(z-1,d-1) \quad\hbox{for $z>0$, $d>0$.} \eqno{(\maltese)}
$$

We will again take advantage of Lemma~\ref{submatrixlemma}, which
allows us to choose arbitrary matrices with the given parameters. Let us pick a~matrix~$M_z$
containing $z$~zeroes such that $M_z[1,1]=0$. Then define~$M_{z-1}$, which is equal to~$M_z$
everywhere except that $M_{z-1}[1,1]=1$.

We will count the permutations $\pi\in {\cal P}_{[d]}$ satisfying~$M_{z-1}$ in two ways.
First, there are $n_0(z-1,d)$ such permutations. On the other hand, we can divide
them into two classes depending on whether $\pi[1]=1$. Those having $\pi[1]\ne 1$
are exactly the $n_0(z,d)$ permutations satisfying~$M_z$. The others correspond to
permutations $(\pi[2],\ldots,\pi[d])$ on $\{2,\ldots,d\}$ that satisfy~$M_z^{1,1}$,
so there are $n_0(z-1,d-1)$ of them.
\qed

\cor\id{nzeroprecalc}%
For a~given~$n$, a~table of the values $n_0(z,d)$ for all $0\le z\le d\le n$
can be precomputed in time~$\O(n^2)$.

Use either recurrence and induction on~$z+d$.
\qed
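
For example, for $d\le 3$ the recurrences yield the values
$$n_0(0,3)=6,\quad n_0(1,3)=4,\quad n_0(2,3)=3,\quad n_0(3,3)=2,$$
which are easy to verify directly: $n_0(2,3)$, say, counts the permutations of~$[3]$
avoiding two independent forbidden cells, and indeed $(*)$ gives
$n_0(2,3) = 2\cdot n_0(1,2) + 1\cdot n_0(2,2) = 2+1 = 3$.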

\cor\id{smalldiff}%
For every $0\le z<d$ we have $n_0(z,d) - n_0(z+1,d) \le n_0(z,d)/d$.

According to the recurrence $(\maltese)$, the difference $n_0(z,d) - n_0(z+1,d)$ is
equal to $n_0(z,d-1)$. We can bound this by plugging the trivial inequality $n_0(z,d-1) \le n_0(z-1,d-1)$
into~$(*)$, from which we obtain $n_0(z,d) \ge d\cdot n_0(z,d-1)$.
\qed

\paran{The algorithm}\id{rrankmod}%
Let us show how to modify the ranking algorithm (\ref{rrankalg}) using the insight
we have gained into the structure of derangements.

The algorithm uses the matrix~$M$ only for computing~$N_0$ of its submatrices
and we have shown that this value depends only on the order of the matrix and
the number of zeroes in it. We will therefore replace the maintenance of the matrix
by remembering the number~$z$ of its zeroes and the set~$Z$ that contains the elements
$x\in A$ whose locations are restricted (there is a~zero somewhere in the $(R_A(x)+1)$-th
column of~$M$). In other words, every $x\in Z$ can appear at all positions in the
permutation except one (and these forbidden positions are different for different~$x$'s),
while the remaining elements of~$A$ can appear anywhere.

As we have already observed (\ref{hatcheck}), the number of derangements on~$[n]$ is $\Theta(n!)$,
so we can again use word-sized vectors to represent the sets~$A$ and~$Z$, with insertion,
deletion, ranking and unranking on them in constant time.

When the algorithm selects a~submatrix $M'=M^{1,k}$ for an~element~$x$ whose rank is~$k-1$, this
matrix is described either by the choice of $z'=z-1$ and~$Z'=Z\setminus\{x\}$ (if $x\in Z$)
or by $z'=z$ and $Z'=Z$ (if $x\not\in Z$).
All computations of~$N_0$ in the algorithm can therefore be replaced by looking
up the appropriate $n_0(z',\vert A\vert-1)$ in the precomputed table. Moreover, we can
calculate a~single~$C_a$ in constant time, because all summands are either $n_0(z,\vert A\vert-1)$
or $n_0(z-1,\vert A\vert-1)$, depending on the set~$Z$. We get:
$$C_a = r\cdot n_0(z-1,\vert A\vert-1) + (a-r) \cdot n_0(z,\vert A\vert-1),$$
where $r=R_Z(R^{-1}_A(a))$, that is the number of restricted elements among the $a$~smallest ones in~$A$.

All operations at a~single level of the \<Rank> function now run in constant time,
but \<Unrank> needs to search among the~$C_a$'s to find the first of them which
exceeds the given rank. We could use binary search, but that would take $\Theta(\log n)$
time. There is however a~clever trick: the value of~$C_a$ does not vary too much with
the set~$Z$. Specifically, by Corollary~\ref{smalldiff} the difference between the values
for $Z=\emptyset$ and $Z=A$ is at most $n_0(z-1,\vert A\vert -1)$. It is therefore
sufficient to just divide the rank by $n_0(z-1,\vert A\vert-1)$ and we get either
the correct value of~$a$ or one more. Both possibilities can be checked in constant time.

We can therefore conclude this section with the following theorem:

\thmn{Ranking of derangements}%
For every~$n$, the derangements on the set~$[n]$ can be ranked and unranked according to the
lexicographic order in time~$\O(n)$ after spending $\O(n^2)$ on the initialization of auxiliary tables.

We modify the general algorithms for (un)ranking of restricted permutations (\ref{rrankalg} and \ref{runrankalg})
as described above (\ref{rrankmod}). Each of the $n$~levels of recursion will then run in constant time. The values~$n_0$ will
be looked up in a~table precalculated in quadratic time as shown in Corollary~\ref{nzeroprecalc}.
\qed