+\para
+The heap algorithms we have just described have been built from primitives
+operating in constant time, with one notable exception: the extraction
+$x[B]$ of all bits of~$x$ at positions specified by the set~$B$. This cannot be done
+in~$\O(1)$ time on the Word-RAM, but we can implement it with ${\rm AC}^0$
+instructions as suggested by Andersson in \cite{andersson:fusion}, or even
+with the ${\rm AC}^0$ instructions available on real processors (see Thorup
+\cite{thorup:aczero}). On the Word-RAM, we need to exploit the fact
+that the set~$B$ does not change too much --- there are only $\O(1)$ changes
+per Q-heap operation. As Fredman and Willard have shown, it is possible
+to maintain a~``decoder'', whose state is stored in $\O(1)$ machine words,
+and which helps us to extract $x[B]$ in a~constant number of operations.
+
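+Before stating this precisely, let us pin down what $x[B]$ computes.
+The following is a~minimal portable sketch in~C, assuming that~$B$ is
+represented as a~bit mask with one bit per selected position; the loop
+takes time proportional to the word size, so it merely specifies the
+operation instead of implementing it in constant time.
+
+\begin{verbatim}
+#include <stdint.h>
+
+/* Extract the bits of x at the positions set in B and pack them
+ * together at the bottom of the result, lowest position first.
+ * Time O(word size) -- a specification, not the fast algorithm. */
+static uint64_t extract_naive(uint64_t x, uint64_t B)
+{
+    uint64_t result = 0;
+    for (unsigned pos = 0, out = 0; pos < 64; pos++)
+        if (B & (1ULL << pos))
+            result |= ((x >> pos) & 1) << out++;
+    return result;
+}
+\end{verbatim}
+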
+\lemman{Extraction of bits}\id{qhxtract}%
+Under the assumptions on~$k$, $W$ and the preprocessing time as in the Q-heaps,\foot{%
+Actually, this is the only place where we need~$k$ to be as low as $W^{1/4}$.
+In the ${\rm AC}^0$ implementation, it is enough to ensure $k\log k\le W$.
+On the other hand, we need not care about the exponent because it can
+be arbitrarily increased using the Q-heap trees described below.}
+it is possible to maintain a~data structure for a~set~$B$ of bit positions,
+which allows~$x[B]$ to be extracted in $\O(1)$ time for an~arbitrary~$x$.
+When a~single element is inserted into~$B$ or deleted from~$B$, the structure
+can be updated in constant time, as long as $\vert B\vert \le k$.
+
+\proof
+See Fredman and Willard \cite{fw:transdich}.
+\qed
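+
+\rem
+As an~aside, x86-64 processors with the BMI2 extension offer the PEXT
+instruction, which computes exactly $x[B]$ for a~mask-represented~$B$ in
+a~single instruction. The sketch below (a~stand-in for hardware support,
+not the Fredman--Willard decoder) keeps~$B$ as a~plain mask word, so
+inserting or deleting a~position is a~single bit flip:
+
+\begin{verbatim}
+#include <stdint.h>
+#include <immintrin.h>          /* _pext_u64; compile with -mbmi2 */
+
+/* B kept as a mask word: updates are single bit flips. */
+static void B_insert(uint64_t *B, unsigned pos) { *B |=  1ULL << pos; }
+static void B_delete(uint64_t *B, unsigned pos) { *B &= ~(1ULL << pos); }
+
+/* x[B] in a single instruction on BMI2 hardware. */
+static uint64_t extract(uint64_t x, uint64_t B)
+{
+    return _pext_u64(x, B);
+}
+\end{verbatim}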
+
+\para
+This was the last missing bit of the mechanics of the Q-heaps. We are
+therefore ready to conclude this section with the following theorem
+and its consequences:
+
+\thmn{Q-heaps, Fredman and Willard \cite{fw:transdich}}\id{qh}%
+Let $W$ and~$k$ be positive integers such that $k=\O(W^{1/4})$. Let~$Q$
+be a~Q-heap of at most~$k$ elements of $W$~bits each. Then the Q-heap
+operations \ref{qhfirst} to \ref{qhlast} on~$Q$ (insertion, deletion,
+search for a~given value and search for the $i$-th smallest element)
+run in constant time on a~Word-RAM with word size~$W$, after spending
+time $\O(2^{k^4})$ on the same RAM to precompute the tables.
+
+\proof
+Every operation on the Q-heap can be performed in a~constant number of
+vector operations and rank calculations. Each rank is computed
+in $\O(1)$ steps, which again involve $\O(1)$ vector operations, binary
+logarithms and bit extractions. All of these can be evaluated in constant
+time using the results of Section \ref{bitsect} and Lemma \ref{qhxtract}.
+\qed
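+
+\rem
+To illustrate the rank computations the proof refers to, here is
+a~minimal sketch of the classical word-parallel comparison, assuming
+seven 8-bit keys packed into a~64-bit word with one guard bit per field.
+All constants are illustrative and unrelated to the actual Q-heap
+parameters; a~real implementation would also handle partially filled
+words and combine the rank with the bit extraction above.
+
+\begin{verbatim}
+#include <stdint.h>
+
+/* K keys of B bits each, every key stored in a (B+1)-bit field whose
+ * top (guard) bit is kept zero; 7 * 9 = 63 bits fit into one word. */
+#define K 7
+#define B 8
+
+static const uint64_t LO = 0x0040201008040201ULL; /* bit 0 of each field   */
+#define HI (LO << B)                            /* guard bit of each field */
+
+/* Number of packed keys strictly smaller than x (x must fit in B bits).
+ * Per field, (y | 2^B) - x keeps the guard bit set exactly when y >= x,
+ * and the guard bits stop borrows from crossing field boundaries. */
+static int rank_lt(uint64_t packed, uint64_t x)
+{
+    uint64_t xrep = x * LO;               /* replicate x into all fields */
+    uint64_t cmp  = (packed | HI) - xrep; /* parallel comparison         */
+    return K - __builtin_popcountll(cmp & HI);  /* GCC/Clang builtin     */
+}
+\end{verbatim}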
+
+\rem
+We can also use the Q-heaps as building blocks of more complex structures
+like atomic heaps and AF-heaps (see once again \cite{fw:transdich}). We will
+show a~simpler, but useful construction, sometimes called the \df{Q-heap tree.}
+Suppose we have a~Q-heap of capacity~$k$ and a~parameter $d\in{\bb N}^+$. We
+can build a~balanced $k$-ary tree of depth~$d$ such that its leaves contain
+a~given set and every internal vertex keeps the minimum value in the subtree
+rooted at it, together with a~Q-heap containing the values in all its sons.
+This allows the minimum to be extracted in constant time (it is stored in the root),
+and whenever an~element changes, it suffices to recalculate the values
+on the path from that element to the root, which takes $\O(d)$ Q-heap
+operations.
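+
+Below is a~minimal sketch of this skeleton, assuming a~complete $4$-ary
+tree of depth~$3$ stored in an~array. For brevity the per-vertex Q-heap
+is replaced by a~direct scan over the sons, so one update costs $\O(dk)$
+here instead of $\O(d)$ Q-heap operations; the walk from the leaf to the
+root is the same.
+
+\begin{verbatim}
+#include <stdint.h>
+
+/* Complete 4-ary tree of depth 3: 1 + 4 + 16 = 21 inner vertices
+ * followed by 4^3 = 64 leaves.  Every inner vertex caches the minimum
+ * of its subtree.  Initialize all entries to UINT64_MAX ("empty"). */
+#define K          4
+#define NODES      85
+#define LEAVES     64
+#define FIRST_LEAF (NODES - LEAVES)
+
+static uint64_t node[NODES];          /* node 0 is the root */
+
+static uint64_t minimum(void) { return node[0]; }
+
+static void update(int leaf, uint64_t value)   /* 0 <= leaf < LEAVES */
+{
+    int v = FIRST_LEAF + leaf;
+    node[v] = value;
+    while (v > 0) {                   /* recompute the path to the root */
+        int parent = (v - 1) / K, son = K * parent + 1;
+        uint64_t min = node[son];     /* stand-in for the son Q-heap */
+        for (int i = 1; i < K; i++)
+            if (node[son + i] < min)
+                min = node[son + i];
+        node[parent] = min;
+        v = parent;
+    }
+}
+\end{verbatim}
+Deleting an~element then amounts to an~update back to UINT64_MAX.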
+
+\corn{Q-heap trees}\id{qhtree}%
+For every positive integer~$r$ and $\delta>0$ there exists a~data structure
+capable of maintaining the minimum of a~set of at most~$r$ word-sized numbers
+under insertions and deletions. Each operation takes $\O(1)$ time on a~Word-RAM
+with word size $W=\Omega(r^{\delta})$, after spending time
+$\O(2^{r^\delta})$ to precompute the tables.
+
+\proof
+Choose $\delta'\le\delta$ such that $r^{\delta'} = \O(W^{1/4})$; since
+$W=\Omega(r^{\delta})$, the choice $\delta'=\delta/4$ works. Build
+a~Q-heap tree of depth $d=\lceil 1/\delta'\rceil$ containing Q-heaps of
+size $k=r^{\delta'}$. The tree has $k^d = r^{\delta' d} \ge r$ leaves, its
+depth is a~constant, and the tables are precomputed in time
+$\O(2^{k^4}) = \O(2^{r^{4\delta'}}) = \O(2^{r^\delta})$. \qed
+
+\rem\id{qhtreerem}%
+When we have an~algorithm whose input has size~$N$, the word size is at least~$\log N$
+and we can afford to spend time $\O(N)$ on preprocessing. We can therefore choose
+$r=\log N$ and $\delta=1$ in the above corollary: the precomputation takes time
+$\O(2^{r^\delta}) = \O(2^{\log N}) = \O(N)$ and we get a~heap of capacity $\log N$
+with constant time per operation.
+