\n{$W$}{word size of the RAM \[wordsize]}
\n{$\(x)$}{number~$x\in{\bb N}$ written in binary \[bitnota]}
\n{$\(x)_b$}{$\(x)$ zero-padded to exactly $b$ bits \[bitnota]}
\n{$x[i]$}{when $x\in{\bb N}$: the value of the $i$-th bit of~$x$ \[bitnota]}
\n{$\pi[i]$}{when $\pi$ is a~sequence: the $i$-th element of~$\pi$, starting with $\pi[1]$ \[brackets]}
\n{$\sigma^k$}{the string~$\sigma$ repeated $k$~times \[bitnota]}
\n{$\0$, $\1$}{bits in a~bit string \[bitnota]}
\n{$\equiv$}{congruence modulo a~given number}
\qed
\example
If we store~$A$ in an~ordinary array, we have insertion and deletion in constant time,
but ranking and unranking in~$\O(n)$, so $t(n)=\O(n)$ and the algorithm is quadratic.
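
For concreteness, a~minimal sketch of the array variant follows, assuming that the
``ordinary array'' is a~characteristic array indexed by element value (the interface
and the names are illustrative only, not taken from the text): insertion and deletion
flip a~single flag, while ranking and unranking scan the whole array.

\begin{verbatim}
/* Illustrative sketch only: the set A <= {1,...,n} kept as a
 * characteristic array.  Insert/Delete run in O(1), Rank/Unrank in O(n). */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int n;          /* universe size: elements are 1..n */
    bool *present;  /* present[x] is true iff x lies in A */
} ArraySet;

static ArraySet *aset_create(int n)
{
    ArraySet *s = malloc(sizeof(*s));
    s->n = n;
    s->present = calloc(n + 1, sizeof(bool));
    return s;
}

/* Insert and Delete: O(1), just set or clear the flag. */
static void aset_insert(ArraySet *s, int x) { s->present[x] = true; }
static void aset_delete(ArraySet *s, int x) { s->present[x] = false; }

/* Rank(x): the number of elements of A smaller than x, by an O(n) scan. */
static int aset_rank(const ArraySet *s, int x)
{
    int r = 0;
    for (int i = 1; i < x; i++)
        r += s->present[i];
    return r;
}

/* Unrank(k): the element of A preceded by exactly k elements of A, O(n) scan. */
static int aset_unrank(const ArraySet *s, int k)
{
    for (int i = 1; i <= s->n; i++)
        if (s->present[i] && k-- == 0)
            return i;
    return -1;  /* k out of range */
}

int main(void)
{
    ArraySet *s = aset_create(10);
    for (int i = 1; i <= 10; i++)
        aset_insert(s, i);
    aset_delete(s, 4);
    printf("%d %d\n", aset_rank(s, 7), aset_unrank(s, 3));  /* prints "5 5" */
    free(s->present);
    free(s);
    return 0;
}
\end{verbatim}
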
Binary search trees give $t(n)=\O(\log n)$. The data structure of Dietz \cite{dietz:oal}
improves it to $t(n)=\O(\log n/\log \log n)$. In fact, all these variants are equivalent