going from the $(i+1)$-th to the $i$-th level of the tree.
The levels are numbered from the bottom, so leaves are at level~0, the root
is at level $\ell\le \lceil \log_2 n\rceil$ and there are $n_i\le n/2^i$ vertices
at the $i$-th level, so we consider exactly $n_i$ edges. To avoid taking a~logarithm
of zero, we will define $\vert T_e\vert=1$ for $T_e=\emptyset$.
\def\eqalign#1{\null\,\vcenter{\openup\jot
\ialign{\strut\hfil$\displaystyle{##}$&$\displaystyle{{}##}$\hfil
\crcr#1\crcr}}\,}
\para
When we combine this lemma with the above reduction, we get the following theorem:
\thmn{Verification of the MST, Koml\'os \cite{komlos:verify}}\id{verify}%
For every weighted graph~$G$ and its spanning tree~$T$, it is sufficient to
perform $\O(m)$ comparisons of edge weights to determine whether~$T$ is minimum
and to find all $T$-light edges in~$G$.
We will show soon that it is possible to precompute a~table of any reasonable
function whose arguments fit in two slots.
\em{Top masks:} The array~$T_e$ will be represented by a~bit mask~$M_e$ called the \df{top mask.} For each
of the possible tops~$t$ (i.e., the ancestors of the current vertex), we store
a~single bit telling whether $t\in T_e$. Each top mask fits in $\lceil\log n\rceil$
bits and therefore in a~single machine word. If needed, it can be split to three slots.
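As a~concrete illustration (a minimal sketch, not the RAM implementation itself), a~top mask behaves like an ordinary integer in which bit~$d$ corresponds to the ancestor at depth~$d$; the helper names and this depth numbering are assumptions for the example:

```python
# A top mask stores one bit per ancestor of the current vertex: bit d is set
# iff the ancestor at depth d is a top of some active query path.
# (Helper names and the depth numbering are illustrative assumptions.)

def add_top(mask, depth):
    """Mark the ancestor at the given depth as an active top."""
    return mask | (1 << depth)

def is_top(mask, depth):
    """Test whether the ancestor at the given depth is an active top."""
    return (mask >> depth) & 1 == 1

mask = add_top(add_top(0, 0), 3)   # tops at depths 0 and 3
assert is_top(mask, 3)
assert not is_top(mask, 2)
```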
\em{Small and big lists:} The heaviest edge found so far for each top is stored
in the array~$H_e$; we store the labels of these edges in a~list encoded in a~bit string.
Depending on the size of the list, we use one of two possible encodings:
\df{Small lists} are stored in a~vector which fits in a~single slot, with
the unused fields filled by a~special constant, so that we can infer the
length of the list.
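The encoding of a~small list can be sketched as follows; the field width, the choice of the special constant, and the packing order are assumptions for illustration, and the loops stand in for what the RAM model does in constant time via precomputed tables:

```python
FIELD_BITS = 8                   # assumed width of one field of the list
EMPTY = (1 << FIELD_BITS) - 1    # special constant marking unused fields

def pack_small_list(items, capacity):
    """Pack items (each < EMPTY) into one word; unused fields get EMPTY."""
    word = 0
    for i in range(capacity):
        value = items[i] if i < len(items) else EMPTY
        word |= value << (i * FIELD_BITS)
    return word

def small_list_length(word, capacity):
    """Infer the length by finding the first field equal to EMPTY."""
    for i in range(capacity):
        if (word >> (i * FIELD_BITS)) & EMPTY == EMPTY:
            return i
    return capacity

w = pack_small_list([5, 9, 2], 4)
assert small_list_length(w, 4) == 3
```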
If the data do not fit in a~small list, we use a~\df{big list} instead, which
is stored in $\O(\log\log n)$ words, each of them containing a~slot-sized
vector. Unlike the small lists, we use the big lists as arrays. If a~top~$t$ of
depth~$d$ is active, we keep the corresponding entry of~$H_e$ in the $d$-th
field of the big list. Otherwise, we keep that entry unused.
We will want to perform all operations on small lists in constant time,
but we can afford spending time $\O(\log\log n)$ on every big list. To make
this possible, we will introduce \df{pointers} as another kind of edge identifiers.
A~pointer is an~index to the nearest big list on the path from the small
list containing the pointer to the root. As each big list has at most $\lceil\log n\rceil$
fields, the pointer fits in~$\ell$ bits, but we need one extra bit to distinguish
between normal labels and pointers.
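The tagging of pointers by an extra flag bit can be sketched like this; the pointer width is an assumed constant and the helper names are hypothetical:

```python
ELL = 5                   # assumed number of bits of a pointer
POINTER_FLAG = 1 << ELL   # extra bit distinguishing pointers from labels

def make_pointer(index):
    """Tag an index into the nearest big list as a pointer."""
    return POINTER_FLAG | index

def is_pointer(field):
    """Test the flag bit."""
    return field & POINTER_FLAG != 0

def pointer_index(field):
    """Strip the flag bit to recover the index."""
    return field & (POINTER_FLAG - 1)

p = make_pointer(3)
assert is_pointer(p) and pointer_index(p) == 3
assert not is_pointer(3)   # a plain label keeps the flag bit clear
```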
\lemma
With the precomputed tables, the following operations on slots and slot-sized
bit masks can be performed in constant time:

\itemize\ibull
\:$\<FindKth>(m,k)$ --- for a~slot-sized bit mask~$m$, it returns the position of its
$k$-th one.

\:$\<Bits>(m)$ --- for a~slot-sized bit mask~$m$, it returns a~small list
of the positions of bits set in~$m$.

\:$\<Select>(x,m)$ --- constructs a~slot containing the substring of~$x$
selected by the bits set in~$m$.

\:$\<SubList>(x,m)$ --- when~$x$ is a~small list and~$m$ a~bit mask, it returns
a~small list containing the elements of~$x$ selected by the bits set in~$m$.
\endlist
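In the RAM model these operations are single table lookups; the following sketch merely restates the semantics of \<Bits> and \<FindKth> with explicit loops, so the loop bodies are an illustration rather than the constant-time implementation:

```python
def bits(m):
    """Semantics of <Bits>: positions of set bits of m, lowest first."""
    return [d for d in range(m.bit_length()) if (m >> d) & 1]

def find_kth(m, k):
    """Semantics of <FindKth>: position of the k-th one of m (1-based)."""
    return bits(m)[k - 1]

assert bits(0b10110) == [1, 2, 4]
assert find_kth(0b10110, 2) == 2
```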
\para
We will now show how to perform all parts of the procedure \<FindHeavy>
in the required time. We will denote the size of the tree by~$n$ and the
number of query paths by~$q$.
\lemma
Depths of all vertices and all top masks can be computed in time $\O(n+q)$.
\proof
Run depth-first search on the tree, assign the depth of a~vertex when entering
it and construct its top mask when leaving it. The top mask can be obtained
by $\bor$-ing the masks of its sons, excluding the level of the sons and
including the tops of all query paths which have their bottom at the current vertex
(the depths of the tops are already assigned).
\qed
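The DFS from this proof can be sketched as follows; the data layout (`children` and `query_tops` dictionaries keyed by vertex) is an assumption for illustration:

```python
# Sketch of the proof's DFS. children[v] lists the sons of v; query_tops[v]
# lists the depths of tops of query paths whose bottom is v. The top mask of
# v is the OR of the sons' masks with the bit of the sons' level cleared,
# plus the tops of paths starting at v.

def compute_masks(root, children, query_tops):
    depth, mask = {}, {}
    def dfs(v, d):
        depth[v] = d                      # assign depth on entry
        m = 0
        for w in children.get(v, []):
            m |= dfs(w, d + 1) & ~(1 << (d + 1))  # exclude the sons' level
        for t in query_tops.get(v, []):
            m |= 1 << t                   # tops with bottom at v
        mask[v] = m                       # constructed on leaving v
        return m
    dfs(root, 0)
    return depth, mask

children = {'r': ['a'], 'a': ['b']}
query_tops = {'b': [0]}   # one query path from b up to the root
depth, mask = compute_masks('r', children, query_tops)
assert depth == {'r': 0, 'a': 1, 'b': 2}
assert mask['b'] == 1     # only the root (depth 0) is a top for b
```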
\lemma\id{verth}%
The arrays $T_e$ and~$H_e$ can be indexed in constant time.
\proof
Indexing~$T_e$ is exactly the operation \<FindKth> applied on the corresponding
top mask~$M_e$.

If $H_e$ is stored in a~big list, we calculate the index of the particular
slot and the position of the field inside the slot. This field can then be
extracted using bit masking and shifts.

If it is a~small list, we extract the field directly, but we have to
dereference it in case it is a~pointer. We modify the recursion in \<FindHeavy>
to pass the depth of the lowest edge endowed with a~big list, and when we
encounter a~pointer, we index this big list.
\qed
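The big-list case of this calculation can be sketched as follows; the field width and the number of fields per slot are assumed constants:

```python
FIELD_BITS = 8            # assumed width of one field
FIELDS_PER_SLOT = 4       # assumed number of fields per slot

def big_list_get(slots, d):
    """Extract the field for depth d from a big list stored as slots."""
    slot = slots[d // FIELDS_PER_SLOT]            # index of the slot
    shift = (d % FIELDS_PER_SLOT) * FIELD_BITS    # position inside the slot
    return (slot >> shift) & ((1 << FIELD_BITS) - 1)

slots = [0x04030201, 0x08070605]
assert big_list_get(slots, 0) == 0x01
assert big_list_get(slots, 5) == 0x06
```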
\lemma\id{verhe}%
For an~arbitrary active top~$t$, the corresponding entry of~$H_e$ can be
extracted in constant time.

\proof
We look up the precomputed depth~$d$ of~$t$ first.
If $H_e$ is stored in a~big list, we extract the $d$-th entry of the list.
If the list is small, we find the position of the particular field
by counting bits of the top mask~$M_e$ at position~$d$ and higher
(this is \<Weight> of $M_e$ with the lower bits masked out).
\qed
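The small-list case of this lookup can be sketched as follows; counting set bits stands in for \<Weight>, and the 1-based, deepest-first field layout is an assumption of the example:

```python
def small_list_position(mask, d):
    """Number of set bits of the top mask at position d and higher;
    under the assumed layout this is the (1-based) index of the field
    belonging to the top of depth d."""
    return bin(mask >> d).count("1")

mask = (1 << 1) | (1 << 3) | (1 << 6)   # tops at depths 1, 3 and 6
assert small_list_position(mask, 6) == 1
assert small_list_position(mask, 3) == 2
assert small_list_position(mask, 1) == 3
```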
+
\lemma\id{verfh}%
The procedure \<FindHeavy> processes an~edge~$e$ in time $\O(\log \vert T_e\vert + q_e)$,
where $q_e$~is the number of query paths having~$e$ as their bottommost edge.
+
\proof
The edge is examined in steps 1, 2, 4 and~5 of the algorithm. Step~4 is trivial
as we have already computed the top masks and we can reconstruct the entries
of~$T_e$ in constant time (Lemma \ref{verth}). We have to perform the other
steps in constant time if $H_e$ is a~small list, or in time $\O(\log\log n)$
if it is big.

\em{Step 5} involves binary search on~$H_e$ in $\O(\log\vert T_e\vert)$ comparisons,
each of them indexing~$H_e$, which is $\O(1)$ by Lemma \ref{verth}. Rewriting the
lighter edges is $\O(1)$ for small lists by replication and bit masking; for a~big
list, we do the same for each of its slots.

\em{Step 1} looks up $q_e$~tops in~$H_e$ and we already know from Lemma \ref{verhe}
how to do that in constant time.

\em{Step 2} is the only non-trivial one. We already know which tops to select
(we have the top masks $M_e$ and~$M_p$ precomputed), but we have to carefully
extract the sublist.
We have to handle these four cases:
+
\itemize\ibull
\:\em{Small from small:} We use \<Select> to find the fields of~$H_p$ which
shall be deleted. Then we apply \<SubList> to actually delete them. Pointers
can be retained as they still refer to the same ancestor list.

\:\em{Big from big:} We can copy the whole~$H_p$, since the layout of the
big lists is fixed and the items we do not want simply end up as unused
fields in~$H_e$.

\:\em{Small from big:} We use the operation \<Bits> to construct a~list
of pointers (we use bit masking to add the ``this is a~pointer'' flags).

\:\em{Big from small:} First we have to dereference the pointers in the
small list~$S$. For each slot~$B_i$ of the ancestor big list, we construct
a~subvector of~$S$ containing only the pointers referring to that slot,
adjusted to be relative to the beginning of the slot (we use \<Compare>
and \<Replicate> from Algorithm \ref{vecops} and bit masking). Then we
use a~precomputed table to replace the pointers by the fields of~$B_i$
they point to, and we $\bor$ together the partial results.

Then we have to spread the fields of the small list to the whole big list.
This is similar: for each slot of the big list, we find the part of the small
list keeping the fields we want (\<Weight> on the sub-words of~$M_e$ before
and above the intended interval of depths) and we use a~precomputed table
to shift the fields to the right locations in the slot (controlled by the
sub-word of~$M_e$ in the intended interval).
\qeditem
\endlist
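The semantics of \<SubList>, on which the small-from-small case relies, can be restated in a~loop-based sketch; the field width is assumed, and in the RAM model the whole operation is a single precomputed-table lookup rather than a loop:

```python
FIELD_BITS = 8   # assumed width of one field of a small list

def sublist(x, m, capacity):
    """Semantics of <SubList>: keep the fields of the small list x whose
    index has the corresponding bit set in m, compacting them downward."""
    out, pos = 0, 0
    for i in range(capacity):
        if (m >> i) & 1:
            field = (x >> (i * FIELD_BITS)) & ((1 << FIELD_BITS) - 1)
            out |= field << (pos * FIELD_BITS)
            pos += 1
    return out

x = 0x04030201                      # small list with fields 1, 2, 3, 4
assert sublist(x, 0b0101, 4) == 0x0301   # keep fields 0 and 2
```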
+
\thmn{Verification of MST on the RAM}\id{ramverify}%
There is a~RAM algorithm which for every weighted graph~$G$ and its spanning tree~$T$
determines whether~$T$ is minimum and finds all $T$-light edges in~$G$ in time $\O(m)$.

\proof
Implement Koml\'os's algorithm from Theorem \ref{verify} with the data
structures developed in this section.
According to Lemma \ref{verfh}, it runs in time $\sum_e \O(\log\vert T_e\vert + q_e)
= \O(\sum_e \log\vert T_e\vert) + \O(\sum_e q_e)$. The second sum is $\O(m)$
as there are $\O(1)$ query paths per edge, and the first sum is $\O(\#\hbox{comparisons})$,
which is $\O(m)$ by Theorem \ref{verify}.
\qed
\FIXME{Mention online version}
\FIXME{Mention Buchsbaum}
%--------------------------------------------------------------------------------