\lemma\id{verhe}%
For an~arbitrary active top~$t$, the corresponding entry of~$H_e$ can be
extracted in constant time.

\proof
We look up the precomputed depth~$d$ of~$t$ first.
If $H_e$ is stored in a~big list, we extract the $d$-th entry of the list.
If the list is small, we find the position of the particular field
by counting bits of the top mask~$M_e$ at position~$d$ and higher
(this is \<Weight> of $M_e$ with the lower bits masked out).
\qed

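The small-list lookup above can be sketched in Python. The field width~$b$, the packing order (the deepest top in the lowest field) and all helper names are our illustrative assumptions, not part of the structure's definition:

```python
def weight(x):
    """Population count -- the <Weight> vector operation."""
    return bin(x).count("1")

def small_list_entry(H, M, d, b):
    """Extract the entry of a small list H for the active top at depth d.

    M is the top mask (bit i set iff there is an active top at depth i),
    b is the field width in bits.  The position of the field is obtained
    by counting the bits of M at position d and higher, as in the proof.
    All parameter names are illustrative assumptions."""
    assert (M >> d) & 1, "no active top at depth d"
    pos = weight(M >> d) - 1   # <Weight> of M with the lower bits masked out
    return (H >> (pos * b)) & ((1 << b) - 1)
```

On a real word-RAM, the population count is a single vector operation, so the whole lookup is constant time.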
\lemma\id{verfh}%
The procedure \<FindHeavy> processes an~edge~$e$ in time $\O(\log \vert T_e\vert + q_e)$,
where $q_e$~is the number of query paths having~$e$ as their bottommost edge.

\proof
The edge is examined in steps 1, 2, 4 and~5 of the algorithm. Step~4 is trivial
as we have already computed the top masks and we can reconstruct the entries
of~$T_e$ in constant time (Lemma \ref{verth}). We will show that the other
steps take constant time if $H_e$ is a~small list, or $\O(\log\log n)$ if it is big.

\em{Step 5} involves binary search on~$H_e$ in $\O(\log\vert T_e\vert)$ comparisons,
each of which indexes~$H_e$, taking $\O(1)$ by Lemma \ref{verhe}. Rewriting the
lighter edges is $\O(1)$ for small lists by replication and bit masking; for a~big
list we do the same for each of its slots.

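The replication-and-masking rewrite of step 5 can be sketched as follows, assuming the lighter candidate edges occupy the fields below the position found by the binary search; the field width, the position convention and all names are our own illustration:

```python
def replicate(v, b, k):
    """Copy a b-bit value v into each of k consecutive fields (<Replicate>)."""
    r = 0
    for _ in range(k):
        r = (r << b) | v
    return r

def rewrite_lighter(H, p, v, b):
    """Replace every b-bit field of H strictly below position p by v,
    mimicking step 5's rewriting of the lighter edges.  The assumption
    that lighter edges sit in the lower fields is ours, for illustration."""
    low = (1 << (p * b)) - 1              # mask covering fields 0 .. p-1
    return (H & ~low) | (replicate(v, b, p) & low)
```

For a~big list the same two-instruction rewrite is repeated once per slot.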
\em{Step 1} looks up $q_e$~tops in~$H_e$ and we already know from Lemma \ref{verhe}
how to do that in constant time per top.

\em{Step 2} is the only non-trivial one. We already know which tops to select
(we have the top masks $M_e$ and~$M_p$ precomputed), but we have to carefully
extract the sublist.
We have to handle the following four cases:

\itemize\ibull
\:\em{Small from small:} We use \<Select> to find the fields of~$H_p$ which
shall be deleted. Then we apply \<SubList> to actually delete them. Pointers
can be retained as they still refer to the same ancestor list.

\:\em{Big from big:} We can copy the whole~$H_p$, since the layout of the
big lists is fixed and the items we do not want simply end up as unused
fields in~$H_e$.

\:\em{Small from big:} We use the operation \<Bits> to construct a~list
of pointers (we use bit masking to add the ``this is a~pointer'' flags).

\:\em{Big from small:} First we have to dereference the pointers in the
small list~$S$. For each slot~$B_i$ of the ancestor big list, we construct
a~subvector of~$S$ containing only the pointers referring to that slot,
adjusted to be relative to the beginning of the slot (we use \<Compare>
and \<Replicate> from Algorithm \ref{vecops} and bit masking). Then we
use a~precomputed table to replace the pointers by the fields of~$B_i$
they point to. Finally, we $\bor$ together the partial results.

It remains to spread the fields of the small list to the whole big list.
This is similar: for each slot of the big list, we find the part of the small
list keeping the fields we want (\<Weight> on the sub-words of~$M_e$ before
and above the intended interval of depths) and we use a~precomputed table
to shift the fields to the right locations in the slot (controlled by the
sub-word of~$M_e$ in the intended interval).
\qeditem
\endlist

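The sublist extraction at the heart of step 2 (the small-from-small case) can be sketched as a compaction of selected fields. The Python loop below only conveys the semantics of \<SubList>; the actual operation runs in constant time via precomputed tables, and all names are illustrative:

```python
def sublist(H, keep, b):
    """Compact the b-bit fields of the packed list H whose positions are
    set in the bitmask `keep`, dropping the others -- the semantics of
    <SubList>.  The loop is for exposition only; the word-RAM operation
    achieves the same effect in constant time."""
    out, pos, i = 0, 0, 0
    while keep >> i:
        if (keep >> i) & 1:
            field = (H >> (i * b)) & ((1 << b) - 1)
            out |= field << (pos * b)   # move the kept field down to pos
            pos += 1
        i += 1
    return out
```

The complementary \<Select> step merely computes the `keep` mask from the top masks $M_e$ and~$M_p$.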
\thmn{Verification of MST on the RAM}\id{ramverify}%
There is a~RAM algorithm which, for every weighted graph~$G$ and its spanning tree~$T$,
determines whether~$T$ is minimum and finds all $T$-light edges in~$G$ in time $\O(m)$.

\proof
Implement Koml\'os's algorithm from Theorem \ref{verify} with the data
structures developed in this section.
According to Lemma \ref{verfh}, it runs in time $\sum_e \O(\log\vert T_e\vert + q_e)
= \O(\sum_e \log\vert T_e\vert) + \O(\sum_e q_e)$. The second sum is $\O(m)$,
as there are $\O(1)$ query paths per edge; the first sum is $\O(\#\hbox{comparisons})$,
which is $\O(m)$ by Theorem \ref{verify}.
\qed