+\defn
+
+\em{Vertex identifiers:} Since all vertices referred to by the procedure
+lie on the path from root to the current vertex~$u$, we modify the algorithm
+to keep a~stack of these vertices in an~array and refer to each vertex by its
+index in this array, i.e., by its depth. We will call these identifiers \df{vertex
+labels} and we note that each label requires only $\ell=\lceil \log\log n\rceil$
+bits. As every tree edge is uniquely identified by its bottom vertex, we can
+use the same encoding for \df{edge labels.}
+
+\em{Slots:} As we will need several operations which are not computable
+in constant time on the RAM, we precompute tables for these operations
+as we did for the Q-Heaps (cf.~Lemma \ref{qhprecomp}). A~table for word-sized
+arguments would take too much time to precompute, so we will generally store
+our data structures in \df{slots} of $s=1/3\cdot\lceil\log n\rceil$ bits each.
+We will show soon that it is possible to precompute a~table of any reasonable
+function whose arguments fit in two slots.
+
+\em{Top masks:} The array~$T_e$ will be represented by bit masks. For each
+of the possible tops~$t$ (i.e., the ancestors of the current vertex), we store
+a~single bit telling whether $t\in T_e$. Each bit mask fits in $\lceil\log n\rceil$
+bits and therefore in a~single machine word. If needed, it can be split into three slots.
+
+\em{Small and big lists:} The heaviest edge found so far for each top is stored
+by the algorithm in the array~$H_e$. Instead of keeping the real array,
+we store the labels of these edges in a~list encoded in a~bit string.
+Depending on the size of the list, we use one of two possible encodings:
+\df{Small lists} are stored in a~vector which fits in a~single slot, with
+the unused entries filled by a~special constant, so that we can infer the
+length of the list.
+
+If the data do not fit in a~small list, we use a~\df{big list} instead, which
+is stored in $\O(\log\log n)$ words, each of them containing a~slot-sized
+vector. Unlike the small lists, we use the big lists as arrays. If a~top~$t$ of
+depth~$d$ is active, we keep the corresponding entry of~$H_e$ in the $d$-th
+entry of the big list. Otherwise, we keep that entry unused.
+
+We will want to perform all operations on small lists in constant time,
+but we can afford spending time $\O(\log\log n)$ on every big list. This
+is true because whenever we need a~big list, $\vert T_e\vert = \Omega(\log n/\log\log n)$,
+so we need $\log\vert T_e\vert = \Omega(\log\log n)$ comparisons anyway.
+
+\em{Pointers:} When we need to construct a~small list containing a~sub-list
+of a~big list, we do not have enough time to see the whole big list. To solve
+this problem, we will introduce \df{pointers} as another kind of edge identifiers.
+A~pointer is an~index into the nearest big list on the path from the small
+list containing the pointer to the root. As each big list has at most $\lceil\log n\rceil$
+entries, the pointer fits in~$\ell$ bits, but we need one extra bit to distinguish
+between normal labels and pointers.