Recently, Chazelle \cite{chazelle:ackermann} and Pettie \cite{pettie:ackermann}
have presented algorithms for the MST with worst-case time complexity
-$\O(m\timesalpha(m,n))$. We will devote this chapter to their results
+$\O(m\timesalpha(m,n))$ on the Pointer machine. We will devote this chapter to their results
and especially to another algorithm by Pettie and Ramachandran \cite{pettie:optimal},
which is provably optimal.
We will show that if we charge $\O(r)$ time against every element inserted, it is enough
to cover the cost of all other operations.
-\FIXME{Pointer machine and yardsticks}
+All heap operations are implemented by pointer manipulation alone, so it will be easy to
+derive the time bound in the Pointer machine model. There is, however, one notable exception:
+the procedures often refer to the ranks, which are integers on the order of $\log n$, so they
+do not fit in a~single memory cell. For the time being, we will assume that the ranks can be
+manipulated in constant time, postponing the proof to Lemma \ref{shyards}.
We take a~look at the melds first.
\>The total cost of all steps in the upper part is therefore $\O(n)$.
\qed
-It remains to examine the rest of the \<DeleteMin> operation.
+We now proceed to examine the \<DeleteMin> operation.
\lemma\id{shdelmin}%
Every \<DeleteMin> takes $\O(1)$ time amortized.
this was the only place where we needed the invariant.)
\qed
+It remains to take care of the manipulation with ranks:
+
+\lemma\id{shyards}%
+Every manipulation with ranks performed by the soft heap operations can be
+implemented on the Pointer machine in constant amortized time.
+
+\proof
+We create a~``yardstick'' --- a~doubly linked list whose elements represent the possible
+values of a~rank. Every vertex of a~queue will store its rank as a~pointer to
+the corresponding ``tick'' of the yardstick. We will extend the list as necessary.
+
+Comparison of two ranks for equality is then trivial, as is incrementing or decrementing
+the rank by~1. Testing whether a~rank is odd can be handled by storing an~odd/even
+flag in every tick. This covers all uses of ranks except for the comparisons for inequality
+when melding. In step~1 of \<Meld>, we just mark the ticks of the two ranks and walk
+the yardstick from the beginning until we come across a~mark. Thus we compare the ranks
+in time proportional to the smaller of them, which is the real cost of the meld anyway.
+The comparisons in steps 5 and~6 are trickier, but since the ranks of the elements put
+to~$P$ are strictly increasing, we can start walking the list at the rank of the previous
+element in~$P$. The cost of one comparison is then proportional to the difference between
+the current and the previous rank, so these differences telescope, summing again to the
+real cost of the meld.
+\qed
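+
+To make the yardstick less abstract, the following fragment sketches one possible rendition
+in~C. It is only an~illustration, not part of the formal development: all names, the global
+head/tail pointers and the small demonstration at the end are our own choices, and a~real
+soft heap would keep the ticks inside its own structures. A~rank is represented by a~pointer
+to a~tick, testing two ranks for equality is a~plain pointer comparison, and the comparison
+for inequality performs the marking walk from step~1 of \<Meld>.
+
+#include <stdio.h>
+#include <stdlib.h>
+
+/* One "tick" of the yardstick: it stands for one possible rank value. */
+struct tick {
+    struct tick *prev, *next;
+    int odd;        /* parity flag: answers "is this rank odd?" in constant time */
+    int mark;       /* scratch mark used when comparing two ranks */
+};
+
+static struct tick *yard_head, *yard_tail;  /* rank 0 and the highest rank so far */
+
+/* Extend the yardstick by one tick, i.e., make one more rank value available. */
+static struct tick *yard_extend(void)
+{
+    struct tick *t = malloc(sizeof(*t));
+    t->next = NULL;
+    t->mark = 0;
+    if (yard_tail == NULL) {            /* the very first tick represents rank 0 */
+        t->prev = NULL;
+        t->odd = 0;
+        yard_head = yard_tail = t;
+    } else {
+        t->prev = yard_tail;
+        t->odd = !yard_tail->odd;
+        yard_tail->next = t;
+        yard_tail = t;
+    }
+    return t;
+}
+
+/* Rank plus one: follow the next pointer, extending the yardstick if needed. */
+static struct tick *rank_inc(struct tick *r)
+{
+    if (r->next == NULL)
+        yard_extend();
+    return r->next;
+}
+
+/* Return the smaller of two ranks by marking both ticks and walking the
+ * yardstick from its beginning -- time proportional to the smaller rank. */
+static struct tick *rank_min(struct tick *r1, struct tick *r2)
+{
+    struct tick *t;
+    r1->mark = r2->mark = 1;
+    for (t = yard_head; !t->mark; t = t->next)
+        ;
+    r1->mark = r2->mark = 0;
+    return t;
+}
+
+int main(void)
+{
+    struct tick *r0 = yard_extend();                     /* rank 0 */
+    struct tick *r3 = rank_inc(rank_inc(rank_inc(r0)));  /* rank 3 */
+    struct tick *r1 = rank_inc(r0);                      /* rank 1, already present */
+    printf("ranks equal: %d\n", r1 == rank_inc(r0));     /* equality = pointer test */
+    printf("smaller rank is %s\n", rank_min(r3, r1) == r1 ? "1" : "3");
+    printf("rank 3 is %s\n", r3->odd ? "odd" : "even");
+    return 0;
+}
+
+The same walk, started at the tick of the previous element of~$P$ instead of at the head
+of the list, yields the telescoping comparisons of steps 5 and~6.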
+
Now we can put the bits together and laurel our effort with the following theorem:
\thmn{Performance of soft heaps, Chazelle \cite{chazelle:softheap}}
\proof
We set the parameter~$r$ to~$2+2\lceil\log (1/\varepsilon)\rceil$. The rest follows
from the analysis above. By Lemma \ref{shcorrlemma}, there are always at most $n/2^{r-2}
-\le \varepsilon n$ corrupted items in the heap. By Lemma \ref{shmeld}--\ref{shdelmin},
+\le \varepsilon n$ corrupted items in the heap. By Lemmata \ref{shmeld}--\ref{shyards},
the time spent on all operations in the sequence can be paid for by charging $\O(r)$ time
against each \<Insert>, which yields the time bound.
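+For completeness, the arithmetic behind this choice of~$r$ is the routine check
+$$
+2^{r-2} = 2^{2\lceil\log(1/\varepsilon)\rceil} \ge 2^{\log(1/\varepsilon)} = 1/\varepsilon,
+\qquad\hbox{and hence}\qquad
+n/2^{r-2} \le \varepsilon n.
+$$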
\qed
+\FIXME{Christen lemmata.}
+
\FIXME{Remark on optimality.}
-\FIXME{Example of use: pivots.}