A while ago, I wrote a series
of posts that claimed to talk about finger trees. Unfortunately, I really
botched it. I’d read a bunch of data structure papers, and managed to get
myself thoroughly scrambled. What I wrote about was distantly related
to finger trees, and it was useful for understanding how finger trees work –
but it was not, in any way, shape, or form, actually a description of
finger trees. Since then, I’ve been meaning to write a proper post explaining
finger trees – but with the work on my book, and with chaos at work, I just
haven’t had the time. This time, in order to do my damnedest to make sure that
I don’t screw it up again, I’m basically going to describe finger trees over a
couple of posts by walking through the best finger-tree paper that I could
find. The paper is “Finger Trees: a simple general-purpose data structure”
(http://www.soi.city.ac.uk/~ross/papers/FingerTree.html), by Ralf Hinze and Ross Paterson.
This might be the paper that introduced the structure, but I’m not sure.
The point of finger trees is pretty simple. It’s very similar to the point
of zippers. Programming in functional languages is terrific. As I’ve described
before, there are a lot of advantages to writing functional code. But there are also
a lot of places where a naive implementation of an algorithm using a functional
data structure is dreadfully inefficient. Functional code may be prettier, more maintainable,
and more reusable – but imperative code is frequently much more efficient. When you’re
doing an operation that, conceptually, modifies a piece of a complex data structure,
then functional code can really suck. Finger trees give you a way around that: for many common updatable data structures, you can build finger-tree versions that are very close to, or fully as good as, their imperative, in-place-updating counterparts.
For example, imagine some numerical code that operates on an array. If you
use a naive array, and you update one element of the array, then in a
functional program, you need to copy the entire array to create a new
array with the modified element. Of course, in a real program, you probably
wouldn’t do anything quite that naive – you can usually find some way of
representing things to reduce the cost (or you can use a compiler that can do
copy-to-update transformations when it’s possible). But you are, at least
conceptually, stuck making copies of something in order to do a
functional update. So what you want to do is to minimize that cost.
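To make that cost concrete, here’s what the naive update looks like on a plain list-backed sequence. This is just an illustrative sketch – `updateAt` is my own name, not a standard function:

```haskell
-- Naive functional update: rebuilds the spine of the list up to
-- position i, so a single update costs O(i) time and allocation.
updateAt :: Int -> a -> [a] -> [a]
updateAt i x xs = take i xs ++ [x] ++ drop (i + 1) xs
```

Do n of these across an n-element sequence and you’ve paid quadratic time for what an imperative array does in linear time; that’s exactly the gap that structures like finger trees aim to close.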
The simplest example of that is something like a gap buffer. To modify something at the
gap, you only need to copy a couple of pointers, and you can re-use the vast majority
of your original structure. Similarly, in a zipper, you can modify something at the focal
point of the zipper very quickly, and you can move the focus reasonably quickly. What
a finger tree does is generalize the concept of zipper, so that you end up with
a structure that’s sort of like a tree with multiple zippers. It’s almost like taking
a zipper – which splits a tree into (left, path-to-current, right) – and then taking
the left and right subtrees, and turning them into zippers; and taking their left and right
subtrees, and turning them into zippers. You end up with a structure where you’ve
got multiple pointers into the depths of the tree – and you can modify the tree just by modifying
the zipper-like structure closest to the value you want to update.
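For reference, here’s a minimal list zipper – with my own names for the type and operations – showing the O(1)-at-the-focus behavior that finger trees generalize:

```haskell
-- A list zipper: elements left of the focus (stored reversed, nearest
-- first), the focused element, and the elements to the right.
data Zipper a = Zipper [a] a [a] deriving (Eq, Show)

-- Editing at the focus allocates O(1) new structure; both side lists
-- are shared intact with the old version.
modifyFocus :: (a -> a) -> Zipper a -> Zipper a
modifyFocus f (Zipper ls x rs) = Zipper ls (f x) rs

-- Moving the focus one step right is also O(1).
moveRight :: Zipper a -> Zipper a
moveRight (Zipper ls x (r:rs)) = Zipper (x:ls) r rs
moveRight z                    = z
```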
To explain a finger tree in detail is pretty complicated. The structure itself isn’t
that bad – but to understand why it has the performance characteristics that
it does is rather tricky. It relies on the fact that certain operations are done lazily -
and that the lazy evaluation guarantees that no more than a certain amount of time will
need to be spent on any structure change.
But we’ll come back to that later. Let’s start by looking at a simple structure,
which we’ll gradually turn into a finger tree. To start off with, we want to
represent a sequence. We can do that with a 2-3 tree, like the following Haskell code:

data Tree t = Zero t | Succ (Tree (Node t))
data Node s = Node2 s s | Node3 s s s
Just this initial code is a bit tricky, so I’ll explain it in a bit of detail.
It’s creating a 2-3 tree that has a semi-numerical structure to it. Each level
of the tree is basically a distinct type. In this tree, a Node is an internal,
non-leaf node of the tree. Only leaves contain values; internal nodes just provide structure.
The trick to understanding this is in the definition of
Tree a. What this does is effectively create
a distinct type for each tree level. A leaf of a tree of ts is a value of
type t, wrapped by the
Zero constructor. So a leaf node of a tree of
integers could be
Zero 23. A level-one tree of
ts is a value of type
Tree (Node t) wrapped by the
Succ constructor — so,
for example, it could be
Succ( Zero (Node2 23 34))
. A level two
node is a value of type
Node (Node t), doubly wrapped – like
Succ (Succ (Zero (Node2 (Node2 23 32) (Node2 43 45)))).
Just this much is interesting: the type itself is expressing one of the
structural constraints of the tree! 2-3 trees are always balanced – the length of
the path from root to leaf is always the same for all nodes. This type declaration
means that it’s a type error to have a level-1 tree be a child of anything but a level-2 node.
And we haven’t even gotten to the finger tree yet.
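Here’s that idea as runnable code. The helper `toListT` and the example value are mine, not from the paper; note that flattening a nested type like this needs polymorphic recursion, which the explicit type signature enables:

```haskell
data Tree t = Zero t | Succ (Tree (Node t))
data Node s = Node2 s s | Node3 s s s

-- A level-2 tree: two Succ wrappers, so the payload under Zero
-- must have type Node (Node Int) -- the balance is in the type.
level2 :: Tree Int
level2 = Succ (Succ (Zero (Node2 (Node2 23 32) (Node2 43 45))))

-- Flatten a tree back to its leaves. In the Succ case the recursive
-- call is at type Tree (Node t): polymorphic recursion.
toListT :: Tree t -> [t]
toListT (Zero x) = [x]
toListT (Succ t) = concatMap nodeList (toListT t)
  where nodeList (Node2 a b)   = [a, b]
        nodeList (Node3 a b c) = [a, b, c]
```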
To get to a finger tree, we need to know what a finger is. A
finger is a structure that provides a reference to a node deep in a tree. In a
zipper, the zipper-path is a finger: it’s a trail down the tree that gives you
fast access to a particular node of the tree, and which you can then use for
fast updates or traversals of the nodes around the finger. A finger tree is a
tree structure of fingers superimposed on a conventional balanced tree. In
general, the internal nodes of the tree are annotated with monoids (a la my
botched original article) in order to help you find the part of the tree that you’re looking for.
Before we get to anything like a type declaration, let’s look at a
picture. Here’s a 2-3 tree of integers.
To make it a finger tree, what you do, in theory, is reach down to the
leftmost and rightmost internal nodes of the tree – in our case, the
parents of (1,2) and (8, 9, 10). Then you pick up the tree by those two
nodes, and let the rest dangle down. Like this:
If you look at this, you’ve got a finger for the node (1,2), and a finger
for the node (8,9,10). In between them, you’ve got a bunch of stuff dangling.
But if you look at the dangling stuff: it’s a finger tree. It’s got a finger
for its leftmost internal node, and a finger for its rightmost internal node,
and in between, it’s got some stuff dangling. That stuff is, in this
case, an empty finger tree.
That’s basically the structure of a finger tree: you’ve got fingers for
the head and tail parts of the tree; and you’ve got a finger tree in between them.
When you make the tree a bit bigger, you can see some more of the basic
properties of the tree. Take this tree:
And dangle it:
The finger on the outermost tree contains only leaf
nodes. The finger on the next inner tree contains level one nodes. And the
next one contains level 2 nodes. And so on. You’ve got fingers for the first
and last nodes at each depth level of the tree.
We’ll see a bit more about that as we look at how to build some structures
with a finger tree. We’ll start by looking at a generic finger tree. It’s
roughly what you would expect from the description above; a finger tree is
either a single node; an empty node (like the root in the dangled tree
above); or a left finger, a right finger, and a finger tree in between. It’s built
on the 2-3 tree, so we’ll reuse the
Node type from that.
data FingerTree a = Empty
                   | Single a
                   | Deep (Digit a) (FingerTree (Node a)) (Digit a)

type Digit a = [a]
The Digit type is new. I’d prefer to call it a finger, but
Hinze and Paterson use Digit, so I’ll stick with their name. A
Digit is basically a buffer
that contains the members of the node of the 2-3 tree that was replaced by the digit.
The reason that we make that switch – from a node constructor with a fixed number
of children to a list – is because we’re going to cheat. You’ll see the details later.
An important thing to notice here: this uses the same node type trick as
the two-three tree: each time you shift to a deeper level of finger tree,
the type of the arguments to
FingerTree enforces the
idea that the fingers on each level point to higher-order nodes. At the top of
the finger tree, we’ve got a digit of
as; at the next level, we’ve got digits of
Node as; climbing down the central spine, the third level
has digits of
Node (Node a)s; in general, at level N+1,
the digits contain values wrapped in N nested applications of Node.
We’ve already got enough to start seeing a bit of what makes the tree interesting. Suppose
we want to find the value at position x. Even though the number of nodes per branch is
variable, we still know that the value at position x will be either in, or very close to,
the nodes at level lg(x).
To actually write the implementation, we need a bunch of general reduction operations. In
Haskell, the whole idea of reduction is deeply ingrained in the language – so there’s
a type-class for defining reduction. So what we do is just provide the instance
definition of reductions that we’ll need for the tree.
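The class itself – roughly as it appears in the paper – along with the list instance and a generic toList helper that we’ll want later. reducer lifts a binary operator into a right-to-left fold over a structure; reducel does the same for a left-to-right fold:

```haskell
-- reducer lifts a binary operator into a right-to-left fold over f;
-- reducel does the same for a left-to-right fold.
class Reduce f where
  reducer :: (a -> b -> b) -> (f a -> b -> b)
  reducel :: (b -> a -> b) -> (b -> f a -> b)

-- Ordinary lists reduce with the ordinary folds.
instance Reduce [] where
  reducer f xs z = foldr f z xs
  reducel f z xs = foldl f z xs

-- Generic "flatten to a list": fold with cons.
toList :: (Reduce f) => f a -> [a]
toList s = reducer (:) s []
```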
We’ll start with nodes:
instance Reduce Node where
  reducer (-<) (Node2 a b) z = a -< (b -< z)
  reducer (-<) (Node3 a b c) z = a -< (b -< (c -< z))
  reducel (>-) z (Node2 b a) = (z >- b) >- a
  reducel (>-) z (Node3 c b a) = ((z >- c) >- b) >- a
Reduction on trees is kind of strange until you wrap your head around it.
It uses a technique called lifting. Basically, reduction on
a tree is based on reduction on a node – the operation of reducing the tree is,
in fact, doing a reduce on the tree using reduce on a node as the reduction operation:
instance Reduce FingerTree where
  reducer (-<) Empty zero = zero
  reducer (-<) (Single x) zero = x -< zero
  reducer (-<) (Deep left mid right) zero =
      left -<' (mid -<'' (right -<' zero))
    where (-<')  = reducer (-<)
          (-<'') = reducer (reducer (-<))
  reducel (>-) zero Empty = zero
  reducel (>-) zero (Single x) = zero >- x
  reducel (>-) zero (Deep left mid right) =
      ((zero >-' left) >-'' mid) >-' right
    where (>-')  = reducel (>-)
          (>-'') = reducel (reducel (>-))
It’s a bit subtle – but it does really make sense. A finger tree is a
collection of nodes on the left, a tree of nodes in the middle, and a
collection of nodes on the right. So to reduce a finger tree, you need to
reduce the left-hand nodes to a list of values, and then reduce those values -
in other words, do a reduction of the list of nodes using the reduction of
nodes as the operator. And so on. Trace it through, and it makes a lot of
sense. (But I would probably never have come up with it myself.)
Now, let’s get a look at an example of something you can do with a finger
tree that shows it off. We’re going to build a double-ended queue. Building a
single-ended queue is easy; it’s no less efficient in a functional language
than in an imperative one. But a double-ended queue is a problem. Traditional
functional implementations end up taking roughly O(lg n) time per operation. With a
finger tree, we can get amortized O(1) time – the same performance as the imperative
version. That’s remarkable.
So how does it work? Well, for starters, we need to munge the structure a
bit. Theoretically, our finger-tree is based on a 2-3 tree. But if we always
perfectly maintained the two-three tree structure, we wouldn’t be able to get
the performance we want. That’s why our definition of
Digit is a
list of values: sometimes we’ll temporarily have either one or four in a
digit. We’ll always eventually restore the proper 2-3 tree, but sometimes
we’ll defer it for a while.
Let’s consider the push operations. In a deque, there are two different push operations;
one to push at the front, and one at the back. In the H&P paper, they define these
using some pretty symbols; but it’s much easier for me to type text than symbols in
HTML, so I’m going to name them push_front and push_back:
push_front :: a -> FingerTree a -> FingerTree a
push_front a Empty = Single a
push_front a (Single b) = Deep [a] Empty [b]
push_front a (Deep [b, c, d, e] mid right) =
    Deep [a, b] (push_front (Node3 c d e) mid) right
push_front a (Deep left mid right) = Deep ([a] ++ left) mid right
So far, this is pretty simple:
- Push onto an empty queue-tree, and you get a tree with
a single value.
- Push onto a tree with one value, and you get a finger tree where the left finger is the
first value, the right finger is the second value, and in between them is an empty tree.
- Push into a non-empty finger tree with a four-value left digit, and you fix the
two-three structure; you take one value from the four in the left digit, and join it
with the new entry; take the remaining three, and turn them into a node; then
insert the new node into the finger tree in the center.
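To watch the overflow case fire, here’s a standalone sketch – definitions repeated so it runs on its own, digits as plain lists per the declaration above, and Eq/Show derived so we can inspect the result – tracing foldr push_front Empty [1..6]:

```haskell
data Node a = Node2 a a | Node3 a a a deriving (Eq, Show)
data FingerTree a = Empty
                  | Single a
                  | Deep [a] (FingerTree (Node a)) [a]
                  deriving (Eq, Show)

push_front :: a -> FingerTree a -> FingerTree a
push_front a Empty = Single a
push_front a (Single b) = Deep [a] Empty [b]
push_front a (Deep [b, c, d, e] mid right) =
    Deep [a, b] (push_front (Node3 c d e) mid) right
push_front a (Deep left mid right) = Deep (a : left) mid right

-- Pushing 6, then 5, ... down to 1:
--   6        => Single 6
--   5        => Deep [5] Empty [6]
--   4, 3, 2  => the left digit just grows: Deep [2,3,4,5] Empty [6]
--   1        => the digit would hit five, so three of its values
--               become a Node3 pushed into the middle tree:
--               Deep [1,2] (Single (Node3 3 4 5)) [6]
demo :: FingerTree Int
demo = foldr push_front Empty [1 .. 6]
```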
To do a push at the end, you just take the mirror image of the code above:
push_back :: FingerTree a -> a -> FingerTree a
push_back Empty a = Single a
push_back (Single b) a = Deep [b] Empty [a]
push_back (Deep left mid [e, d, c, b]) a =
    Deep left (push_back mid (Node3 e d c)) [b, a]
push_back (Deep left mid right) a = Deep left mid (right ++ [a])
This is really beautiful. We’re working with this remarkably clever
structure – and yet an insert operation isn’t any more complicated than the insert
on a simpler structure.
So we can push things into trees. We need to be able to create deques from
lists of values; and we need to be able to remove values from the trees. To build those
operations, we need a bit of infrastructure. In particular, we need lifted versions of
the push operations.
push_front' :: (Reduce f) => f a -> FingerTree a -> FingerTree a
push_front' = reducer push_front

push_back' :: (Reduce f) => FingerTree a -> f a -> FingerTree a
push_back' = reducel push_back
With these lifted operators, we can create a finger tree from any reducible collection.
Basically, you create the finger tree by just lifting the push operator over the collection:

toTree :: (Reduce f) => f a -> FingerTree a
toTree s = push_front' s Empty
Now we're getting to the point where things start to get tricky. This
isn't the way that I would have written it - but they provide a way of
defining a view of the finger tree as a list. That lets
you take the finger tree, and call a function which can lazily
convert your tree into a traditional cons-list:
data LeftView s a = LeftNil | LeftCons a (s a)

leftView :: FingerTree a -> LeftView FingerTree a
leftView Empty = LeftNil
leftView (Single x) = LeftCons x Empty
leftView (Deep left mid right) =
    LeftCons (head left) (leftDeep (tail left) mid right)
Now, you'll notice that in the last, recursive case of
leftView, we call
leftDeep. Normally, in code like this, that would be the
Deep constructor of the finger tree. But sometimes, there will be only one element in the left
digit; so when we use
leftView, it will invoke the constructor with the tail of
the left digit, which is empty. That's not a valid left digit for a
Deep node of the finger tree. But instead of making the cases of
leftView more complicated,
we provide a pseudo-constructor:
leftDeep :: [a] -> FingerTree (Node a) -> Digit a -> FingerTree a
leftDeep [] mid right = case leftView mid of
    LeftNil -> toTree right
    LeftCons a mid' -> Deep (toList a) mid' right
leftDeep left mid right = Deep left mid right
You can create a
rightView and a rightDeep by mirroring
leftView and leftDeep, exactly the way we mirrored push_front to get push_back.
These view functions are the heart of how we do pops from the deque. They do all of the work:
- To check if the deque is empty, see whether its
leftView is LeftNil.
- To retrieve the front element, take its
leftView and extract the first argument of its LeftCons.
- To get the deque with the first element removed, take its
leftView, and extract
the second argument of its LeftCons.
(And, of course, mirror all of the above for the right-end of the deque.)
To make things simpler, you can provide functions for all of this:
left_peek :: FingerTree a -> Maybe a
left_peek tree = case leftView tree of
    LeftNil -> Nothing
    LeftCons h t -> Just h

left_pop :: FingerTree a -> FingerTree a
left_pop tree = case leftView tree of
    LeftNil -> Empty
    LeftCons h t -> t
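Putting the whole left end of the deque together: a self-contained sketch you can load and poke at. The definitions are repeated from above so it runs on its own; the generic toList is specialized to a nodeList helper, toTree is specialized to lists, and no laziness annotations are shown:

```haskell
data Node a = Node2 a a | Node3 a a a deriving Show
data FingerTree a = Empty
                  | Single a
                  | Deep [a] (FingerTree (Node a)) [a]
                  deriving Show

data LeftView s a = LeftNil | LeftCons a (s a)

push_front :: a -> FingerTree a -> FingerTree a
push_front a Empty = Single a
push_front a (Single b) = Deep [a] Empty [b]
push_front a (Deep [b, c, d, e] mid right) =
    Deep [a, b] (push_front (Node3 c d e) mid) right
push_front a (Deep left mid right) = Deep (a : left) mid right

-- toTree, specialized to lists for this sketch.
toTree :: [a] -> FingerTree a
toTree = foldr push_front Empty

-- toList, specialized to nodes.
nodeList :: Node a -> [a]
nodeList (Node2 a b)   = [a, b]
nodeList (Node3 a b c) = [a, b, c]

leftView :: FingerTree a -> LeftView FingerTree a
leftView Empty = LeftNil
leftView (Single x) = LeftCons x Empty
leftView (Deep left mid right) =
    LeftCons (head left) (leftDeep (tail left) mid right)

-- Pseudo-constructor: rebuilds a valid tree when the left digit
-- empties, by pulling the next Node out of the middle tree.
leftDeep :: [a] -> FingerTree (Node a) -> [a] -> FingerTree a
leftDeep [] mid right = case leftView mid of
    LeftNil       -> toTree right
    LeftCons a m' -> Deep (nodeList a) m' right
leftDeep left mid right = Deep left mid right

left_peek :: FingerTree a -> Maybe a
left_peek t = case leftView t of
    LeftNil      -> Nothing
    LeftCons h _ -> Just h

left_pop :: FingerTree a -> FingerTree a
left_pop t = case leftView t of
    LeftNil         -> Empty
    LeftCons _ rest -> rest

-- Draining from the left recovers the original order.
drain :: FingerTree a -> [a]
drain t = case leftView t of
    LeftNil         -> []
    LeftCons h rest -> h : drain rest
```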
In terms of performance, there is a bit of subtlety to this. First and most importantly, you
need to remember laziness. When you call
leftView, nothing is
actually calculated until you extract and use the values. So although the code looks
like it constructs the full remainder deque whenever you peek at the head or tail
of the deque, that construction doesn't happen until you use it. And even then, it doesn't do the
whole thing: it does just enough to give you the values that you need.
To get the O(1) bound, you need to do an amortized analysis. I'm not going to go into depth
about it - check the paper if you're interested. But most of the time, there's a non-empty list in the
finger. In those cases, it's obviously O(1). Less frequently, you need to do a bit of
shuffling between the finger and the middle tree. Even less frequently, while doing
the shuffling on the first inner tree, you'll need to do some shuffling on the next
inner tree. And so on. Each level of shuffling becomes progressively less likely,
and the laziness spreads the cost of a cascade over the cheap operations around it.
That works out as an implication of the structure of the tree: the nodes in the fingers
of each inner tree get bigger, making it less likely that you'll need to push further
into the tree.
Anyway - this is getting long and complicated, so I'm going to end this post here. Next, we'll
look at more structures that you can build using finger trees as a foundation. Just from this much, the
finger tree is incredibly cool - but it gets better.