Faster… faster…

I’ve just finished reading Chris Okasaki’s Purely Functional Data Structures, or rather I’ve just finished re-reading chunks of it in an attempt to grasp the material. I must say it was a really good read. As the back cover states, most books on data structures assume an imperative language like C or C++. Some of those data structures do not translate well into languages like ML and Haskell. This book describes data structures from the point of view of functional languages, and presents design techniques so that programmers can develop their own functional data structures.

The first section of the book shows how some common data structures can be implemented in ML. The examples include leftist heaps, binomial heaps and red-black trees. For the latter data type, the re-balancing algorithm looks very tidy when expressed using pattern matching, compared with the more imperative implementations I have seen before.

 
The second part of the book looks at the relationship between lazy evaluation and amortization. This part starts with a quick introduction to streams (lazy lists) supporting a lazy cons operation with evaluation being forced by pattern matching. Later chapters use lazily evaluated data types to good effect.
Chapter five introduces the idea of using amortized cost instead of worst-case cost. Often when we use a data structure we are more worried about the cost of calling a particular operation many times than about the worst-case cost of a single call. For some common data structures, the worst-case bounds are far worse than the amortized bounds. Two techniques are introduced for deriving amortized bounds – the banker’s method and the physicist’s method. Two data structures are then analysed – the standard functional queue, and splay trees, which get slightly re-balanced on each operation.
 
We’ll look at the queue implementation as it develops during the course of the book. Here’s the simple version, implemented in F#, where we use two lists with the invariant that the first is empty only if the second is empty too.
 
let empty = ([],[])
 
let isEmpty (f,r) = f = []
let checkf queue =
  match queue with
  | ([],r) -> (List.rev r, [])   // front empty: move the reversed rear to the front
  | _ -> queue;;
 
let snoc el queue =
  match queue with
  | (f,r) ->  checkf (f, el::r)
 
let head queue =
  match queue with
  | ([], _) -> failwith "Empty"
  | (x::f,r) -> x
 
let tail queue =
  match queue with
  | ([], _) -> failwith "Empty"
  | (x::f,r) -> checkf (f,r)
 
Standard disclaimer: we should really hide the implementation details behind an abstract type. By not hiding them, we get a better view of how the data structure evolves.
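For completeness, one way to do that hiding in F# (a sketch of mine, not from the post) is a single-case union with a private constructor, so the two lists are invisible outside the module:

```fsharp
module Queue =
    // The constructor Q is private: outside this module the two lists
    // cannot be inspected or forged, only the operations below used.
    type Queue<'a> = private Q of 'a list * 'a list

    let empty<'a> : Queue<'a> = Q ([], [])

    // restore the invariant: if the front is empty, reverse the rear into it
    let private checkf (f, r) =
        match f with
        | [] -> (List.rev r, [])
        | _ -> (f, r)

    let snoc el (Q (f, r)) = Q (checkf (f, el :: r))

    let head (Q (f, _)) =
        match f with
        | [] -> failwith "Empty"
        | x :: _ -> x

    let tail (Q (f, r)) =
        match f with
        | [] -> failwith "Empty"
        | _ :: f' -> Q (checkf (f', r))
```

With this in place, client code can only go through `Queue.snoc`, `Queue.head` and `Queue.tail`, but the REPL no longer shows the internal state, which is why the post leaves the representation exposed.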
 
We can exercise the implementation:
 
> snoc 1 empty;;
val it : int list * int list = ([1], [])
> snoc 2 it;;
val it : int list * int list = ([1], [2])
> snoc 3 it;;
val it : int list * int list = ([1], [3; 2])
> snoc 4 it;;
val it : int list * int list = ([1], [4; 3; 2])
> tail it;;
val it : int list * int list = ([2; 3; 4], [])
> snoc 5 it;;
val it : int list * int list = ([2; 3; 4], [5])
> head it;;
val it : int = 2
 
snoc and head run in O(1) worst-case time and tail is O(n) worst-case. However, using the methods from the book, we can show that tail runs in O(1) amortized time: the reverse of the rear list does work whose cost can be spread over subsequent calls to tail. If the reverse does a lot of work, it also makes a long run of subsequent tail calls cheap, and this balances out over a long sequence of calls.
 
Chapter five ends with the observation that this accounting only works if the data structure is used ephemerally, i.e. in a single-threaded fashion. If we take the queue in state ([1], [4; 3; 2]) and repeatedly call tail on this particular instance, then n such calls take O(n^2) time in total. The trouble is that the expensive reverse operation is no longer balanced by a subsequent series of cheap calls to tail: each call pays for the reverse all over again.
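To make both costs concrete, here is an instrumented copy of the simple queue (my own sketch, not from the book): revSteps counts how many elements List.rev moves. Used single-threadedly, n snocs followed by n tails do O(n) reverse work in total, i.e. O(1) amortized; reusing one pre-reverse instance persistently repeats the O(n) reverse on every call.

```fsharp
// Instrumented copy of the two-list queue: revSteps counts the elements
// moved by List.rev across all operations.
let mutable revSteps = 0

let checkf (f, r) =
    match f with
    | [] ->
        revSteps <- revSteps + List.length r   // the work being amortized
        (List.rev r, [])
    | _ -> (f, r)

let snoc el (f, r) = checkf (f, el :: r)

let tail (f, r) =
    match f with
    | [] -> failwith "Empty"
    | _ :: f' -> checkf (f', r)

let n = 100

// Ephemeral use: n snocs then n tails. Each element is reversed at most
// once, so the total reverse work stays O(n): O(1) amortized per operation.
let full = List.fold (fun q i -> snoc i q) ([], []) [1 .. n]
List.fold (fun q _ -> tail q) full [1 .. n] |> ignore
let ephemeralSteps = revSteps

// Persistent misuse: hold on to one instance whose next tail must reverse,
// and call tail on that same instance n times. The reverse runs every time,
// so the total is O(n^2).
revSteps <- 0
let stuck = ([ 0 ], [ n .. -1 .. 1 ])
for _ in 1 .. n do
    tail stuck |> ignore
let persistentSteps = revSteps

printfn "ephemeral: %d steps, persistent: %d steps" ephemeralSteps persistentSteps
```

Here ephemeralSteps comes out as n while persistentSteps comes out as n * n, which is the gap the next chapter sets out to close.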
 
Chapter six introduces the "fix" for this: laziness. For our example that means we do not want to do the work associated with the reverse until we know that an element from it is going to be needed, and once we have done the reverse we can share the result across a number of persistent (non-ephemeral) calls. The banker’s method and the physicist’s method need to be extended to account for this delayed work.
 
We can reimplement the queue using a suspension for the delayed part of the computation. A suspension introduces tightly controlled state into the functional language: multiple computations can refer to a value that is computed on demand, calculated only once, and shared thereafter.
 
type SuspensionCell<'a> = Evaluated of 'a | Unevaluated of (unit -> 'a);;
 
type Suspension<'a> = Empty | Cell of SuspensionCell<'a list> ref;;
 
let delay x = Cell(ref(Unevaluated x));;
 
let force susp =
  match susp with
  | Empty -> []
  | Cell value ->
    match !value with
    | Evaluated x -> x
    | Unevaluated x ->
       let result = x()
       value := Evaluated result
       result;;
 
This gives us call-by-need semantics… well, almost. As we don’t do any locking around the evaluation and reassignment of the memoised value into the reference cell, in principle the computation could be executed multiple times.
We can try it and see that, single-threadedly at least, the computation happens only once.
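The post doesn’t show how xxx was defined. A plausible definition, consistent with the session below (the types are restated so the snippet stands alone, and the definition itself is my guess, not the original), is:

```fsharp
// Same Suspension type and delay as above, repeated so this stands alone.
type SuspensionCell<'a> = Evaluated of 'a | Unevaluated of (unit -> 'a)
type Suspension<'a> = Empty | Cell of SuspensionCell<'a list> ref

let delay x = Cell (ref (Unevaluated x))

// Hypothetical definition of xxx: a suspended computation producing [1; 2; 3]
let xxx = delay (fun () -> [1; 2; 3])
```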
 
> xxx;;
val it : Suspension<int> = Cell {contents = Unevaluated <fun:xxx@19>;}
> force xxx;;
val it : int list = [1; 2; 3]
> xxx;;
val it : Suspension<int> = Cell {contents = Evaluated [1; 2; 3];}
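The race just mentioned could be closed with a lock. Here is a minimal sketch of mine (again restating the types): a single global gate is the simplest correct choice, though a lock object per cell would avoid serialising unrelated forces.

```fsharp
type SuspensionCell<'a> = Evaluated of 'a | Unevaluated of (unit -> 'a)
type Suspension<'a> = Empty | Cell of SuspensionCell<'a list> ref

// One global gate keeps the sketch short; per-cell locks would scale better.
let gate = obj ()

// force with the check-then-update made atomic, so the thunk runs at most
// once even if several threads race on the same cell
let forceSafe susp =
    match susp with
    | Empty -> []
    | Cell value ->
        lock gate (fun () ->
            match !value with
            | Evaluated x -> x
            | Unevaluated f ->
                let result = f ()
                value := Evaluated result
                result)
```

F#'s built-in lock function takes any reference-typed object and a thunk, so no extra library is needed.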
 
The queue can now be rewritten to use the lazy evaluation.
 
let empty () = ([], 0, Empty, 0, []);;
 
let isEmpty queue =
  let (_, lenf, _, _, _) = queue
  lenf = 0;;
 
let checkw queue =
  match queue with
  | ([], lenf, f, lenr, r) -> (force f, lenf, f, lenr, r)
  | _ -> queue
 
let check queue =
  let (w, lenf, f, lenr, r) = queue
  if lenr <= lenf
  then checkw queue
  else
    let f' = force f
    checkw (f', lenf+lenr, delay(fun () -> f' @ List.rev r), 0, []);;
 
let snoc el queue =
  let (w, lenf, f, lenr, r) = queue
  check (w, lenf, f, lenr+1, el::r);;
 
let head queue =
  let (w, lenf, f, lenr, r) = queue
  if w = []
  then failwith "Empty"
  else List.hd w;;
 
let tail queue =
  let (w, lenf, f, lenr, r) = queue
  if w = []
  then failwith "Empty"
  else check (List.tl w, lenf-1, delay(fun () -> List.tl (force f)), lenr, r);;
 
We can exercise it.
 
> let el2 = snoc 1 (empty());;
val el2 : int list * int * Suspension<int> * int * int list
> snoc 2 el2;;
val it : int list * int * Suspension<int> * int * int list
= ([1], 1, Cell {contents = Evaluated [1];}, 1, [2])
> snoc 3 it;;
val it : int list * int * Suspension<int> * int * int list
= ([1], 3, Cell {contents = Unevaluated <fun:check@35>;}, 0, [])
> snoc 4 it;;
val it : int list * int * Suspension<int> * int * int list
= ([1], 3, Cell {contents = Unevaluated <fun:check@35>;}, 1, [4])
> tail it;;
val it : int list * int * Suspension<int> * int * int list
= ([2; 3], 2, Cell {contents = Evaluated [2; 3];}, 1, [4])
> tail it;;
val it : int list * int * Suspension<int> * int * int list
= ([3], 1, Cell {contents = Unevaluated <fun:tail@51>;}, 1, [4])
> tail it;;
val it : int list * int * Suspension<int> * int * int list
= ([4], 1, Cell {contents = Evaluated [4];}, 0, [])
> tail it;;
val it : int list * int * Suspension<int> * int * int list
= ([], 0, Cell {contents = Evaluated [];}, 0, [])
 
By doing this we have extended the amortized bounds, which previously held only for ephemeral use, to the persistent case too.
The next chapter deals with scheduling. The idea is to get better worst-case behaviour by scheduling the suspended computations so that we evaluate a few of them on every operation, instead of occasionally forcing one large computation. We can do this by using streams. F# gives us a stream type in the form of LazyList.
 
let xxx =
  LazyList.delayed
     (fun () -> printfn "called" ; LazyList.consf 10 (fun _ -> printfn "called2" ; LazyList.empty()));;
 
When we evaluate the above, neither of the messages is printed. When we first call the hd function, "called" is printed.
 
> LazyList.hd xxx;;
called
val it : int = 10
> LazyList.hd xxx;;
val it : int = 10
 
When we call the tail function, the second part of the list is evaluated to print the value.
 
> LazyList.tl xxx;;
called2
val it : int LazyList = seq []
> LazyList.tl xxx;;
val it : int LazyList = seq []
 
We can rewrite our ongoing queue example using a stream for the front list, a standard list for the rear list, and a third stream for the computations being scheduled.

let empty() = (LazyList.empty(), [], LazyList.empty());;

 
let isEmpty queue =
  let (f,r,s) = queue
  not (LazyList.nonempty f);;
 
// rotate (f, r, a) incrementally computes f followed by reverse r followed by a,
// doing one step of the reverse per cons
let rec rotate (queue1, list1, queue2) =
  if LazyList.nonempty queue1
  then
    let delayed2 = fun () -> LazyList.cons (List.hd list1) queue2
    let delayed = fun () -> LazyList.cons (LazyList.hd queue1) (rotate (LazyList.tl queue1, List.tl list1, LazyList.delayed delayed2))
    LazyList.delayed delayed
  else LazyList.delayed (fun () -> LazyList.cons (List.hd list1) queue2);;
 
let exec (f,r,schedule) =
  if LazyList.nonempty schedule
  then (f,r,LazyList.tl schedule)
  else
    let f' = rotate (f,r,LazyList.empty())
    (f', [], f');;
 
let snoc el queue =
  let (f,r,s) = queue
  exec (f, el::r, s);;
 
let head (queue,r,s) =
  if LazyList.nonempty queue
  then LazyList.hd queue
  else failwith "Empty";;
 
let tail (queue,r,s) =
  if LazyList.nonempty queue
  then exec (LazyList.tl queue, r,s)
  else failwith "Empty";;
 
We could exercise it as follows. We need to be careful not to get LazyLists printed by the REPL (read-eval-print-loop) as this will force the computations.
 
> let q3 = snoc 5 (snoc 4 (tail (snoc 3 (snoc 2 (snoc 1 (empty()))))))
- let h1 = head q3
- let q4 = tail q3
- let h2 = head q4
- let q5 = tail q4
- let h3 = head q5
- let q6 = tail q5
- ;;
> (h1, h2, h3, q6);;
val it : int * int * int * (int LazyList * int list * int LazyList)
= (2, 3, 4, (seq [5], [], seq [5]))
 
This gives us a queue for which all operations are O(1) worst case time; these are known as real-time queues.
 
The third section of the book looks at some general techniques for designing functional data structures.
 
The first technique is lazy rebuilding. Instead of restoring the balance invariant eagerly after every operation, the structure is rebuilt periodically, with laziness used to spread the cost of each rebuild over the operations that follow.
 
The second is numerical representations: container designs modelled on positional number systems, where adding an element mirrors incrementing a number. By using redundant bases, we can implement types such as binary random-access lists.
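The post has no code for this part, so here is a hedged sketch of mine of the simplest (non-redundant) binary random-access list, to show the analogy: cons mirrors incrementing a binary number, and lookup skips whole digits before descending a single tree, giving O(log n) access.

```fsharp
// A complete binary tree of elements, sized like a binary digit's weight.
type Tree<'a> =
    | Leaf of 'a
    | Node of int * Tree<'a> * Tree<'a>   // cached size, left, right

type Digit<'a> = Zero | One of Tree<'a>
type RList<'a> = Digit<'a> list           // least-significant digit first

let size = function Leaf _ -> 1 | Node (n, _, _) -> n
let link t1 t2 = Node (size t1 + size t2, t1, t2)

// cons mirrors binary increment: a carry links two equal-sized trees
let rec consTree t = function
    | [] -> [ One t ]
    | Zero :: ds -> One t :: ds
    | One t' :: ds -> Zero :: consTree (link t t') ds

let cons x ts = consTree (Leaf x) ts

// lookup: skip whole digits, then descend one tree
let rec lookupTree i = function
    | Leaf x when i = 0 -> x
    | Leaf _ -> failwith "index out of bounds"
    | Node (n, l, r) ->
        if i < n / 2 then lookupTree i l else lookupTree (i - n / 2) r

let rec lookup i = function
    | [] -> failwith "index out of bounds"
    | Zero :: ds -> lookup i ds
    | One t :: ds ->
        if i < size t then lookupTree i t else lookup (i - size t) ds
```

In this plain binary version cons can cascade carries and so is O(log n) worst case; the redundant bases the book uses are what bring cons down to O(1).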
 
The third is data-structural bootstrapping. Types such as tries are covered here. A trie implements dictionary lookup by storing values in the nodes of a tree, with the edges labelled by fragments of the keys so that common prefixes are shared. For example, we could store the keys "cat", "dog", "car" and "cart" with values v1, v2, v3 and v4 as the trie:
 
root --ca--> o --r--> v3 --t--> v4
             |
             +--t--> v1
root --dog--> v2
 
The text notes that in some circumstances this type of data structure can be faster than a hash table, as not all characters of the string need to be processed during a lookup. For example, looking up the string "no" could miss after examining only the initial "n".
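A minimal trie sketch of my own (using one edge per character rather than the compressed multi-character edges in the diagram, and F#'s immutable Map for the children):

```fsharp
// A trie over char-list keys: each node may hold a value and has one
// child per outgoing character.
type Trie<'v> =
    { value : 'v option
      children : Map<char, Trie<'v>> }

let emptyTrie<'v> : Trie<'v> = { value = None; children = Map.empty }

let rec insert (key : char list) v trie =
    match key with
    | [] -> { trie with value = Some v }
    | c :: rest ->
        let child =
            match Map.tryFind c trie.children with
            | Some t -> t
            | None -> { value = None; children = Map.empty }
        { trie with children = Map.add c (insert rest v child) trie.children }

// find bails out at the first character with no edge: the early-miss
// behaviour mentioned above
let rec find (key : char list) trie =
    match key with
    | [] -> trie.value
    | c :: rest ->
        match Map.tryFind c trie.children with
        | Some child -> find rest child
        | None -> None
```

Looking up "no" in a trie holding "cat", "dog", "car" and "cart" fails at the first character, without examining the rest of the key.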
 
The final technique is implicit recursive slowdown.
 
I liked the book because it demonstrates that one can implement efficient data structures in functional languages. I liked the way that call-by-need (call-by-name plus memoisation) allowed persistent versions of the data structures to be implemented efficiently without introducing uncontrolled mutation of state into the implementation. With call-by-need, the mutable state is hidden neatly under the covers, though updates to this state could potentially require locking to ensure that calculations are only carried out once.
 
This entry was posted in Books.