I’m really enjoying the Principles of Reactive Programming course currently running on Coursera. We are now at week three, and have had some interesting lectures on monads and how they can be used to model effects such as latency. This theory was used to introduce the Future and Promise constructs in Scala, which we have used in the implementation of a scalable web server.
The programming language of the course is Scala, a language that I hadn’t used before. It has been fairly straightforward to get to grips with the language while programming the examples, though I must admit to having had some troubles with debugging using Eclipse.
Scala seems to have a massive range of language features. As well as being a straightforward functional/object hybrid (think F#), it has interesting notions such as objects being callable simply by defining an apply method on them, a PartialFunction type, and traits – interfaces that can contain implementation, and which can be multiply inherited by concrete classes. I think the style of mixin programming that this encourages is very powerful.
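A minimal sketch of those three features together (the names `Adder`, `Logging`, `Greeter` and `Service` are made up for illustration):

```scala
object MixinDemo {
  // An object with an `apply` method can be called like a function: Adder(2, 3).
  object Adder {
    def apply(a: Int, b: Int): Int = a + b
  }

  // A PartialFunction is only defined for some inputs; `isDefinedAt` tests membership.
  val reciprocal: PartialFunction[Int, Double] = {
    case n if n != 0 => 1.0 / n
  }

  // Traits can carry implementation, and a class can mix in several of them.
  trait Logging {
    def log(msg: String): String = s"[log] $msg"
  }
  trait Greeter {
    def greet(name: String): String = s"Hello, $name"
  }
  class Service extends Logging with Greeter
}
```

Calling `MixinDemo.Adder(2, 3)` yields 5 with no explicit `.apply`, and a `Service` instance picks up both `log` and `greet` from its mixed-in traits.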
I am a massive fan of Common Lisp macros, and it is interesting to see them in Scala too. Macros are used to implement all sorts of library features, including async, a feature somewhat like the C# async facility. It allows you to create a future together with a means to await the completion of other futures, the key point being that the code remains linear and it is the compiler that lifts the code blocks into completion callbacks. In Clojure, macros have been used to implement this kind of functionality too, modelled on Go blocks. Scala’s implementation differs slightly in semantics from the C# feature of the same name, which is based on the Task<> type: in C# the code runs synchronously until the first await; in Scala the async block instantly creates a future which spawns work elsewhere.
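To make the “lifting into completion callbacks” concrete, here is a sketch using only the standard library’s futures (the `fetchA`/`fetchB` stand-ins are invented for illustration). The commented-out linear form is roughly what the scala-async macro lets you write; the `flatMap` chain below it is the callback shape the macro expands into:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object AsyncDemo {
  // Stand-ins for real asynchronous work.
  def fetchA(): Future[Int] = Future(21)
  def fetchB(a: Int): Future[Int] = Future(a * 2)

  // With scala-async this would read linearly:
  //   async { val a = await(fetchA()); await(fetchB(a)) }
  // The macro lifts it into the completion callback we write by hand here:
  def combined(): Future[Int] =
    fetchA().flatMap(a => fetchB(a))

  def result(): Int = Await.result(combined(), 5.seconds)
}
```

The hand-written version is tolerable for one dependency, but each extra `await` adds another level of nesting, which is exactly the boilerplate the macro removes.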
Scala also offers call-by-name semantics for function parameters, and a type of language level dependency injection via implicits. Sometimes, the number of language features feels a little overwhelming.
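Both features can be shown in a few lines (the `twice`, `greet` and `Greeting` names are made up for this sketch):

```scala
object ImplicitDemo {
  var evaluations = 0

  // Call-by-name: `thunk: => Int` is re-evaluated each time it is referenced
  // inside the body, not once at the call site.
  def twice(thunk: => Int): Int = thunk + thunk

  // Implicits as language-level dependency injection: the Greeting parameter
  // is filled in from whatever implicit value is in scope.
  case class Greeting(text: String)
  def greet(name: String)(implicit g: Greeting): String = s"${g.text}, $name"

  implicit val defaultGreeting: Greeting = Greeting("Hello")

  // Inside this scope the implicit is resolved automatically.
  def demo(): String = greet("Ada")
}
```

Here `twice { evaluations += 1; 5 }` bumps the counter twice, because the by-name argument runs once per reference, and `demo()` never mentions the `Greeting` it depends on.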
Macros are a great feature for implementing new language features. In a call-by-value language without call-by-need parameters, they are often used to implement language constructs such as IF, where we want to avoid evaluating some code depending on another parameter, without having to wrap it in a construct like a lambda expression to delay execution. Using code walking, an entire function definition can be expanded and analysed (as in the implementation of async in Clojure, where the code is translated into SSA form and essentially transformed into code with a different meaning). There’s a good introduction to code walking here.
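In Scala the IF case doesn’t even need a macro, because by-name parameters give you the delayed evaluation directly; a user-defined conditional might look like this sketch (`myIf` is a hypothetical name):

```scala
object MyIf {
  // In a strict language without by-name parameters, a macro would have to
  // rewrite the call site to delay the branches; Scala's `=> A` parameters
  // mean the branches are only evaluated if actually selected.
  def myIf[A](cond: Boolean)(thenBranch: => A)(elseBranch: => A): A =
    if (cond) thenBranch else elseBranch
}
```

Because the untaken branch is never evaluated, `MyIf.myIf(true)(1)(sys.error("boom"))` returns 1 rather than throwing.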
Code walking (macro expansion) is also vital if you want to do type inference in an untyped language such as Clojure or Common Lisp. This paper discusses some of the work that went into Racket for doing this, which formed the basis of Typed Clojure, also covered in the author’s thesis. When I worked on a Lisp compiler in my first job after university, I spent a little of my spare time trying to implement type inference for Common Lisp based on the work of Henry Baker on the Nimble type inferencer. This style of inference lacks the elegance of the type checking in a language like ML, where the compiler does virtually all of the type inference you’ll ever need using Algorithm W (or, more recently, constraint solving) on a language whose types are constrained to make this possible.
I’m not quite sure how I feel about gradual typing and would like to try it on some larger examples. In a language like ML, once the code type checks, it seems to work. In a language like C#, the type checking sometimes gets in the way of prototyping, and yet even correctly typed code fails at runtime. In a language like Common Lisp, you can prototype very quickly and pick up the type problems in the unit tests. I also think you can use type checking both for correctness and for improving the speed of the compiled code, and I’ve always relied on it for the latter in the code I’ve written.
The developer of core.typed is interviewed in this podcast.