‘Tis the season for loads of Haskell

The Xmas break is a good time to catch up with things. At the functional lunch at work we’ve been working through the NICTA Haskell course exercises, which take you through the implementation of many pieces of the Haskell libraries via a series of questions with associated tests. We’ve just done the parts on Functor, Applicative and Monad. The questions were all really interesting, but I felt I needed a better introduction to the material. Luckily I came across the online version of Learn You a Haskell and the sections on Functors, Monads and more Monads, which are a very good introduction to the relationship between the various concepts. Beyond that general reading, useful Monads such as the State Monad have numerous wiki pages devoted to them.

Of course in Haskell, Monads are not quite enough. When you write anything complicated, you end up needing to have multiple Monads (think IO and some other Monad such as the exception Monad), so it is also important to get a grip on Monad Transformers, which let you deal with a stack of Monads. There is a reasonable introduction to these on the Haskell wiki, and there is a list of the standard transformers on the typeclassopedia. Somehow the use of Monad Transformers still feels like magic, even when they are introduced one by one like in this paper.

Of course, you might ask why Monads actually matter, and this blog post covers the four problems that Monads attempt to solve, though I’m not sure I fully understood the Kleisli category.

Once you want to play with Haskell, the obvious tool to use is GHC. The really cool thing is that the source to GHC is available on GitHub, so you can look at the actual implementation of all of the Monads described in the previous articles. Linked from the Haskell wiki page are a few blog articles such as the 24 days of GHC extensions, which includes articles on the more experimental features such as Rank N types and Bang Patterns.

I very much enjoyed having more time to play with GHCi, the interactive REPL which sits on top of the GHC compiler. There were a few commands that I hadn’t noticed before, in particular the single stepping, which allows you to set breakpoints that are triggered when a particular piece of code is evaluated (and with Haskell being a lazy language, the point of evaluation is often not very clear). This would have made life a bit easier in the past. For example, in this blog post on the strictness of Monads, the author uses trace to report when a thunk is evaluated. Using the stepper in GHCi, it is possible to get the debugger to show us this.

[Screenshot: unravel]

By setting breakpoints, we can see the laziness, and in particular that doSomethingElse is evaluated before doSomething without needing to change the code and use the trace printing.

[Screenshot: somethingelsefirst]

Moreover, the debugger lets you see how much of the various data values has been evaluated, as printing in the debugger (using :print) does not force the evaluation. This makes it a really good way to understand evaluation to weak head normal form (WHNF). You can force the evaluation too using the :force command, though this obviously changes the behaviour of the program you are debugging.

[Screenshot: lazy]

All things considered, I learned a lot about Haskell over the holiday. There are many papers and books still to read, but the language continues to bring a lot of novel concepts out into the open. And it is hard not to find it fascinating.


A dump has so many uses

I’d been meaning to write a blog post about Windows dump files for a while now. I’ve used them at work a number of times. For example, when I worked on the CLR profilers, a dump file was a natural way to debug issues when there were problems with the COM component that we would inject into the profiled process. With the appropriate symbols, which you ought to have saved away to an internal symbol server, this was a good way of getting information on the issue.

I’ve used dump files in other scenarios too. When our cloud application seemed to be using too much CPU in production, it was easy to take a dump of the process – taking a dump stalls the process for only a short amount of time, so it is often possible to do it while an application is in production use. Better still, utilities like procdump from Sysinternals allow you to take a dump of a managed process when interesting events happen, so we can, for example, take a dump when a first-chance exception happens and capture the state of the process at the point the exception is thrown.
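
For instance, a command line something like the following (check the Sysinternals documentation for the current switches) asks procdump to write a full memory dump the first time the process throws a first-chance exception – the process and dump file names here are just placeholders:

procdump -ma -e 1 MyService.exe firstchance.dmp

The -ma switch requests a full dump rather than a mini dump, and -e 1 triggers on first-chance rather than only unhandled exceptions.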

I’ve used WinDbg to load and study such files in the past, though Visual Studio supports them pretty well these days. Moreover, Microsoft have recently released a managed .NET library, ClrMD, which allows you to programmatically open and process such files, and the source is now open.
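
To give a flavour of the API, a minimal sketch along these lines opens a dump and walks the managed threads – the exact entry points, such as LoadCrashDump, have moved around between releases of the Microsoft.Diagnostics.Runtime package, so treat this as an outline rather than gospel:

using System;
using Microsoft.Diagnostics.Runtime;

class DumpWalker
{
    static void Main(string[] args)
    {
        // Open the dump and create a runtime for the first CLR found inside it
        using (DataTarget target = DataTarget.LoadCrashDump(args[0]))
        {
            ClrRuntime runtime = target.ClrVersions[0].CreateRuntime();

            // Walk the managed threads, printing each frame of their stacks
            foreach (ClrThread thread in runtime.Threads)
            {
                Console.WriteLine("Thread {0:x}:", thread.OSThreadId);
                foreach (ClrStackFrame frame in thread.StackTrace)
                    Console.WriteLine("  {0}", frame);
            }
        }
    }
}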

Anyway, I was reminded of all of this when I came across a recent talk by Sasha Goldshtein, where he summarises the uses of dump files and shows some examples of the types of automated processing you could do using ClrMD. The talk also covers the many issues that dump files help you to debug (though you need to be slightly careful as the dump file may also contain sensitive data from the user’s process).

The Visual Studio experience when you open a dump file is quite impressive. The normal debugging windows are all available, so you can see the threads and the locals of the various methods. One thing that has always impressed me is that the locals window manages to display the property values of the various objects.

[Screenshot: evaluate]

I’d read in the past that this is possible because the VS code contains an IL interpreter, and wanted to prove this. I took the source code shown above and modified the property getter to throw an exception. Then, attaching WinDbg to the Visual Studio process and breaking on exceptions, I managed to stop at the following stack trace.

[Screenshot: stacktrace]

So indeed, Visual Studio contains code to allow the programming environment to simulate the execution of a property getter. That is rather cool!

Since I haven’t blogged for a while, I’d also like to point out some interesting articles I’ve read over Xmas. On the C# front there’s a good article on when you should avoid using async/await and another on the difference between throw and throw ex.

I’ve also been watching a load of Virtual Machine Summer School and Smalltalk videos, and never fail to be impressed with the work that Smalltalk did to get VM technology into the mainstream. There’s this interesting talk on the history of VMs and advice from someone who has implemented many Java runtimes (and if you like the idea of a managed OS then this talk on COSMOS is also very interesting), plus a load of talks on improving VM performance and the many contributions of Smalltalk.


At last you can prove your understanding of the C# specification

There was some discussion at work about the line of code:

IEnumerable<object> xx = new List<string>();

This line of code didn’t work in the initial versions of C#, even those with generics, because of the need for variance in the IEnumerable interface. [Variance allows the compiler to relate the instantiations of IEnumerable<A> and IEnumerable<B> when the types A and B have a subtype relationship.]
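
As a quick illustration, IEnumerable<out T> is declared covariant in its type parameter while List<T> is invariant, so with any compiler from C# 4 onwards the first assignment below compiles and the second does not:

using System.Collections.Generic;

class VarianceExample
{
    static void Main()
    {
        // Fine: IEnumerable<out T> is covariant and string derives from object,
        // so there is an implicit reference conversion from IEnumerable<string>
        IEnumerable<object> xx = new List<string>();

        // Does not compile: List<T> is invariant, so no conversion exists
        // List<object> yy = new List<string>();
    }
}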

Of course, when you’re discussing this kind of thing, it’s important that you can talk about the parts of the C# language specification that justify the steps the compiler is going to be taking. I believed that the conversion was because of an implicit reference conversion in the specification [6.1.6 Implicit Reference Conversion], but, of course, it’s really hard to be certain that this is the rule the compiler is going to use and that there isn’t some other rule which is actually being applied.

So what do we do?

I remembered reading how the Roslyn compiler had been written with an aim of keeping it very close to the C# specification, so that it was easier to verify that the implementation was correct. The Roslyn source is available here on GitHub and it’s fairly easy to build the compiler if you have Visual Studio 2015 Update 3.

You can then write a simple source file containing the above code, and set the csc project as the startup project with the debug command line pointing at this file. The various conversions that annotate the semantic tree are listed in a ConversionKind enumeration, and it is fairly easy to find uses of the ImplicitReference enumeration member to see where the annotation is added to the tree. This gave me somewhere to set a breakpoint, and the call stack from there told me where I should start stepping. [This isn’t always trivial because the call stack doesn’t really tell you how you got to a certain point, but rather tells you where you are going to go when certain method calls finish. These concepts are sometimes different (for example in the case of tail calls)]

For our example code, the key point is that we find the implicit reference conversion used in the ConversionsBase.cs file, where we see a call to the method HasAnyBaseInterfaceConversion with derivedType List<string> and baseType IEnumerable<object>. When we walk across the interfaces of the derivedType argument by calling the method d.AllInterfacesWithDefinitionUseSiteDiagnostics, we’ll enumerate across the type IEnumerable<string>, and the compiler will check that it is variance convertible to IEnumerable<object> in the call to HasInterfaceVarianceConversion.

At this point the call stack looks like this:

ConversionsBase.HasAnyBaseInterfaceConversion
ConversionsBase.HasImplicitConversionToInterface
ConversionsBase.HasImplicitReferenceConversion
ConversionsBase.ClassifyStandardImplicitConversion
ConversionsBase.ClassifyImplicitBuiltInConversionSlow
ConversionsBase.ClassifyImplicitConversionFromExpression
ConversionsBase.ClassifyConversionFromExpression
Binder.GenerateConversionForAssignment
Binder.BindVariableDeclaration
Binder.BindVariableDeclaration
Binder.BindDeclarationStatementParts

What did I learn from this exercise?

There is now a C# implementation of the specification, so it is actually possible to check that you understand the parts of the specification that make some code valid. No longer do we guess what a C++ implementation of the compiler is doing, but we can animate the specification by stepping through the C# code. From the parts of the code that I have read, I’m not sure that I’d completely agree that the code follows the specification (making it easy to map from one to the other), but having an open source implementation does mean you can search for terms that you see in the specification to help you narrow down the search.

There are loads of other parts of the specification that I want to understand in more detail, so this is definitely an exercise that I am going to repeat in the future.


Switching to Angular 2

Switching to Angular 2 by Minko Gechev

In the past I spent some time trying to get up to speed with Angular 1, and after some announcements at Xmas time about the progress being made on Angular 2, I decided it was time to see how things have changed between the major releases. I ordered this book, but had to wait until April before it was published. In the meantime Angular 2 has moved to being very close to full release – the recent ng-conf keynote explains this.

In short, the differences are very great… Angular 2 is very much aimed at being tooling friendly, giving it, as commented in a recent podcast I listened to, more of a Visual Basic RAD feel with proper components each with a lifecycle, inputs and outputs [though I know of no tools so far that support it]. Moreover there is support for server side rendering of the initial page to avoid the delays as the page boots [so called Angular Universal], and also the possibility of the page using web workers for doing the computation.

Chapter 1 of the book is a really good discussion of the current state of the web, and the lessons learned from experience with Angular 1, all of which have led to the modified design of Angular 2. Some concepts from Angular 1, like the notion of scope, have been removed, some important concepts like dependency injection have been kept but made easier to use, and the framework has been redesigned to make it easier to support server-side rendering.

Chapter 2 takes us through the basic concepts of Angular 2. There’s a big emphasis on building user interfaces via composition. Indeed in Angular 2, we have Directives which have no view and Components which do. Services and Dependency Injection still play a role, but features such as change detection are much simpler and more user optimisable – detection can be turned off, customised in various ways, and the framework also knows about various persistent [functional] data types which make change detection much quicker. The whole digest cycle of Angular 1 is gone – zones, which are explained here, can be used to capture state transitions that may change the state of the user interface. Templates remain for Components to express their views, though filters have been replaced by a richer notion of Pipes.

Angular 2 aims to be a lot more declarative [think tool support again]. Though everything can be transpiled down to more primitive versions of JavaScript, there is an emphasis in the book on using TypeScript which already supports some proposed extensions to JavaScript such as decorators. Chapter 3 of the book takes us through the TypeScript language, with its classes, interfaces, lambda expressions and generics.

Chapter 4, Getting Started with Angular 2 Components and Directives, digs into the building blocks of your Angular applications. We start with a basic hello world application, and the author explains how to get it running using code from his GitHub repository. We then move on to a ToDo application. This application emphasises breaking down the application into components that are connected together. For example, the input box for adding a ToDo raises an event that is handled by another component in the GUI. The chapter covers projection and the ViewChildren and ContentChildren decorators. This chapter also takes us through the lifecycle of the component, describing the eight basic lifecycle events that the component can optionally handle, and then the tree-style change detection that we now get from the framework – no more multiple change detection passes bounded by a maximal number of passes.

Chapter 5 goes through dependency injection in Angular 2. We now use an injector [which can have a parent and children] to control the scope of injectable classes, and we use decorators to declare the injected items. Again, this decorator syntax can be expressed using a lower level internal DSL if you do not want to use these higher level facilities.

Chapter 6 looks at Angular Forms, which let you write the kind of GUI you’ll need for the CRUD parts of your application in a fairly declarative manner, and explains the validator framework and how you can write your own validators. The second part of the chapter looks at routing.

Chapter 7 explains pipes and how you communicate with stateful services. In this part we have a quick look at async pipes and observables.

The last chapter looks at server side rendering and the developer experience. There are now command line tools for quickly starting a new Angular project.

I thought the book was a really good read. It seemed to cover the concepts quite concisely, and the examples made the concepts clear and understandable. It emphasised modern JavaScript and showed the flaws of the Angular 1 design. Now I need to put this all into practice writing some large application.


Vagrant, up and running

I’d obviously heard of Vagrant a long time ago, but only used it in anger for the first time a few weeks ago [when playing with MirageOS]. I decided that I needed to understand a little more about how it works, so I bought the book Vagrant: Up and Running by Mitchell Hashimoto.

The book is fairly short at only 138 pages, and is really a guide to the various commands that Vagrant offers, together with some example use cases. The introductory chapter discusses the need for Vagrant and desktop virtualisation. Chapter two walks us through the creation of a local Linux instance using Vagrant’s init and up commands. Chapter three looks at provisioning, and the example here is generating an image with Apache serving a web site. Chapter four looks at networking, extending the example by having the web server talk to a database. Chapter five looks at multi-machine clusters, showing how easy it is to provision a group of machines that emulate a production deployment. Chapter six talks about the concept of boxes.
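
For a flavour of how little ceremony is involved, the whole of chapter two boils down to a session something like this (hashicorp/precise64 being the box the official getting started guide used for a long time):

vagrant init hashicorp/precise64
vagrant up
vagrant ssh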

Chapter seven of the book talks about extending Vagrant using plugins. This is the section of the book that I was looking forward to. The previous chapters covered the kind of things that you can do with Vagrant, and I was interested in how Vagrant actually does its stuff, but sadly this chapter doesn’t really go into quite enough detail, concentrating on how you’d add new features to Vagrant rather than explaining the implementation.

Fortunately the source of Vagrant is available on GitHub, and it is a mass of Ruby code that is fairly easy to read. The default provider for Vagrant is the one for VirtualBox, and the code for this provider can be found here. It turns out that the various Vagrant commands end up driving the VBoxManage command line executable that is installed as part of the VirtualBox installation. Hence Vagrant is really an abstraction across a broad set of providers, allowing you to use them without having to worry about their specifics – a very clever and useful idea.

The book is a good, informative quick read and gives you an idea of what Vagrant can do for you. You can then dig into the implementation and details after reading it.


Just how serious is a bug

I was thinking the other day about how hard it is to evaluate the impact of a bug fix. You have a bug report and determine the fix for it – just how do you then weigh the impact of the bug fix against the instability that this might cause if you release it? And I think that this is a very hard call.

I came up against this problem nearly a year ago. Microsoft were just about to release the 4.6 version of the .NET framework, and we were lucky enough to get a beta version to try. Several of us installed this beta onto our development machines, and continued working as normal. One of the testers in the team noticed that a PDF rendering component that we were using, and had been using for years, was no longer laying out the graph correctly but was putting various elements in the wrong position. Mysteriously, it seemed to work on other people’s machines, and also seemed to work when we ran the application inside Visual Studio, but not when we ran it inside WinDbg. We also didn’t see any failures if we built as x86, and so we spent a while checking whether previous builds of the product had accidentally been x86. It was only when I was doing the washing up that night that I twigged that this was exactly what happens if you have a JIT bug. Running inside VS is going to turn off some of the JIT optimisations, whereas running inside WinDbg is going to leave these optimisations turned on.

The next morning I went in to work and verified this by setting the application config to use the legacy JIT, and sure enough the bug didn’t happen. It was then a case of gathering more data and isolating the method that was giving the wrong result. This turned out to be a point where the JIT made an optimised tail call. I therefore reported this on Connect. As is usual, I was then asked for a self-contained reproduction, which I supplied the next day. Time passed and then the issue was marked as fixed, with the fix being noted as available in a later release of 4.6.
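
For reference, opting a single application back onto the legacy x64 JIT is just a configuration switch – something like this in the application’s .config file (there are also registry and environment variable equivalents described in Microsoft’s documentation):

<configuration>
  <runtime>
    <!-- fall back to the legacy x64 JIT rather than RyuJIT on .NET 4.6 -->
    <useLegacyJit enabled="1" />
  </runtime>
</configuration>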

Around a month later .NET 4.6 was released to the world, and this blog post came out. An x64 tail call was affecting ASP.NET code, and it turned out that this was the same issue that I had reported.

The question is: how do you gauge the impact of a bug like this and decide whether the fix goes out straight away, or whether you test it a lot more and release it then? In the advisory, Microsoft said that they had run lots of in-house code and hadn’t found a manifestation of the issue. However, that might not have been quite good enough, as people can take such a bug and try to convert it into an exploit – see the comment on the thread in the CoreClr issue where the return address can be obtained – so if you get unlucky and there’s a widely available framework that allows an exploit to work, you probably do need to push out a fix. There is also the issue that the .NET framework is often used to run code written in C#, which probably doesn’t have a lot of tail call optimisable method invocations, but it can also be used as a runtime for languages like F# where the design patterns lead to lots more tail calls happening [as you recur down algebraic data types].
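
To make “tail call optimisable” concrete: a call is in tail position when it is the last thing a method does before returning, as in this made-up example, and the x64 JIT is free to turn such a call into a jump instead of growing the stack:

static int CountDown(int n, int acc)
{
    if (n == 0)
        return acc;
    // The recursive call is the last action before the return,
    // so the x64 JIT may emit it as a tail call (a jump) rather than a normal call
    return CountDown(n - 1, acc + n);
}

F# code, with its recursion over algebraic data types, produces far more call sites of this shape than typical C#.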

Pondering the issue, it seems to me that it is really hard to get enough data to inform your decision. What’s the ratio between CLR runs of F# programs compared to C# programs? What’s the percentage of C# programs that would hit this issue? And F# programs? What’s the chance of the bad compilation being turned into an exploit?

In the end I suspect it just comes down to a call by a product manager who guesses the severity, and then uses feedback to determine if the call was right – obviously the more “important” the sender of the feedback, the more weight they are given. And that’s a shame.


What do I need to know about Bitcoin?

There have been several Bitcoin discussions at work of late… what actually is Bitcoin? How can the underlying blockchain technology be used for other purposes? I did the Coursera course on the topic some time ago, but these questions pushed me to go a little deeper into the world of Bitcoin. I thought I’d try to summarise the articles I found useful here.

First, the book associated with the Coursera course has just been made available. I thought this book was a great read. It covers the basic cryptography that you need to understand, and gives some example crypto currencies which don’t quite work, before moving on to explain how the original bitcoin design fixed these issues. The course is very broad, with lectures discussing the motivation of the miners, who need to devote large amounts of computing power [and hence resources] to the network, in the hope of being the chosen node who generates the next accepted block of the transaction chain, and who can therefore gain the reward for the block including the associated transaction fees.

The course also covers the tricks of representation which can be used to allow the block chain to represent the ownership of various assets. I’d overlooked this aspect of the system when I first did the course, but the ability to associate bits of extra data with a transaction has led to extra uses for the block chain. For a small transaction fee that is paid to the miners, a user can record information, say a cryptographic hash of a document, into the block chain, and the permanent ledger property of the block chain can be used to prove that the document was published at a time in the past.
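
The hash itself is the easy part – a few lines of C# along these lines produce the 32-byte digest that would then be embedded in a transaction (typically via an OP_RETURN output):

using System;
using System.IO;
using System.Security.Cryptography;

class DocumentHash
{
    static void Main(string[] args)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(args[0]))
        {
            // 32-byte digest that uniquely identifies the document's contents
            byte[] digest = sha.ComputeHash(stream);
            Console.WriteLine(BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant());
        }
    }
}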

For an interesting idea of the history, I next read Nathaniel Popper’s Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money. This book covers the story, from the initial idea and paper published on a mailing list by Satoshi Nakamoto to the set of programmers who picked up on it and used their computers for the initial mining efforts. Eventually others invested in the idea, offering real world items in exchange for the virtual currency, and after some time there were investments in ventures such as brokerage firms that took on the risk of converting real currencies into virtual bitcoins. The book covers the history of Silk Road, the illegal drugs site, which used the anonymous nature of bitcoin to allow the trade of items such as drugs. Popper’s book is interesting – it narrates the history in detail [which gets a little tiresome at times] and also tries to explain how a virtual thing can actually have a real world value. The initial miners’ rewards for their mining efforts, including the blocks mined by the creator of bitcoin, are now worth considerable amounts of real world currency.

Obviously you need to know that your transactions are safe, and there are loads of papers out there that analyse the safety of the currency. I enjoyed this MSc thesis which used Google App Engine to do various experiments. For an idea of the representational tricks that mean you can use the block chain for recording ownership of things such as cars, OpenAssets is one encoding you could take a look at.

Of course, you probably want to have a play to understand all of this. I started out trying to find where I could get some bitcoins. There are various faucets that occasionally give away free bitcoins, and many online bitcoin wallet management services, but I wasn’t sure if I really wanted to sign up. Fortunately there is a test version of the bitcoin block chain that is used for testing the various clients – so you can make transactions with a set of bitcoins that you can get for free from the Testnet faucet, though the coins have no actual value.

You’re also going to need a library to program against bitcoin, and for this I selected NBitcoin, whose author has a free book on how to use it. It is available as a NuGet package so you can easily use it from Visual Studio.

First we need to get a private/public key pair as an identity, from which we can get an address that we can type into the testnet faucet to get some bitcoins. We generate a new key and serialize it to a file for use later. We can then read the key back and generate a bitcoin address. We can type that into the faucet, and we’ll be notified of the pending transfer.
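
In code that is only a handful of lines – a sketch against the NBitcoin API of that era (helpers such as GetBitcoinSecret and GetAddress have been reshuffled in later releases of the library):

using System;
using System.IO;
using NBitcoin;

class MakeKey
{
    static void Main()
    {
        // Generate a new private key and save it away in WIF format for later use
        var key = new Key();
        File.WriteAllText("key.wif", key.GetBitcoinSecret(Network.TestNet).ToString());

        // Read the key back and derive a testnet address to paste into the faucet
        var secret = new BitcoinSecret(File.ReadAllText("key.wif"), Network.TestNet);
        Console.WriteLine(secret.PrivateKey.PubKey.GetAddress(Network.TestNet));
    }
}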

There are lots of websites that let you look up data from the block chain. I can use one to look at the coins owned by my key and look at the details of the transaction that populated it. We can see the block chain process at work here. The transaction is initially unconfirmed as it makes its way across the peer to peer network until a miner picks it up and it becomes part of the block chain, though you need to wait until the block containing your transaction is several levels from the chain’s head before you can be confident that it will remain [typically 6 levels, with one level every 10 minutes on the real block chain].

The NBitcoin library is powerful. It contains a lot of utilities to work with various block chain information sources and contains extensions for dealing with asset management via coloured coins. I quickly tested it out using some C# to transfer some bitcoins associated with the key I had generated to another key. With bitcoin, a new transaction spends the outputs of previous transactions, so I needed to fetch the funding transaction using the transaction id of the transfer from the blockr block chain information site. I split the output of the transaction that gave the money to me into three outputs: the majority gets transferred back to my account, a second small amount goes to the second account, and I add a little information to the transaction that becomes part of the block chain ledger and could be used to record asset transfer. Any remaining bitcoin is used as a fee for a miner, encouraging one of the miners to include the transaction in the block chain. We can see the results on one of the chain exploration sites.
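
A rough sketch of that code, again using the TransactionBuilder from the NBitcoin of that time – the amounts, fee and the “hello ledger” payload are made-up placeholders, and the OP_RETURN helper may differ between library versions:

using System.Text;
using NBitcoin;

class Transfer
{
    static Transaction Build(Transaction fundingTx, uint outputIndex,
                             Key myKey, BitcoinAddress myAddress, BitcoinAddress otherAddress)
    {
        // The coin we are spending: a specific output of the transaction that paid us
        var coin = new Coin(fundingTx, outputIndex);

        return new TransactionBuilder()
            .AddCoins(coin)
            .AddKeys(myKey)
            .Send(otherAddress, Money.Coins(0.01m))                        // small amount to the other key
            .Send(TxNullDataTemplate.Instance.GenerateScriptPubKey(
                      Encoding.UTF8.GetBytes("hello ledger")), Money.Zero) // OP_RETURN data recorded in the ledger
            .SetChange(myAddress)                                          // the majority comes back to us
            .SendFees(Money.Satoshis(15000))                               // the remainder is left for the miner
            .BuildTransaction(sign: true);
    }
}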

In the C# code I needed to access the peer network to get my transaction to the miners. You can get a bitcoin client from this site. Running the bitcoin client for a while,

bitcoind -testnet

generated a list of peers in the peers.dat file which I could then use with the NBitcoin library to push my transaction out to the world. Alternatively the library can use a running local node, but I didn’t want to leave the node running so instead decided to use the peer data file. There’s lots of documentation here to discover how you can use the other utilities included in the download.

The block chain idea is fascinating – a distributed ledger with an immutable history – and there are many people trying to find uses for it. One example is the notion of side chains, which manage content off the main chain but anchor it by including a hashed signature of that data in the real block chain. There’s loads more experimenting to do, and I’m sure there are many interesting discoveries to come.

 
