Switching to Angular 2

Switching to Angular 2 by Minko Gechev

In the past I spent some time trying to get up to speed with Angular 1, and after some announcements at Xmas time about the progress being made on Angular 2, I decided it was time to see how things had changed between the major releases. I ordered this book, but had to wait until April before it was published. In the meantime Angular 2 has moved to being very close to full release – the recent ng-conf keynote explains this.

In short, the differences are great… Angular 2 is very much aimed at being tooling-friendly, giving it, as a recent podcast I listened to put it, more of a Visual Basic RAD feel, with proper components, each with a lifecycle, inputs and outputs [though I know of no tools that support this yet]. Moreover, there is support for server-side rendering of the initial page to avoid the delays as the page boots [so-called Angular Universal], and also the possibility of the page using web workers to do its computation.

Chapter 1 of the book is a really good discussion of the current state of the web and the lessons learned from experience with Angular 1, all of which led to the modified design of Angular 2. Some concepts from Angular 1, like the notion of scope, have been removed; some important concepts, like dependency injection, have been kept but made easier to use; and the framework has been redesigned to make it easier to support server-side rendering.

Chapter 2 takes us through the basic concepts of Angular 2. There’s a big emphasis on building user interfaces via composition. Indeed, in Angular 2 we have Directives, which have no view, and Components, which do. Services and Dependency Injection still play a role, but features such as change detection are much simpler and more easily tuned by the user – detection can be turned off or customised in various ways, and the framework also knows about various persistent [functional] data types which make change detection much quicker. The whole digest cycle of Angular 1 is gone – zones, which are explained here, can be used to capture state transitions that may change the state of the user interface. Templates remain the way Components express their views, though filters have been replaced by a richer notion of Pipes.

Angular 2 aims to be a lot more declarative [think tool support again]. Though everything can be transpiled down to more primitive versions of JavaScript, there is an emphasis in the book on using TypeScript which already supports some proposed extensions to JavaScript such as decorators. Chapter 3 of the book takes us through the TypeScript language, with its classes, interfaces, lambda expressions and generics.

Chapter 4, Getting Started with Angular 2 Components and Directives, digs into the building blocks of your Angular applications. We start with a basic hello world application, and the author explains how to get it running using code from his GitHub repository. We then move on to a ToDo application, which emphasises breaking the application down into components that are connected together. For example, the input box for adding a ToDo raises an event that is handled by another component in the GUI. The chapter covers projection and the ViewChildren and ContentChildren decorators. It also takes us through the lifecycle of a component, describing the eight basic lifecycle events that a component can optionally handle, and then the tree-style change detection that we now get from the framework – no more repeated change detection passes bounded by a maximum number of iterations.

Chapter 5 goes through dependency injection in Angular 2. We now use an injector [which can have a parent and children] to control the scope of injectable classes, and we use decorators to declare the injected items. Again, this decorator syntax can be expressed using a lower level internal DSL if you do not want to use these higher level facilities.

Chapter 6 looks at Angular Forms, which let you write the kind of GUI you’ll need for the CRUD parts of your application in a fairly declarative manner, and explains the validator framework and how you can write your own validators. The second part of the chapter looks at routing.

Chapter 7 explains pipes and how you communicate with stateful services. In this part we have a quick look at async pipes and observables.

The last chapter looks at server side rendering and the developer experience. There are now command line tools for quickly starting a new Angular project.

I thought the book was a really good read. It seemed to cover the concepts quite concisely, and the examples made them clear and understandable. It emphasised modern JavaScript and showed the flaws of the Angular 1 design. Now I need to put all this into practice by writing a large application.


Vagrant, up and running

I’d obviously heard of Vagrant a long time ago, but only used it in anger for the first time a few weeks ago [when playing with MirageOS]. I decided that I needed to understand a little more about how it works, so I bought the book Vagrant: Up and Running by Mitchell Hashimoto.

The book is fairly short at only 138 pages, and is really a guide to the various commands that Vagrant offers, together with some example use cases. The introductory chapter discusses the need for Vagrant and desktop virtualisation. Chapter two walks us through the creation of a local Linux instance using Vagrant’s init and up commands. Chapter three looks at provisioning, and the example here is generating an image with Apache serving a web site. Chapter four looks at networking, extending the example by having the web server talk to a database. Chapter five looks at multi-machine clusters, showing how easy it is to provision a group of machines that emulate a production deployment. Chapter six talks about the concept of boxes.

Chapter seven of the book talks about extending Vagrant using plugins. This is the section of the book that I had been looking forward to. The previous chapters covered the kinds of things that you can do with Vagrant, and I was interested in how Vagrant actually does its stuff, but sadly this chapter doesn’t go into quite enough detail, concentrating on how you’d add new features to Vagrant rather than explaining the implementation.

Fortunately the source of Vagrant is available on GitHub, and it is a mass of Ruby code that is fairly easy to read. The default provider for Vagrant is the one for VirtualBox, and the code for this provider can be found here. It turns out that the various Vagrant commands end up using the vboxmanage command line executable that is installed as part of the VirtualBox installation. Hence Vagrant is really an abstraction across a broad set of providers, allowing you to use them without having to worry about their specifics – a very clever and useful idea.

The book is a good, informative quick read and gives you an idea of what Vagrant can do for you. You can then dig into the implementation and details after reading it.


Just how serious is a bug?

I was thinking the other day about how hard it is to evaluate the impact of a bug fix. You have a bug report and determine the fix for it – just how do you then weigh the impact of the bug fix against the instability that releasing it might cause? I think that this is a very hard call.

I came up against this problem nearly a year ago. Microsoft were just about to release the 4.6 version of the .NET framework, and we were lucky enough to get a beta version to try. Several of us installed this beta onto our development machines, and continued working as normal. One of the testers in the team noticed that a PDF rendering component that we were using, and had been using for years, was no longer laying out graphs correctly but was putting various elements in the wrong position. Mysteriously, it seemed to work on other people’s machines, and also seemed to work when we ran the application inside Visual Studio, but not when we ran it inside Windbg. We also didn’t see any failures if we built as x86, and so we spent a while checking whether previous builds of the product had accidentally been x86. It was only when I was doing the washing up that night that I twigged that this is exactly what happens if you have a JIT bug. Running inside VS is going to turn off some of the JIT optimisations, whereas running inside Windbg is going to leave these optimisations turned on.

The next morning I went in to work and verified this by setting the application config to use the legacy JIT, and sure enough the bug didn’t happen. It was then a case of gathering more data and isolating the method that was giving the wrong result. This turned out to be a point where the JIT made an optimised tail call. I therefore reported this on Connect. As is usual, I was then asked for a self contained reproduction, which I supplied the next day. Time passed and then the issue was marked as fixed, with the fix being noted as available in a later release of 4.6.
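For anyone who wants to flip the same switch, the opt-out is a single element in the application’s app.config [setting the COMPLUS_useLegacyJit environment variable to 1 has the same effect]:

<configuration>
  <runtime>
    <useLegacyJit enabled="1" />
  </runtime>
</configuration>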

Around a month later .NET 4.6 was released to the world. And this blog post came out. An x64 tail call was affecting ASP.NET code, and it turned out that this was the same issue that I had reported.

The question is: how do you gauge the impact of a bug like this and decide whether the fix goes out straight away, or whether you test it a lot more and release it later? In the advisory, Microsoft said that they had run lots of in-house code and hadn’t found a manifestation of the issue. However, that might not have been quite good enough, as people can take such a bug and try to convert it into an exploit – see the comment on the thread in the CoreClr issue where the return address can be obtained – so if you get unlucky and there’s a widely available framework that allows an exploit to work, you probably do need to push out a fix. There is also the issue that the .NET framework is often used to run code written in C#, which probably doesn’t have a lot of tail-call-optimisable method invocations, but it can also be used as the runtime for languages like F#, where the design patterns lead to many more tail calls happening [as you recur down algebraic data types].

Pondering the issue, it seems to me that it is really hard to get enough data to inform your decision. What’s the ratio between CLR runs of F# programs compared to C# programs? What’s the percentage of C# programs that would hit this issue? And F# programs? What’s the chance of the bad compilation being turned into an exploit?

In the end I suspect it just comes down to a call by a product manager, who guesses the severity and then uses feedback to determine whether the call was right – obviously, the more “important” the sender of the feedback, the more weight it is given. And that’s a shame.


What do I need to know about Bitcoin?

There have been several bitcoin discussions at work of late… what actually is Bitcoin? How can the underlying blockchain technology be used for other purposes? I did the Coursera course on the topic some time ago, but these questions pushed me to go a little deeper into the world of Bitcoin. I thought I’d try to summarise the articles I found useful here.

First, the book associated with the Coursera course has just been made available. I thought this book was a great read. It covers the basic cryptography that you need to understand, and gives some example cryptocurrencies which don’t quite work, before moving on to explain how the original bitcoin design fixed these issues. The course is very broad, with lectures discussing the motivation of the miners, who devote large amounts of computing power [and hence resources] to the network in the hope of being the chosen node that generates the next accepted block of the transaction chain, and who can therefore claim the reward for the block, including the associated transaction fees.

The course also covers the tricks of representation which can be used to allow the block chain to record the ownership of various assets. I’d overlooked this aspect of the system when I first did the course, but the ability to associate extra bits of data with a transaction has led to extra uses for the block chain. For a small transaction fee that is paid to the miners, a user can record information, say a cryptographic hash of a document, in the block chain, and the permanent-ledger property of the block chain can then be used to prove that the document existed at that point in the past.

For an interesting idea of the history, I next read Nathaniel Popper’s Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money. This book covers the story from the initial idea and paper, published on a mailing list by Satoshi Nakamoto, to the set of programmers who picked up on it and used their computers for the initial mining efforts. Eventually others invested in the idea, offering real-world items in exchange for the virtual currency, and after some time there were investments in activities such as brokerage firms, which took on the risk of converting real currencies into virtual bitcoins. The book covers the history of Silk Road, which used the anonymous nature of bitcoin to allow the trade of illegal items such as drugs. Popper’s book is interesting – it narrates the history in detail [which gets a little tiresome at times] and also tries to explain how a virtual thing can actually have a real-world value. The initial miners’ rewards for their mining efforts, including the blocks mined by the creator of bitcoin, are now worth considerable amounts of real-world currency.

Obviously you need to know that your transactions are safe, and there are loads of papers out there that analyse the safety of the currency. I enjoyed this MSc thesis, which used Google App Engine to run various experiments. For an idea of the representational tricks that let you use the block chain to record ownership of things such as cars, OpenAssets is one encoding you could take a look at.

Of course, you probably want to have a play to understand all of this. I started out trying to find where I could get some bitcoins. There are various faucets that occasionally give away free bitcoins, and many online bitcoin wallet management services, but I wasn’t sure I really wanted to sign up. Fortunately there is a test version of the bitcoin block chain that is used for testing the various clients – so you can make transactions with a set of bitcoins that you can get for free from the Testnet faucet, though the coins have no actual value.

You’re also going to need a library to program against bitcoin, and for this I selected NBitcoin, whose author has written a free book on how to use it. It is available as a NuGet package, so you can easily use it from Visual Studio.

First we need to get a private/public key pair as an identity, from which we can get an address that we can type into the testnet faucet to get some bitcoins. We generate a new key and serialize it to a file for use later. We can then read the key back and generate a bitcoin address. We can type that into the faucet, and we’ll be notified of the pending transfer.
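As a sketch of what this looks like in code [written against the NBitcoin API of the time – names such as GetWif and GetAddress may have moved in later versions]:

using System;
using System.IO;
using NBitcoin;

class GenerateKey
{
    static void Main()
    {
        // Create a fresh private key and save it, WIF-encoded, for later runs
        var key = new Key();
        File.WriteAllText("key.wif", key.GetWif(Network.TestNet).ToString());

        // Read the key back and derive the testnet address to paste into the faucet
        var secret = new BitcoinSecret(File.ReadAllText("key.wif"), Network.TestNet);
        Console.WriteLine(secret.PrivateKey.PubKey.GetAddress(Network.TestNet));
    }
}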

There are lots of websites that let you look up data from the block chain. I can use one to look at the coins owned by my key and at the details of the transaction that populated it. We can see the block chain process at work here. The transaction is initially unconfirmed as it makes its way across the peer-to-peer network, until a miner picks it up and it becomes part of the block chain. You then need to wait until the block containing your transaction is several levels from the chain’s head before you can be confident that it will remain [typically 6 levels, with one level roughly every 10 minutes, on the real block chain].

The NBitcoin library is powerful. It contains a lot of utilities for working with various block chain information sources, and contains extensions for dealing with asset management via coloured coins. I quickly tested it out using some C# to transfer some bitcoins associated with the key I had generated to another key. With bitcoin, one spends an output of a previous transaction, so I needed to fetch the funding transaction, using its transaction id, from the blockr block chain information site. I split the output of the transaction that gave the money to me into three outputs: the majority gets transferred back to my account, a second small amount goes to the second account, and I add a little information to the transaction that becomes part of the block chain ledger and could be used to record asset transfer. Any remaining bitcoin is left as a fee, encouraging one of the miners to include the transaction in the block chain. We can see the results on one of the chain exploration sites.
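In outline, the transaction building looked something like the sketch below. This is hedged against the NBitcoin API described in the author’s book – the BuildSpend helper, the variable names and the amounts are mine, and the funding transaction is assumed to have been fetched already [e.g. via the blockr API]:

using System.Text;
using NBitcoin;

static class TransferDemo
{
    public static Transaction BuildSpend(Transaction fundingTx, int fundingIndex,
                                         BitcoinSecret mySecret, BitcoinAddress otherAddress)
    {
        var tx = new Transaction();

        // Spend the output of the earlier transaction that paid us; the old
        // signing API wants the spent scriptPubKey placed in ScriptSig as a hint
        tx.Inputs.Add(new TxIn(new OutPoint(fundingTx.GetHash(), fundingIndex))
        {
            ScriptSig = mySecret.GetAddress().ScriptPubKey
        });

        // The majority goes back to our own address...
        tx.Outputs.Add(new TxOut(Money.Coins(0.9m), mySecret.GetAddress()));
        // ...a small amount goes to the second account...
        tx.Outputs.Add(new TxOut(Money.Coins(0.05m), otherAddress));
        // ...and an OP_RETURN output records a little data in the ledger
        var note = Encoding.UTF8.GetBytes("a note for the permanent record");
        tx.Outputs.Add(new TxOut(Money.Zero, TxNullDataTemplate.Instance.GenerateScriptPubKey(note)));

        // Whatever the input held beyond the outputs [0.05 here, if it held 1 tBTC]
        // is implicitly left as the miner's fee
        tx.Sign(mySecret, false);
        return tx;
    }
}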

In the C# code I needed to access the peer network to get my transaction to the miners. You can get a bitcoin client from this site. Running the bitcoin client for a while,

bitcoind -testnet

generated a list of peers in the peers.dat file, which I could then use with the NBitcoin library to push my transaction out to the world. Alternatively the library can use a running local node, but I didn’t want to leave the node running, so instead decided to use the peer data file. There’s lots of documentation here to discover how you can use the other utilities included in the download.
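As a sketch of the running-local-node alternative [Node.ConnectToLocal and TxPayload come from NBitcoin.Protocol; the peers.dat route instead loads the saved peer list – NBitcoin has an AddressManager for this – and connects to one of those peers]:

using System.Threading;
using NBitcoin;
using NBitcoin.Protocol;

static class Broadcaster
{
    public static void Broadcast(Transaction tx)
    {
        // Connect to the locally running "bitcoind -testnet" node
        using (var node = Node.ConnectToLocal(Network.TestNet))
        {
            node.VersionHandshake();              // complete the protocol handshake
            node.SendMessage(new TxPayload(tx));  // hand the signed transaction to the peer
            Thread.Sleep(500);                    // give the message time to be sent
        }
    }
}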

The block chain idea is fascinating – a distributed ledger with an immutable history – and there are many people trying to find uses for it. One example is the notion of side chains, which manage their own content but anchor it to the real block chain by including a hashed signature of that content there. There’s loads more experimenting to do, and I’m sure there are many interesting discoveries to come.



Your .NET applications can start being more native

I was interested in trying out the dotnet command line interface and seeing how it all works. Microsoft have, after all, just told us that they will be delaying the release of DNX to allow its integration into the CLI model. You can download a build of the command line tools from here, and it’s really easy to get going.
You can generate a demonstration “hello world” project using
dotnet new
You then get all of the packages it depends on using
dotnet restore
And then build it using
dotnet compile
Now it’s business as usual and you can run it as normal
cd bin\Debug\dnxcore50
dotnet2.exe [named this because I was in a folder named dotnet2 when I created the project]
What excited me more about the command line tools, though, is that they have now started offering the option of compiling .NET applications to native code. Be warned that this is very early functionality and they only guarantee support for small hello world applications.
You can choose to compile using an ahead-of-time version of the normal JIT compiler [on x64], or you can go via generated C++ code. I wanted to see the kind of native code that could be produced, and therefore chose the latter option.
dotnet compile --native --cpp
[Note that you have to be in a VS2015 x64 Native Tools command window to get the right tools available on the PATH]
This generates an executable in the native subdirectory, bin\debug\dnxcore50\native
dotnet2.exe
which runs very quickly – there’s no JITting needed to get the application running, and it is noticeably quicker on my fairly old laptop.
The demo application is a very simple hello world, and you can find the emitted C++ in the directory
obj/Debug/dnxcore50/native/dotnet2.cpp
I was interested in how the GC got linked into the project, particularly as I had heard of CoreRT and couldn’t see any appropriately named dll when I attached windbg to the running executable.
I therefore modified the code to generate garbage [and built it in a folder named dotnet, so I ended up with an application named dotnet.exe]
public static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
    for (int i = 0; i < 100000; i++)
    {
        var x = new object();
    }
    Console.ReadLine();
}
The generated Main method takes the form
#line 8 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void dotnet::ConsoleApplication::Program::Main(System_Private_CoreLib::System::String__Array* args){int32_t i=0; System_Private_CoreLib::System::Object* x=0; uint8_t _l2=0; _bb0: {
#line 8 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
#line 9 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void* _1=__load_string_literal("Hello World!"); System_Console::System::Console::WriteLine_13((System_Private_CoreLib::System::String*)_1);
#line 10 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
i=0; { goto _bb28; }; } _bb16: {
#line 11 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
#line 12 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void* _8=__allocate_object(System_Private_CoreLib::System::Object::__getMethodTable()); System_Private_CoreLib::System::Object::_ctor((System_Private_CoreLib::System::Object*)_8); x=(System_Private_CoreLib::System::Object*)_8;
#line 13 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
#line 10 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
int32_t _10=i; int32_t _11=_10+1; i=_11; } _bb28: {
#line 10 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
int32_t _3=i; int32_t _4=_3<100000; _l2=_4; int32_t _6=_l2; if (_6!=0) { goto _bb16; }; } _bb40: {
#line 14 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void* _7=System_Console::System::Console::ReadLine();
#line 15 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
return; } }
Unfortunately, running the application with a debugger attached, I got an access violation:
dotnet!WKS::gc_heap::mark_object_simple1+0x180:
00007ff6`c2a4a460 4d8b01          mov     r8,qword ptr [r9] ds:0000003c`00032000=????????????????
0:000> k
# Child-SP          RetAddr           Call Site
00 0000003c`7715f3c0 00007ff6`c2a4ae60 dotnet!WKS::gc_heap::mark_object_simple1+0x180
01 0000003c`7715f430 00007ff6`c2a2e850 dotnet!WKS::gc_heap::mark_object_simple+0x1e0
02 0000003c`7715f480 00007ff6`c2a283be dotnet!WKS::GCHeap::Promote+0x90
03 0000003c`7715f4b0 00007ff6`c2a21aa9 dotnet!GcBulkEnumObjects+0x2e
04 0000003c`7715f4e0 00007ff6`c2a14c6c dotnet!Module::EnumStaticGCRefs+0x69
05 0000003c`7715f540 00007ff6`c2a4b233 dotnet!RuntimeInstance::EnumAllStaticGCRefs+0x6c
06 0000003c`7715f5a0 00007ff6`c2a43606 dotnet!WKS::gc_heap::mark_phase+0x193
07 0000003c`7715f630 00007ff6`c2a432e3 dotnet!WKS::gc_heap::gc1+0xd6
08 0000003c`7715f690 00007ff6`c2a2d823 dotnet!WKS::gc_heap::garbage_collect+0x753
09 0000003c`7715f6f0 00007ff6`c2a5a629 dotnet!WKS::GCHeap::GarbageCollectGeneration+0x303
0a 0000003c`7715f740 00007ff6`c2a2c4ee dotnet!WKS::gc_heap::try_allocate_more_space+0x1b9
0b 0000003c`7715f780 00007ff6`c2a194ac dotnet!WKS::GCHeap::Alloc+0x5e
0c 0000003c`7715f7b0 00007ff6`c29d9b53 dotnet!RhpNewFast+0x5c
0d 0000003c`7715f7e0 00007ff6`c29d3132 dotnet!System_Private_CoreLib::System::Runtime::InternalCalls::RhpNewFast+0x13 [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 39893]
0e 0000003c`7715f810 00007ff6`c29ee993 dotnet!System_Private_CoreLib::System::Runtime::RuntimeExports::RhNewObject+0x92 [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 37660]
0f 0000003c`7715f890 00007ff6`c29dc783 dotnet!RhNewObject+0x13 [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 37666]
10 0000003c`7715f8c0 00007ff6`c29d26fe dotnet!dotnet::ConsoleApplication::Program::Main+0x53 [c:\users\clive.tong\desktop\dotnet\program.cs @ 12]
11 0000003c`7715f930 00007ff6`c29ef1ea dotnet!dotnet::_Module_::StartupCodeMain+0x4e [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 37467]
12 0000003c`7715f980 00007ff6`c2a62718 dotnet!main+0x4a [c:\users\clive.tong\desktop\dotnet\program.cs @ 5676]
13 (Inline Function) --------`-------- dotnet!invoke_main+0x22 [f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl @ 74]
14 0000003c`7715f9d0 00007ff9`d5872d92 dotnet!__scrt_common_main_seh+0x124 [f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl @ 264]
15 0000003c`7715fa10 00007ff9`d5b39f64 KERNEL32!BaseThreadInitThunk+0x22
16 0000003c`7715fa40 00000000`00000000 ntdll!RtlUserThreadStart+0x34
Notice that the source locations show that many of the frames are compiled versions of the C++ code that we found in the obj directory, while the other frames come from parts of the runtime that are linked in without source information.
Running the command line with the -v option, you can see the files that are passed to the C++ compiler, and that lots of them come from the dotnet SDK directory:
Running C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\..\..\VC\bin\amd64\link.exe "/NOLOGO" "/DEBUG" "/MANIFEST:NO" "/IGNORE:4099" "/out:C:\Users\clive.tong\Desktop\dotnet\bin\Debug\dnxcore50\native\dotnet.exe" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" "C:\Program Files\dotnet\bin\sdk\PortableRuntime.lib" "C:\Program Files\dotnet\bin\sdk\bootstrappercpp.lib" "/MACHINE:x64" "C:\Users\clive.tong\Desktop\dotnet\obj\Debug\dnxcore50\native\dotnet.obj"
The PortableRuntime here is the actual CoreRT code. If you clone the CoreRT project and build using the build.cmd script, you can indeed make your own version of this SDK.
It includes code such as a garbage collector and a fairly sophisticated runtime, containing features such as thread hijacking, which is implemented in this file. There are loads of interesting comments if you browse the code.
It will be interesting to see how this portable runtime turns out… it’s certainly true that no JITting makes start-up time much better, but it isn’t clear to me how features like Reflection are going to be supported. It will also be interesting to see how the debugging experience works when debugging AOT-compiled code.
While we are on the subject of .NET, I’d like to recommend these talks from NDC London: a talk about the new CLI, and a talk on the history of ASP.NET which goes into some detail about how things have changed over the years. There is also this talk on the implementation of SignalR from several years ago.

Another good DevWinter

It was DevWinter last weekend, and as usual there was an interesting set of talks.

The “my adventure with elm” talk was good. The speaker gave a very brief introduction to Reactive programming and to the Elm language, and then implemented the Snake-and-apples game in Elm inside a browser on one of the Try Elm sites. Apart from uncommenting a couple of pre-written functions, he wrote the whole application in front of us. This was a good introduction to the language and a very well presented talk.

The “anatomy of a catastrophic performance problem” talk was also very good. A witty presentation of a real-life performance problem, which showed how frequently we developers think we’ve analysed a problem and hence push out a fix, without trying the fix on a reproduction [which wasn’t available in this case].

In the afternoon, I attended the double session on ClojureScript. This wasn’t very hands on in the end, and the presenter spent a lot of time discussing the Clojure language and the various benefits of functional styles of programming, as well as selling the advantages of transpiling to JavaScript rather than writing JavaScript in the first place. The presenter did use the lein Figwheel plugin to get a REPL connected to a ClojureScript session running inside a browser, which also reloads when modifications are made to the source. This is all built using the lein tool, and getting started is as simple as typing:

lein new figwheel myproject
cd myproject
lein figwheel
… wait for compilation to happen
Open a browser at http://localhost:3449
… and the REPL will connect to the browser
(js/alert "Hello")
… and the alert dialog is raised inside the browser

If you then open the developer console in your browser and make an edit to any of the project’s ClojureScript files, you will see the modified code reloaded into the browser session allowing quick development turnaround.

The best talk of the day was “my first unikernel with mirage os” by Matthew Gray. This was a hands-on session based on the code in the speaker’s GitHub repository. I’d been meaning to play with Mirage for some time, as it is a perfect match with some of my interests – operating systems written in functional languages [which I first read about during a year working in industry before university, where I spent some free time playing with LispKit Lisp] and hypervisors. The idea is that you take an application written in OCaml and produce from it a unikernel that runs on top of Xen, and the Mirage team have made this very easy to do. They have also implemented various utilities, such as a TLS library and a web server, that your application can use. Matt Gray’s repository contains a Vagrant script that can be used to get an Ubuntu development environment suitable for playing around with Mirage. Once you have this running inside VirtualBox, it is easy to get the various examples running.

The speaker gave a brief overview of Unikernels and then helped the audience to get going. There was another talk on Mirage in the afternoon, but I didn’t attend that.

What did I enjoy about DevWinter? The range of the talks, on all kinds of topics. Unlike the typical Microsoft event I go to, the talks cover a range of topics that are interesting and are aimed at what might happen in the future. I also very much enjoyed the developer experience talks. A very nice venue makes this a great way to spend a Saturday twice a year.


Actors have become popular again

Reactive Messaging Patterns with the Actor Model: Applications and Integration in Scala and Akka by Vaughn Vernon

I very much enjoyed this book’s discussion of Actors and the reasons why the Actor model is well matched with modern applications, though I enjoyed the book less when it got into the various patterns.

Chapter one talks about the design of modern Reactive applications and then discusses the origin of the Actor model. It’s very good. Chapter two gives a brief introduction to the Scala programming language and the Akka library… actor systems, supervision, remoting and clustering are all explained really well. Chapter three discusses the need for scalability, with a good discussion of how clock speed is no longer increasing and instead we are given many more cores for our applications to use. It is hard to write multi-threaded code, and the Actor model provides a simple alternative where we don’t need to worry about memory barriers and locking – though issues associated with locking, such as deadlock and livelock, can still manifest themselves at higher levels in an application.
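The Akka.NET port mentioned at the end of this post brings the same model to C#; here’s a minimal sketch of that no-locks style [the Counter class and its string messages are my own invention]:

using System;
using Akka.Actor;

// All mutation of _count happens inside the actor, one message at a time,
// so no locks or memory barriers are needed
class Counter : ReceiveActor
{
    private int _count;

    public Counter()
    {
        Receive<string>(msg =>
        {
            if (msg == "increment") _count++;
            else if (msg == "report") Sender.Tell(_count);
        });
    }
}

class Program
{
    static void Main()
    {
        var system = ActorSystem.Create("demo");
        var counter = system.ActorOf<Counter>("counter");
        counter.Tell("increment");
        counter.Tell("increment");
        Console.WriteLine(counter.Ask<int>("report").Result);  // prints 2
        system.Terminate().Wait();
    }
}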

From chapter four onwards, the author lists a large number of patterns for using actors – patterns such as Message Router, Publish-Subscribe Channel, Scatter-Gather and Service Activator. There’s a discussion of each pattern, together with an example written using the Akka test framework that demonstrates it. I must admit that I found it hard work to go through each of the patterns, read the discussion and then understand some of the multi-page examples… I will probably go back to some of the examples when I get more time.

I did learn a lot along the way. For example, I was interested in how a modern application’s need for guaranteed message processing would map to actors and their “at least once” delivery guarantees. This is covered by the Guaranteed Delivery pattern, which uses Akka persistence and the AtLeastOnceDelivery trait to store messages that haven’t been acknowledged, and which hooks the actor restore protocol to ensure that nothing gets lost if actors are restarted.
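Mapped onto Akka.NET [a hedged sketch – I’m assuming its Akka.Persistence package mirrors the JVM AtLeastOnceDelivery API, and the message and class names are mine], the shape of the pattern is:

using Akka.Actor;
using Akka.Persistence;

class Envelope { public long DeliveryId; public string Payload; }   // sent to the destination
class Confirm { public long DeliveryId; }                           // acknowledgement from it
class MsgSent { public string Payload; }                            // journalled events
class MsgConfirmed { public long DeliveryId; }

class ReliableSender : AtLeastOnceDeliveryActor
{
    private readonly ActorPath _destination;
    public ReliableSender(ActorPath destination) { _destination = destination; }

    public override string PersistenceId { get { return "reliable-sender-1"; } }

    protected override bool ReceiveCommand(object message)
    {
        var payload = message as string;
        if (payload != null)
        {
            // Journal the intent, then start [re]delivering until confirmed
            Persist(new MsgSent { Payload = payload }, e =>
                Deliver(_destination, id => new Envelope { DeliveryId = id, Payload = e.Payload }));
            return true;
        }
        var confirm = message as Confirm;
        if (confirm != null)
        {
            // Journal the confirmation so redelivery stops, even across a restart
            Persist(new MsgConfirmed { DeliveryId = confirm.DeliveryId }, e =>
                ConfirmDelivery(e.DeliveryId));
            return true;
        }
        return false;
    }

    protected override bool ReceiveRecover(object message)
    {
        // Replaying the journal rebuilds the set of unconfirmed deliveries
        var sent = message as MsgSent;
        if (sent != null)
        {
            Deliver(_destination, id => new Envelope { DeliveryId = id, Payload = sent.Payload });
            return true;
        }
        var confirmed = message as MsgConfirmed;
        if (confirmed != null)
        {
            ConfirmDelivery(confirmed.DeliveryId);
            return true;
        }
        return false;
    }
}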

After reading this book, I went and read one of the early PhD theses on the subject – Gul Agha’s Actors: A Model of Concurrent Computation in Distributed Systems. There are versions of this available for download if you Google for it. It’s interesting to read how the Actor model was pushed as a model for computation, with a lot of effort in the thesis going into giving the model a rigorous semantics, and also interesting to see the emphasis on the become operation, which allows an actor to change its message processing function. This feature is not something that gets pushed in the modern interpretation of the model.

There is an open source project, Akka.NET, that ports the Akka framework to the .NET platform.

There is also Pony, a language and runtime with Actors at the very core. I arrived at it via this interesting talk. There is a lot of clever technology associated with the language – such as a type system that makes it possible to prevent aliasing of message data – as well as notable runtime features such as the way it detects unreferenced actors.
