How do you nest your virtualization?

I was looking through some of the YouTube talks from Ignite when I came across this interesting talk on nested virtualization in Hyper-V. Since September you have been able to provision virtual machines on Azure that support nested virtualization. This is obviously a very powerful feature, and it enables many scenarios (such as testing) which you couldn’t easily do before.

This made me start thinking about how you get nested virtualization to work on other platforms such as AWS. I’d come across virtualization using binary translation in the past (as that was the way that VMware did its thing back in the day), and found this fairly recent paper that talks about the method. The paper also covers how the resulting virtualization can run in a cloud environment.

That then leads to the question of whether a software implementation can compare with hardware-assisted virtualization, and there are some papers, such as this one, that study the problem. Hardware support on Intel requires some extensions, the so-called VT-x extensions, which are available on more modern processors and which make things a lot easier for the implementation.


Bitcoin – where does it go next?

Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts by David Gerard

I did a fair bit of reading about Bitcoin in the past (after my interest had been piqued by the Coursera course on the topic), and have spent some time following the various newsgroups and issues, but have been troubled about whether Bitcoin stands a chance of succeeding in the real world.

This book is really good. It takes a strongly anti-Bitcoin stance (and is equally sceptical of the associated technologies), and puts forward really good arguments for why Bitcoin is a massive fad. As usual the truth is probably somewhere in the middle, but the author’s arguments about the troubles of scaling Bitcoin really make it seem useless – it can handle 7 transactions a second compared to Visa’s 50,000, many vendors gave up on it because of lack of interest, and it takes so long to verify a transaction that it is an impractical method of payment for many types of purchase.

The author also gives some examples of where smart contracts have turned out to be anything but smart. He points out that legal contracts are all subject to interpretation and regularly need arbitration, and so any kind of contract whose meaning is defined by a segment of code is never going to work at the edges, where the contract depends on inputs from the real world.

There is also a good set of arguments as to why private blockchains fail to hit the mark. The current Bitcoin blockchain burns as much power as Ireland, with much of the proof of work being used to randomise which miner controls the transaction. Once you move to a private blockchain, you reintroduce the centralisation whose absence was one of the main selling points of blockchains, so you might just as well go back to a database instead… and indeed, lots of people do not want their transactions listed in detail on a public medium, so any kind of global ledger is unlikely to gain traction.

In summary, a thought-provoking, short read. Like many things, the technology is clever, and it is really a question of whether there is a place for it in the real world.


Some books since last time

For some reason I just haven’t got around to blogging for a long while, but fortunately I have had time to read a fair number of computing-related books, which I thought I would write up here. There are a couple of management-related books thrown in for good measure.

Managing Humans by Michael Lopp

This is a collection of stories about the author’s experiences managing development teams. A fun, humorous read, which made it even clearer that management is a lot about applying common sense to a range of activities.

Troubleshooting with the Windows Sysinternals Tools by Mark Russinovich and Aaron Margosis

The Sysinternals tools are amazing, with specific tools offering information that would be hard to dig out yourself. This book takes you through the various tools one by one and tells you about many of each tool’s lesser-known features. The book also contains a large section of case studies on how the tools were used to diagnose a wide range of problems.

Smalltalk-80: Bits of History, Words of Advice by Glenn Krasner

I remember how much I enjoyed this book when I first read it 30 years ago. Lots of chapters written by different people on topics around the early Smalltalk systems, including details about implementations, porting efforts to get the standard image to run on diverse sets of hardware, and discussions of improvements to the system for the future.

Understanding Computation by Tom Stuart

This book uses implementation in Ruby as a way of understanding computation, from formal semantics to automata theory to Turing machines. And you get to learn some Ruby along the way. I really enjoyed this book. The writing is interesting, and you never really understand something until you implement it.

The One Device – The Secret History of the iPhone by Brian Merchant

A very interesting read on the history of the iPhone. The book has chapters on the various components, such as the battery and the screen, giving lots of interesting background about each area.

Hit Refresh – The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone by Satya Nadella

An interesting, part-autobiographical read by Microsoft’s CEO. We learn some details about Nadella’s early years and the groups that he worked with when he joined Microsoft. There are chapters that talk about where Microsoft is going in the future – mixed reality, artificial intelligence and quantum computing. I’m not sure I learned as much from the book as I had hoped, but it is worth a quick read.

Debugging Applications for Microsoft .NET and Microsoft Windows by John Robbins

This book is now fairly old and I was lucky to find a copy for one pound in a charity shop. Some good advice on debugging, and some nice debugging war stories, from a renowned conference speaker. Some of the .NET related material is a little out of date, but there’s still enough information to make this a fun read.

Developing Windows NT Device Drivers by Edward Dekker and Joseph Newcomer

Again, a fairly old book that you can pick up quite cheaply. Lots of insights into the world of device drivers, and along the way a lot of insights into the implementation of the Windows operating system. A very good read.

Type-driven Development with Idris by Edwin Brady

There are more and more discussions of dependent types on the various news feeds that I subscribe to. Idris is a Haskell-like language designed for type-driven development, a style where dependent types can be used to specify the desired solution and the system can attempt some primitive theorem proving to verify the dependent type constraints. The book is a good introduction both to the idea and to the language itself.



‘Tis the season for loads of Haskell

The Xmas break is a good time to catch up with things. At the functional lunch at work we’ve been working through the NICTA Haskell course exercises, which take you through the implementation of many pieces of the Haskell libraries via a series of questions with associated tests. We’ve just done the parts on Functor, Applicative and Monad. The questions were all really interesting, but I felt I needed a better introduction to the material. Luckily I came across the online version of Learn You a Haskell and its sections on Functors, Monads and more Monads, which are a very good introduction to the relationship between the various concepts. Beyond that general reading, useful Monads such as the State Monad have numerous wiki pages devoted to them.

Of course, in Haskell, Monads are not quite enough. When you write anything complicated, you end up needing multiple Monads (think IO plus some other Monad such as the exception Monad), so it is also important to get a grip on Monad Transformers, which let you deal with a stack of Monads. There is a reasonable introduction to these on the Haskell wiki, and there is a list of the standard transformers on the typeclassopedia. Somehow the use of Monad Transformers still feels like magic, even when they are introduced one by one as in this paper.

Of course, you might ask why Monads actually matter, and this blog post covers the four problems that Monads attempt to solve – though I’m not sure I fully understood the Kleisli category.

Once you want to play with Haskell, the obvious tool to use is GHC. The really cool thing is that the source to GHC is available on GitHub, so you can look at the actual implementation of all of the Monads described in the previous articles. Linked from the Haskell wiki page are a few blog articles such as the 24 days of GHC extensions, which includes articles on the more experimental features such as Rank N types and Bang Patterns.

I very much enjoyed having more time to play with GHCi, the interactive REPL which sits on top of the GHC compiler. There were a few commands that I hadn’t noticed before, in particular the single stepping, which allows you to set breakpoints that are triggered when a particular piece of code is evaluated (and with Haskell being a lazy language, the point of evaluation is often not very clear). This would have made life a bit easier in the past. For example, in this blog post on the strictness of Monads, the author uses trace to report when a thunk is evaluated. Using the stepper in GHCi, it is possible to get the debugger to show us this.

[Screenshot: unravel]

By setting breakpoints, we can see the laziness – and in particular that doSomethingElse is evaluated before doSomething – without needing to change the code to use trace printing.

[Screenshot: somethingelsefirst]

Moreover, the debugger lets you see how much of the various data values has been evaluated, as printing in the debugger (using :print) does not force evaluation. This makes it a really good way to understand evaluation to weak head normal form (WHNF). You can force the evaluation too, using the :force command, though this obviously changes the behaviour of the program you are debugging.

[Screenshot: lazy]

All considered, I learned a lot about Haskell over the holiday. There are many papers and books still to read, but the language continues to bring a lot of novel concepts out into the open, and it is hard not to find it fascinating.


A dump has so many uses

I’d been meaning to write a blog post about Windows dump files for a while now. I’ve used them at work a number of times. For example, when I worked on the CLR profilers, a dump file was a natural way to debug issues when there were problems with the COM component that we would inject into the profiled process. With the appropriate symbols, which you ought to have saved away to an internal symbol server, this was a good way of getting information on the issue.

I’ve used dump files in other scenarios too. When our cloud application seemed to be taking too much CPU in production, it was easy to take a dump of the process – taking a dump stalls the process for only a short amount of time, so it is often possible even when an application is serving production traffic. Better still, utilities like procdump from Sysinternals allow you to take a dump of a managed process when interesting events happen, so we can, for example, take a dump when a first-chance exception occurs and capture the state of the process at the moment the exception is thrown.
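procdump does all of the heavy lifting for you, but it is worth remembering that a dump is just one API call away. As a hedged sketch (my own illustration, not how procdump is implemented), a process can write a minidump of itself via the dbghelp MiniDumpWriteDump function – though a real tool would dump from a separate process or thread to avoid capturing the dumper itself mid-flight:

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

class SelfDump
{
    // The classic P/Invoke for dbghelp!MiniDumpWriteDump.
    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(
        IntPtr hProcess, uint processId, IntPtr hFile, int dumpType,
        IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

    const int MiniDumpWithFullMemory = 0x2;

    static void Main()
    {
        using (var file = new FileStream("self.dmp", FileMode.Create))
        {
            Process self = Process.GetCurrentProcess();
            // Write a full-memory dump of the current process to self.dmp.
            MiniDumpWriteDump(self.Handle, (uint)self.Id,
                file.SafeFileHandle.DangerousGetHandle(),
                MiniDumpWithFullMemory,
                IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
        }
    }
}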

I’ve used WinDbg to load and study such files in the past, though Visual Studio supports them pretty well these days. Moreover, Microsoft have recently released a managed .NET library, ClrMD, which allows you to programmatically open and process such files, and its source is now open.
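To give a flavour of ClrMD (a minimal sketch against the v1-era API – some of these names moved around in later releases), printing the managed call stack of every thread in a dump takes only a few lines:

using System;
using Microsoft.Diagnostics.Runtime;  // the ClrMD NuGet package

class DumpWalker
{
    static void Main(string[] args)
    {
        // Open the dump file and create a runtime for the first CLR found in it.
        using (DataTarget target = DataTarget.LoadCrashDump(args[0]))
        {
            ClrRuntime runtime = target.ClrVersions[0].CreateRuntime();

            // Walk every managed thread and print its call stack.
            foreach (ClrThread thread in runtime.Threads)
            {
                Console.WriteLine($"Thread {thread.OSThreadId:x}:");
                foreach (ClrStackFrame frame in thread.StackTrace)
                    Console.WriteLine($"  {frame.DisplayString}");
            }
        }
    }
}

From there it is a short step to the kind of automated dump analysis described below.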

Anyway, I was reminded of all of this when I came across a recent talk by Sasha Goldshtein, where he summarises the uses of dump files and shows some examples of the types of automated processing you could do using ClrMD. The talk also covers the many issues that dump files help you to debug (though you need to be slightly careful as the dump file may also contain sensitive data from the user’s process).

The Visual Studio experience when you open a dump file is quite impressive. The normal debugging windows are all available, so you can see the threads and the locals of the various methods. One thing that has always impressed me is that the Locals window manages to display the property values of the various objects.

[Screenshot: evaluate]

I’d read in the past that this is possible because the Visual Studio code contains an IL interpreter, and I wanted to prove this. I took the source code shown above and modified the property getter to throw an exception. Then, attaching WinDbg to the Visual Studio process and breaking on exceptions, I managed to stop at the following stack trace.

[Screenshot: stacktrace]

So indeed, Visual Studio contains code to allow the programming environment to simulate the execution of a property getter. That is rather cool!

Since I haven’t blogged for a while, I’d also like to point out some interesting articles I’ve read over Xmas. On the C# front, there’s a good article on when you should avoid using async/await, and another on the difference between throw; and throw ex;.
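As a quick illustration of the latter (my own minimal example, not taken from the article): throw; rethrows the current exception and preserves the original stack trace, while throw ex; restarts the stack trace at the point of the rethrow.

using System;

class RethrowDemo
{
    static void Fail()
    {
        throw new InvalidOperationException("boom");
    }

    static void Main()
    {
        try
        {
            try { Fail(); }
            catch (Exception) { throw; }        // stack trace still shows Fail()
        }
        catch (Exception e) { Console.WriteLine(e.StackTrace); }

        try
        {
            try { Fail(); }
            catch (Exception ex) { throw ex; }  // stack trace restarts here; the Fail() frame is lost
        }
        catch (Exception e) { Console.WriteLine(e.StackTrace); }
    }
}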

I’ve also been watching a load of Virtual Machine Summer School and Smalltalk videos, and I never fail to be impressed by the work that Smalltalk did to get VM technology into the mainstream. There’s this interesting talk on the history of VMs and advice from someone who has implemented many Java runtimes (and if you like the idea of a managed OS then this talk on COSMOS is also very interesting), plus a load of talks on improving VM performance and the many contributions of Smalltalk.


At last you can prove your understanding of the C# specification

There was some discussion at work about the line of code:

IEnumerable<object> xx = new List<string>();

This line of code didn’t work in the initial versions of C#, even those with generics, because of the need for variance in the IEnumerable interface. [Variance allows the compiler to relate the instantiations IEnumerable<A> and IEnumerable<B> when the types A and B have a subtype relationship.]
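As a quick illustration of what variance does and doesn’t buy you (my own example, not part of the original discussion): IEnumerable<out T> is covariant from C# 4 onwards, whereas List<T> is invariant and Action<in T> is contravariant.

using System;
using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // IEnumerable<out T> is covariant, so this compiles from C# 4 onwards:
        IEnumerable<object> xs = new List<string> { "hello" };

        // List<T> is invariant, so this would be a compile error (CS0029):
        // List<object> ys = new List<string>();

        // Action<in T> is contravariant, so the conversion runs the other way:
        Action<object> printAny = o => Console.WriteLine(o);
        Action<string> printString = printAny;  // a handler for any object
                                                // can certainly handle a string
        printString("world");
        foreach (object x in xs) Console.WriteLine(x);
    }
}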

Of course, when you’re discussing this kind of thing, it’s important that you can talk about the parts of the C# language specification that justify the steps the compiler is going to be taking. I believed that the conversion was because of an implicit reference conversion in the specification [6.1.6 Implicit Reference Conversion], but, of course, it’s really hard to be certain that this is the rule the compiler is going to use and that there isn’t some other rule which is actually being applied.

So what do we do?

I remembered reading how the Roslyn compiler had been written with an aim of keeping it very close to the C# specification, so that it was easier to verify that the implementation was correct. The Roslyn source is available here on GitHub and it’s fairly easy to build the compiler if you have Visual Studio 2015 Update 3.

You can then write a simple source file containing the above code, and set the csc project as the startup project with the debug command line set to point to this file. The various conversions that annotate the semantic tree are listed in a ConversionKind enumeration, and it is fairly easy to find uses of the ImplicitReference enumeration member to see where the annotation is added to the tree. This gave me a way to set a breakpoint and then look at the call stack to determine where I should start stepping. [This isn’t always trivial, because the call stack doesn’t really tell you how you got to a certain point, but rather where you are going to go when certain method calls finish. These concepts are sometimes different (for example, in the case of tail calls).]

For our example code, the key point is that we find the implicit reference conversion used in the ConversionsBase.cs file, where we see a call to the method HasAnyBaseInterfaceConversion with derivedType List<string> and baseType IEnumerable<object>. When we walk across the interfaces of the derivedType argument by calling the method d.AllInterfacesWithDefinitionUseSiteDiagnostics, we’ll enumerate across the type IEnumerable<string>, and the compiler will check that it is variance convertible to IEnumerable<object> in the call to HasInterfaceVarianceConversion.

At this point the call stack looks like this:

ConversionsBase.HasAnyBaseInterfaceConversion
ConversionsBase.HasImplicitConversionToInterface
ConversionsBase.HasImplicitReferenceConversion
ConversionsBase.ClassifyStandardImplicitConversion
ConversionsBase.ClassifyImplicitBuiltInConversionSlow
ConversionsBase.ClassifyImplicitConversionFromExpression
ConversionsBase.ClassifyConversionFromExpression
Binder.GenerateConversionForAssignment
Binder.BindVariableDeclaration
Binder.BindVariableDeclaration
Binder.BindDeclarationStatementParts
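
Incidentally, if you just want to confirm how the compiler classifies a conversion, without stepping through the binder, Roslyn exposes the same machinery through its public API. Here is a small sketch of that (my own code, compiled against the Microsoft.CodeAnalysis.CSharp package, and assuming the two generic types resolve against the referenced core library):

using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class ClassifyDemo
{
    static void Main()
    {
        var compilation = CSharpCompilation.Create(
            "probe",
            references: new[]
            {
                MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
            });

        // Build the symbols for List<string> and IEnumerable<object>.
        ITypeSymbol listOfString = compilation
            .GetTypeByMetadataName("System.Collections.Generic.List`1")
            .Construct(compilation.GetSpecialType(SpecialType.System_String));
        ITypeSymbol enumerableOfObject = compilation
            .GetTypeByMetadataName("System.Collections.Generic.IEnumerable`1")
            .Construct(compilation.GetSpecialType(SpecialType.System_Object));

        // Ask the compiler how it would classify the conversion.
        Conversion conversion =
            compilation.ClassifyConversion(listOfString, enumerableOfObject);

        Console.WriteLine($"Implicit: {conversion.IsImplicit}, " +
                          $"Reference: {conversion.IsReference}");
    }
}

This should print Implicit: True, Reference: True, matching the ImplicitReference classification we stepped through above.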

What did I learn from this exercise?

There is now a C# implementation of the specification, so it is actually possible to check that you understand the parts of the specification that make some code valid. No longer do we guess what a C++ implementation of the compiler is doing, but we can animate the specification by stepping through the C# code. From the parts of the code that I have read, I’m not sure that I’d completely agree that the code follows the specification (making it easy to map from one to the other), but having an open source implementation does mean you can search for terms that you see in the specification to help you narrow down the search.

There are loads of other parts of the specification that I want to understand in more detail, so this is definitely an exercise that I am going to repeat in the future.


Switching to Angular 2

Switching to Angular 2 by Minko Gechev

In the past I spent some time trying to get up to speed with Angular 1, and after some announcements at Xmas time about the progress being made on Angular 2, I decided it was time to see how things have changed between the major releases. I ordered this book, but had to wait until April before it was published. In the meantime Angular 2 has moved to being very close to full release – the recent ng-conf keynote explains this.

In short, the differences are very great… Angular 2 is very much aimed at being tooling-friendly, giving it, as someone commented in a recent podcast I listened to, more of a Visual Basic RAD feel, with proper components each with a lifecycle, inputs and outputs [though I know of no tools so far that support this]. Moreover, there is support for server-side rendering of the initial page to avoid the delays as the page boots [so-called Angular Universal], and also the possibility of the page using web workers for doing the computation.

Chapter 1 of the book is a really good discussion of the current state of the web and the lessons learned from experience with Angular 1, all of which have led to the modified design of Angular 2. Some concepts from Angular 1, like the notion of scope, have been removed; some important concepts, like dependency injection, have been kept but made easier to use; and the framework has been redesigned to make it easier to support server-side rendering.

Chapter 2 takes us through the basic concepts of Angular 2. There’s a big emphasis on building user interfaces via composition. Indeed, in Angular 2 we have Directives, which have no view, and Components, which do. Services and Dependency Injection still play a role, but features such as change detection are much simpler and more open to user optimisation – detection can be turned off or customised in various ways, and the framework also knows about various persistent [functional] data types which make change detection much quicker. The whole digest cycle of Angular 1 is gone – zones, which are explained here, can be used to capture state transitions that may change the state of the user interface. Templates remain for Components to express their views, though filters have been replaced by a richer notion of Pipes.

Angular 2 aims to be a lot more declarative [think tool support again]. Though everything can be transpiled down to more primitive versions of JavaScript, there is an emphasis in the book on using TypeScript which already supports some proposed extensions to JavaScript such as decorators. Chapter 3 of the book takes us through the TypeScript language, with its classes, interfaces, lambda expressions and generics.

Chapter 4, Getting Started with Angular 2 Components and Directives, digs into the building blocks of your Angular applications. We start with a basic hello world application, and the author explains how to get it running using code from his GitHub repository. We then move on to a ToDo application. This application emphasises breaking the application down into components that are connected together. For example, the input box for adding a ToDo raises an event that is handled by another component in the GUI. The chapter covers projection and the ViewChildren and ContentChildren decorators. It also takes us through the lifecycle of a component, describing the eight basic lifecycle events that the component can optionally handle, and then the tree-style change detection that we now get from the framework – no more multiple change detection passes bounded by a maximal number of passes.

Chapter 5 goes through dependency injection in Angular 2. We now use an injector [which can have a parent and children] to control the scope of injectable classes, and we use decorators to declare the injected items. Again, this decorator syntax can be expressed using a lower-level internal DSL if you do not want to use these higher-level facilities.

Chapter 6 looks at Angular forms, which allow you to write the kind of GUI you’ll need for the CRUD parts of your application in a fairly declarative manner; it explains the validator framework and how you can write your own validators. The second part of the chapter looks at routing.

Chapter 7 explains pipes and how you communicate with stateful services. In this part we have a quick look at async pipes and observables.

The last chapter looks at server-side rendering and the developer experience. There are now command-line tools for quickly starting a new Angular project.

I thought the book was a really good read. It seemed to cover the concepts quite concisely, and the examples made them clear and understandable. It emphasised modern JavaScript and showed the flaws of the Angular 1 design. Now I need to put this all into practice by writing some large application.
