Adaptive Code Via C#: Agile coding with design patterns and SOLID principles

Adaptive Code Via C#: Agile coding with design patterns and SOLID principles by Gary McLean Hall

This is very much a book of three parts – a section named “An Agile Foundation” that covers basic Scrum and some of the basic principles of modern software engineering, a section going into detail about each of the SOLID principles, and a section called “Adaptive sample” that tries to demonstrate how the techniques in the book might be used across a number of sprints, by describing two sprints’ worth of development on a chat client.

The first section is an introduction to agile practices. There is an introduction to Scrum which covers the roles and phases of a Scrum project, emphasising how interaction with the customer drives the development process. This is followed by a chapter on dependencies and layering, which goes into detail about how you should manage your project dependencies and even gives some brief coverage of topics such as aspect-oriented programming. Chapter three goes through interfaces and design patterns, showing how useful interfaces can be in the .NET world. Chapter four covers unit testing and refactoring – the two are closely related, as the tests give you confidence that a refactoring has not broken anything, allowing you to tidy working code without risk. All interesting material, with some good observations.

The second section of the book, and by far the largest part, covers writing SOLID code. The author takes each of the SOLID principles in turn, with a chapter on each, discussing the meaning of the principle and then demonstrating its use. Along the way there is often a lot of additional material. The chapter on the single responsibility principle, for example, has plenty of extra material on the decorator pattern, showing how decorators can be used for logging and for introducing asynchrony, and the chapter on the Liskov substitution principle has some material on pre- and post-conditions. This section of the book is really very good.
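To make the decorator idea concrete, here is a minimal sketch of my own (not an example from the book); the IMessageSender interface and the class names are made up for illustration:

using System;

public interface IMessageSender
{
    void Send(string message);
}

public class SmtpMessageSender : IMessageSender
{
    public void Send(string message)
    {
        // The real sending logic would live here.
    }
}

// The decorator adds logging without touching the wrapped class,
// keeping each class to a single responsibility.
public class LoggingMessageSender : IMessageSender
{
    private readonly IMessageSender inner;

    public LoggingMessageSender(IMessageSender inner)
    {
        this.inner = inner;
    }

    public void Send(string message)
    {
        Console.WriteLine("Sending: " + message);
        inner.Send(message);
        Console.WriteLine("Sent.");
    }
}

Composing the two – new LoggingMessageSender(new SmtpMessageSender()) – gives you logging wherever an IMessageSender is expected, and the same wrapping trick works for introducing asynchrony.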

I was less convinced by the adaptive sample. We follow conversations between team members as they write a simple chat application. This covers the initial planning for a sample application, and then the team carries out two sprints. The code is available on github so that you can follow along, but it felt like a lot of effort to understand the requirements and the way that the team was working.

The first two sections of the book are really good though, and I certainly learned something from each chapter.


Modern C++ Design: magic with templates

Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu

I’ve done some C++ programming in my time, which has required me to write some small, simple templates. However, just before going on holiday I noticed this book on the work bookshelf and thought it deserved a read. In the past I’d come across posts implementing simple functions such as Fibonacci using C++ templates in a way that gets them evaluated at compile time, but this book takes the idea much further and shows the tremendous power of type-level computation during the compilation phase.

On reading the book, one important message came through – there’s a pattern-matching functional language embedded in the C++ compiler which can be used to perform some truly impressive feats, including many instances of compile-time generation of highly optimized code. The astounding creativity of the author, and the edge cases of C++ semantics that are needed to understand some of the constructs, make this a fascinating read, though I’m not sure I’ll ever get to use some of the techniques in the applications that I write.

Chapter one is a great read. The author discusses where features like multiple inheritance and templates fall short when they are used on their own, and then goes on to explain how their combination can be very powerful – after all, the template expansion process can generate new classes and can determine how they are mixed together in the multiple inheritance hierarchy. He calls this “policy-based class design”, and it’s a very clever technique. Chapters two and three introduce the main techniques that will be used later. Chapter two goes through some advanced features of C++ templates, including partial template specialization and local classes, as well as ways of detecting convertibility and inheritance at compile time. Chapter three describes a TypeList datatype, which is used in many of the later techniques to let template expansion work through the items of a list of types, generating code as it goes. These two chapters are truly fascinating, and I wish I fully understood all of the concepts that they cover.

The other eight chapters of the book use these techniques to implement a number of patterns – small-object allocation, generalised functors (the command pattern on steroids), implementation techniques for singletons emphasizing how policy selection lets you pick implementation trade-offs, smart pointers, object factories, abstract factories, the visitor pattern and multi-methods. In each case the author describes the problem, works through some of the various design decisions, and then offers several solutions. Many of these solutions build on the concepts in the first few chapters.

I learned loads about C++ in general, lots more about templates and got to grips with ideas around template metaprogramming. Some of the design discussions are also really interesting. Brilliant!


CLR Load contexts can make things confusing

I’ve never really spent a lot of time thinking about CLR binding contexts, but a colleague at work had a problem related to them that took a bit of thought to figure out. These posts, one on Stack Overflow and one on why adding to the private path of an AppDomain has been deprecated, help explain part of the issue and also why there is (maybe) a need for different contexts.

My colleague was writing a Visual Studio extension. The extension was loaded by VS, and his code created a new AppDomain and used remoting to create an application object in that AppDomain. He tried to cast the result of CreateInstanceAndUnwrap to the type of his remote class and got an InvalidCastException.
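The failing pattern looked roughly like this – a hand-written sketch with made-up names, where RemoteWorker stands in for his remote class:

using System;

// RemoteWorker stands in for the real remote class; it derives from
// MarshalByRefObject so that a transparent proxy comes back across the
// AppDomain boundary.
public class RemoteWorker : MarshalByRefObject
{
    public void DoWork() { }
}

public static class ExtensionCode
{
    public static void Run()
    {
        var domain = AppDomain.CreateDomain("ExtensionDomain");

        object proxy = domain.CreateInstanceAndUnwrap(
            typeof(RemoteWorker).Assembly.FullName,
            typeof(RemoteWorker).FullName);

        // In the extension this cast threw an InvalidCastException.
        var worker = (RemoteWorker)proxy;
        worker.DoWork();
    }
}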

After some investigation it turned out that this was a problem with type loading. The returned transparent proxy needs to validate the cast that is being made, which it does in a series of framework methods that showed up in the stack trace I grabbed using WinDbg and SOS.


The issue was that the initial assembly was loaded using LoadFrom, but when remoting returns a type it is going to be loaded into the Load context, as remoting uses the name of the type to try to find it. This is essentially the same effect that you get if you run the following code:

var assembly1 = Assembly.LoadFrom(
    @"C:\Users\clive\documents\visual studio 2013\Projects\ClassLibrary3\ClassLibrary3\bin\debug\classlibrary3.dll");

var fullName = assembly1.FullName;

// The assembly is present in the AppDomain's assembly collection...
var assembliesLoaded = new List<Assembly>(AppDomain.CurrentDomain.GetAssemblies());
Debug.Assert(assembliesLoaded.Any(assembly => assembly.FullName == fullName));

// ...but loading it by display name probes the Load context and throws FileNotFoundException.
var assembly2 = Assembly.Load(fullName);

The assembly is part of the AppDomain’s assembly collection, but you cannot load it by name using Assembly.Load. In the above code we get a FileNotFoundException, which is the underlying exception that caused the cast to fail in the remoting example.

What’s the fix?

Well, we have the assembly object in the collection of domain assemblies, so we can just return it if the normal assembly resolution fails. If we add the following code in front of the Assembly.Load, then the assembly is loaded as expected.

AppDomain.CurrentDomain.AssemblyResolve += (o, eventArgs) =>
{
    // If normal resolution fails, hand back the already-loaded assembly with a matching name.
    var assembliesInDomain = new List<Assembly>(AppDomain.CurrentDomain.GetAssemblies());
    return assembliesInDomain.FirstOrDefault(
        assembly => assembly.FullName == eventArgs.Name);
};

Essentially, we take control of the load resolution in the case where the CLR is trying to protect us by using the different contexts.


Some UWP and a little ClojureScript

I’ve been following Lucian Wischik’s interesting series of posts on writing Windows 10 applications in .NET. While I was on holiday he posted an article on writing NuGet packages for UWP, but unfortunately the article was removed from the web site when I was only halfway through it. The placeholder mentions that the NuGet team were going to talk about the issues and, sure enough, a post has now appeared on NuGet support for UWP in NuGet 3.1. In summary, you will now be able to declare package references using a project.json (in much the same way as ASP.NET 5 projects). This will also use the transitive references idea, where you list only the top-level packages that you require and NuGet takes care of fetching their dependencies (and also resolves the appropriate versions when there are conflicts), in contrast to the current model where dependencies of dependencies make their way into your packages.config.
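For reference, a UWP project.json along those lines looks roughly like this (my own approximation – the package names and versions are just illustrative), listing only the top-level dependencies:

{
  "dependencies": {
    "Microsoft.NETCore.UniversalWindowsPlatform": "5.0.0",
    "Newtonsoft.Json": "7.0.1"
  },
  "frameworks": {
    "uap10.0": {}
  },
  "runtimes": {
    "win10-x86": {},
    "win10-x64": {},
    "win10-arm": {}
  }
}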

In other news, David Nolen has just announced on his blog that ClojureScript, the variant of Clojure that targets JavaScript, can now compile itself. Having been through the bootstrapping process myself for both ML and Common Lisp compilers, I know it’s always very satisfying when you can get rid of the other implementation and finally get the compiler to work on itself. His post is rather cool, as it embeds some of the implementation of the translation process into the post itself, and you can run these examples in the browser that you are using to read it. Some of this work sits on top of reader conditionals, one of the two key features recently added in Clojure 1.7 (the other being transducers).

Finally, if Windows Update repeatedly fails to install Windows 10, then this post has the answer – or rather, it works for some people. For me, the relaunched Windows Update failed again, so in the end I needed to download it manually by following this link.


C# await inside catch and finally leads to some interesting semantics

I remember when await was introduced into C# – the feature took us away from the callback hell that was developing, but it also required some understanding of the underlying code generation to really see what was going on in simple-looking code. The abstraction led to a few rather confusing effects, such as an await always throwing the first exception of an AggregateException.

At the time people were interested in why you couldn’t use await inside catch or finally blocks, and the answer always came down to confusing semantics. In the recently released C# 6, await is now available in catch and finally blocks, so the question is how this has been achieved without breaking the existing semantics. I think the answer is that the semantics of some C# forms have now been changed in a way that again requires knowledge of the underlying transforms to understand.

Let’s take the async version of the standard thread abort construct. The call to Abort() notionally throws a ThreadAbortException, which can be caught and processed by catch and finally blocks, but which has the interesting semantics of being re-raised when the processing block finishes (unless you reset the abort).

static async Task AsyncVersion()
{
    try { Thread.CurrentThread.Abort(); }   // throws ThreadAbortException
    finally { Console.WriteLine("Can't stop me!!"); }
    Console.WriteLine("After abort");
}

We can explain the behavior of this code fairly easily. The Abort throws the ThreadAbortException, the finally block runs and prints “Can’t stop me!!”, and then the exception is re-raised when the finally block finishes. Hence “After abort” is never printed.

Now change the finally block to

Console.WriteLine("Can't stop me!!");
await Task.Run(() => Console.WriteLine("Next"));

When you run this version of the code nothing is printed, and it’s hard to understand why without looking at the code generation. The async method is translated to a class that implements a state machine and, crucially, the finally is no longer a finally in the generated IL code, but is instead translated to a catch block that handles all exceptions.

Reflector shows us:

catch (object obj1)
{
    obj2 = obj1;
    this.<>s__1 = obj2;
}

i.e. the finally isn’t processed as a CLR finally, but is instead translated to a catch which re-enters the state machine processing. Of course, this means that the ThreadAbortException, which is re-raised at the end of any catch block, is going to be re-raised too early, skipping the execution of the code in the finally block.
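To see why nothing at all is printed, it helps to write out by hand the rough shape that the compiler produces for an awaiting finally – this is an approximation of mine, not the actual generated state machine: the protected body runs in a try, any exception is captured into a local, the finally body then runs outside any CLR finally, and the captured exception is rethrown afterwards.

// Needs System, System.Runtime.ExceptionServices, System.Threading and
// System.Threading.Tasks. A hand-written approximation, not the real
// compiler output.
static async Task Approximation()
{
    ExceptionDispatchInfo pending = null;
    try
    {
        Thread.CurrentThread.Abort();
    }
    catch (Exception ex)
    {
        // The ThreadAbortException lands here, and because this is now a
        // catch block the abort is re-raised as soon as the block ends,
        // so nothing below this point ever runs.
        pending = ExceptionDispatchInfo.Capture(ex);
    }

    // The original finally body, now outside any CLR finally.
    Console.WriteLine("Can't stop me!!");
    await Task.Run(() => Console.WriteLine("Next"));

    // Rethrow whatever the try block threw.
    pending?.Throw();

    Console.WriteLine("After abort");
}

With an ordinary exception the pending rethrow would preserve the original behavior, but with a thread abort the re-raise at the end of the catch wins, which matches what we observe.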

I can see that it is a benefit to be able to await in catch and finally blocks, and I guess we’ve had to accept in the past that C# is not the assembly language of .NET. However, the code generation approach to implementing these high-level constructs means that sometimes we can see through the abstraction, as we can here. I’m not quite sure how important that is.


Advanced Topics in Types and Programming Languages

Advanced Topics in Types and Programming Languages edited by Benjamin Pierce

I’d been meaning to read this book for a while, and managed to buy it with various present money after Xmas. It consists of ten chapters by different authors on ten topics ranging from type systems to proof-carrying code.

The first three chapters discuss various type systems – substructural type systems, which control the use of variables in the typing context and lead to linear typing in one variant; dependent types, where you are allowed to do calculations at the type level, an idea being popularised by languages such as Idris; and a very interesting chapter on effect systems, which discusses the region-based memory management work of Tofte and Talpin. In the latter work, the type of a program contains information about allocation contexts, which can allow the runtime to manage dynamic memory using a stack-based approach.

The next two chapters cover the low-level use of types. Typed assembly language allows assembly code to be typed so that properties such as memory safety can be checked, and proof-carrying code allows an executable to contain a proof of its own safety, together with enough information for that safety to be checked by the target that is going to execute the code.

The next two chapters cover methods for reasoning about programs using their types, followed by a fantastic section on types for programming in the large. This section contains a very interesting chapter on the design considerations behind ML-style module systems, as well as a chapter on type definitions.

The last chapter is on type inference and discusses the typing of ML programs using constraint solving, rather than the usual unification-based Hindley-Milner approach.

The book contains loads of interesting ideas, but some of the theory is perhaps a little off-putting. If you really want to get to grips with the details, there are lots of exercises, or you can choose to simply skim these and still keep up with the exposition.


/dev/summer had some good talks as usual

It was /dev/summer at the weekend, where there were a few very interesting talks. Rather than going to the Clojure or Haskell tracks, this time I attended the double session on Go and a couple of sessions by Gleb Bahmutov.

The Go session took a little while to get going, with the first 45 minutes hardly touching the language after some questions around the directory structure of a Go project on disk. This meant that we didn’t get to see any large examples, though the witty presentation and insightful answers to the questions meant I came away with basic Go knowledge that I can now try to put into practice.

The presentations by Gleb were full of interesting ideas. The first looked at the issues around npm modules and their dependencies (though the ideas apply more generally), and how semantic versioning typically isn’t honoured by modules. The clever idea of the presentation was a tool that tests semantic compatibility by running the unit tests against both the old and new versions of a module. Failures in the tests imply incompatibility, and by having users send their results to a central server it is possible to check how safe a module upgrade is going to be. The second presentation looked at using code snippets inside Chrome to gather metrics on a page, such as load time and render time, and also showed us how to combine this with the various profiling tools. Having not used these tools before, I found this a great introduction to speeding up a web application.

I also attended the lightning talks, which were a useful and informative set covering general open source topics and browser testing.

It was a little strange to come across these talks. I’d just been reading about module dependencies in the .NET NuGet world after the publication of this post, which talks about some changes to NuGet to support the new CoreCLR world, where the number of profiles is too great to simply name each profile with an integer value and package authors will instead need to declare their dependencies. It also links to the bait and switch technique. I was also doing some interesting reading on the new WebAssembly technology and came across this great interview with Brendan Eich that explains things well.

Two other interesting blog posts: a talk on cloud-scale event processing using Rx, which mentions some of the changes required to scale from the desktop version to a server-based version – the features needed included a way to checkpoint a query. This post on Rust’s type system is also interesting.
