What do I need to know about Bitcoin?

There have been several bitcoin discussions at work of late… what actually is Bitcoin? How can the underlying blockchain technology be used for other purposes? I did the Coursera course on the topic some time ago, but these questions pushed me to go a little deeper into the world of Bitcoin. I thought I’d try to summarise the articles I found useful here.

First, the book associated with the Coursera course has just been made available. I thought this book was a great read. It covers the basic cryptography that you need to understand, and gives some example cryptocurrencies which don’t quite work, before moving on to explain how the original bitcoin design fixed these issues. The course is very broad, with lectures discussing the motivation of the miners, who need to devote large amounts of computing power [and hence resources] to the network, in the hope of being the node that generates the next accepted block of the transaction chain and can therefore claim the reward for the block, including the associated transaction fees.
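The mining race those lectures describe boils down to a hash puzzle: keep trying nonces until the block's double-SHA-256 hash falls below a target. A toy Python sketch of the idea (this is not the real block header format, and a string prefix stands in for the real difficulty target):

```python
import hashlib

def mine(block_data: bytes, difficulty_prefix: str = "0000") -> int:
    """Search for a nonce whose double-SHA-256 hash of the candidate
    header starts with the required prefix - a toy stand-in for
    bitcoin's proof-of-work target check."""
    nonce = 0
    while True:
        header = block_data + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce
        nonce += 1

# A two-hex-digit prefix is found after ~256 tries on average; the real
# network tunes the target so the whole network needs ~10 minutes.
nonce = mine(b"example block", "00")
```

Each extra hex digit of required prefix multiplies the expected work by 16, which is why controlling the difficulty parameter controls the block interval.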

The course also covers the tricks of representation which can be used to allow the block chain to represent the ownership of various assets. I’d overlooked this aspect of the system when I first did the course, but the ability to associate bits of extra data with a transaction has led to extra uses for the block chain. For a small transaction fee that is paid to the miners, a user can record information, say a cryptographic hash of a document, into the block chain, and the permanent ledger property of the block chain can be used to prove that the document was published at a time in the past.
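The proof-of-publication trick is easy to sketch: commit to a document by hashing it, and record only the hash in the chain. A minimal Python illustration (the hash would typically travel in a data-carrying output such as OP_RETURN, though the exact encoding varies by scheme):

```python
import hashlib

def document_commitment(document: bytes) -> str:
    """SHA-256 digest that could be recorded in a transaction as a
    proof-of-existence commitment. The document itself never appears
    in the block chain; only the fixed-size hash does."""
    return hashlib.sha256(document).hexdigest()

h = document_commitment(b"My contract, v1")
# Anyone holding the same document bytes can recompute the hash and
# check the block's timestamp; any change to the document changes h.
```

The permanent ledger then proves the committed document existed no later than the block that contains the hash.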

For an interesting take on the history, I next read Nathaniel Popper’s Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money. This book covers the story, from the initial idea and paper published on a mailing list by Satoshi Nakamoto, to the set of programmers who picked up on it and used their computers for the initial mining efforts. Eventually others invested in the idea, offering real world items in exchange for the virtual currency, and after some time there were investments in activities such as brokering firms who took the risk of converting real currencies into virtual bitcoins. The book covers the history of Silk Road, the illegal drugs site, which used the relative anonymity of bitcoin to allow the trade of illicit goods. Popper’s book is interesting – it narrates the history in detail [which gets a little tiresome at times] and also tries to explain how a virtual thing can actually have a real world value. The initial miners’ rewards for their mining efforts, including the blocks mined by the creator of bitcoin, are now worth considerable amounts of real world currency.

Obviously you need to know that your transactions are safe, and there are loads of papers out there that analyse the safety of the currency. I enjoyed this MSc thesis, which used Google App Engine to do various experiments. For an idea of the representational tricks that mean you can use the block chain for recording ownership of things such as cars, OpenAssets is one encoding you could take a look at.

Of course, you probably want to have a play to understand all of this. I started out trying to find where I could get some bitcoins. There are various faucets that occasionally give away free bitcoins, and many online bitcoin wallet management services, but I wasn’t sure if I really wanted to sign up. Fortunately there is a test version of the bitcoin block chain that is used for testing the various clients – so you can do transactions using a set of bitcoins that you can get for free from the Testnet faucet, though the coins have no actual value.

You’re also going to need a library to program against bitcoin, and for this I selected NBitcoin, whose author has a free book on how to use it. It is available as a NuGet package so you can easily use it from Visual Studio.

First we need to get a private/public key pair as an identity, from which we can get an address that we can type into the testnet faucet to get some bitcoins. We generate a new key and serialize it to a file for use later. We can then read the key back and generate a bitcoin address. We can type that into the faucet, and we’ll be notified of the pending transfer.
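NBitcoin does the address derivation for us, but the final step is easy to sketch by hand: Base58Check encoding of a version byte plus the key hash. A Python sketch (in the real scheme the payload is the RIPEMD-160 hash of the SHA-256 of the public key, which I skip here, and the testnet uses a different version byte from mainnet's 0x00):

```python
import hashlib

# Base58 deliberately omits 0, O, I and l to avoid transcription errors.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: bytes, payload: bytes) -> str:
    """Base58Check: version byte + payload + 4-byte double-SHA-256
    checksum, all re-expressed in base 58."""
    data = version + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    num = int.from_bytes(data + checksum, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = ALPHABET[rem] + out
    # leading zero bytes are lost in the integer, so restore them as '1's
    pad = len(data + checksum) - len((data + checksum).lstrip(b"\x00"))
    return "1" * pad + out
```

The checksum is why a mistyped address is rejected rather than silently sending coins into the void.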

There are lots of websites that let you look up data from the block chain. I can use one to look at the coins owned by my key and at the details of the transaction that populated it. We can see the block chain process at work here. The transaction is initially unconfirmed as it makes its way across the peer-to-peer network, until a miner picks it up and it becomes part of the block chain, though you need to wait until the block containing your transaction is several levels from the chain’s head before you can be confident that it will remain [typically 6 levels, with one level every 10 minutes, on the real block chain].
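The six-level rule of thumb comes from the attacker-catch-up calculation in the original bitcoin paper, which models the attacker's progress as a Poisson process. A Python transcription of that calculation:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of the hash
    rate ever overtakes the honest chain from z blocks behind
    (the calculation from the bitcoin whitepaper)."""
    p = 1.0 - q
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

# With 10% of the hash rate, waiting 6 blocks pushes the attacker's
# chance of rewriting your transaction down to a fraction of a percent.
risk = attacker_success(0.1, 6)
```

Each extra confirmation shrinks the risk roughly geometrically, which is why exchanges simply pick a depth matched to the value at stake.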

The NBitcoin library is powerful. It contains a lot of utilities to work with various block chain information sources, and contains extensions for dealing with asset management via coloured coins. I quickly tested it out using some C# to transfer some bitcoins associated with the key I had generated to another key. With bitcoin, a transaction spends the outputs of previous transactions, so I needed to fetch the transaction using the transaction id of the transfer from the blockr block chain information site. I split the output of the transaction that gave the money to me into three outputs: the majority gets transferred back to my account, a second small amount goes to the second account, and I add a little information to the transaction that becomes part of the block chain ledger and could be used to record asset transfer. Any remaining bitcoin is used as a fee for a miner, encouraging one of the miners to include the transaction in the block chain. We can see the results on one of the chain exploration sites.
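The fee is simply whatever input value the outputs don't account for; nothing in the transaction states it explicitly. A quick sanity check of that arithmetic in Python, with made-up amounts rather than the ones from my actual transfer:

```python
# Amounts in satoshi (1 BTC = 100_000_000 satoshi). Figures are invented
# for illustration, not taken from the real testnet transaction.
input_value = 100_000_000   # the faucet output being spent
to_self     =  89_000_000   # change sent back to my own address
to_other    =  10_000_000   # payment to the second key
op_return   =           0   # the data-carrying output carries no value

# Whatever is left over is implicitly claimed by whichever miner
# includes the transaction in a block.
fee = input_value - (to_self + to_other + op_return)
```

Forgetting a change output is a classic mistake: the entire unaccounted remainder becomes the miner's fee.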

In the C# code I needed to access the peer network to get my transaction to the miners. You can get a bitcoin client from this site. Running the bitcoin client for a while,

bitcoind -testnet

generated a list of peers in the peers.dat file which I could then use with the NBitcoin library to push my transaction out to the world. Alternatively the library can use a running local node, but I didn’t want to leave the node running so instead decided to use the peer data file. There’s lots of documentation here to discover how you can use the other utilities included in the download.

The block chain idea is fascinating – a distributed ledger with an immutable history – and there are many people trying to find uses for it. One example is the notion of side chains, which manage their own content but anchor it to the real block chain by recording a cryptographic hash of that data there. There’s loads more experimenting to do, and I’m sure there are many interesting discoveries to come.


Posted in Uncategorized | Leave a comment

Your .NET applications can start being more native

I was interested in trying out the dotnet command line interface and seeing how it all works. Microsoft have after all just told us that they will be delaying the release of DNX to allow its integration into the CLI model. You can download a build of the command line tools from here and it’s really easy to get going.
You can generate a demonstration “hello world” project using
dotnet new
You then get all of the packages it depends on using
dotnet restore
And then build it using
dotnet compile
Now it’s business as usual and you can run it as normal
cd bin\Debug\dnxcore50
dotnet2.exe [so named because I was in a folder named dotnet2 when I created the project]
What excited me more about the command line tools, though, is that they have now started offering the opportunity to compile .NET applications to native code. Be warned though that this is very early functionality and they only guarantee support for small hello world applications.
You can choose to compile using an ahead of time version of the normal JIT compiler [on x64], or you can go via generated C++ code. I wanted to see the kind of native code that can be produced and therefore chose the latter option.
dotnet compile --native --cpp
[Note that you have to be in a VS2015 x64 Native Tools command window to get the right tools available on the PATH]
This generates an executable in the native subdirectory bin\debug\dnxcore50\native,
which runs very quickly – there’s no jitting before the application starts running, and start up is noticeably quicker on my fairly old laptop.
The demo application is a very simple hello world, and you can find the emitted C++ in the obj directory.
I was interested in how the GC got linked into the project, particularly as I had heard of
CoreRT and couldn’t see any appropriately named dll when I attached windbg to the running executable.
I therefore modified the code to generate garbage [and built it in a folder named dotnet, so I ended up with an application named dotnet.exe]
public static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
    for (int i = 0; i < 100000; i++)
    {
        var x = new object();
    }
    Console.ReadLine();
}
The generated Main method takes the form
#line 8 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void dotnet::ConsoleApplication::Program::Main(System_Private_CoreLib::System::String__Array* args){int32_t i=0; System_Private_CoreLib::System::Object* x=0; uint8_t _l2=0; _bb0: {
#line 8 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
#line 9 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void* _1=__load_string_literal("Hello World!"); System_Console::System::Console::WriteLine_13((System_Private_CoreLib::System::String*)_1);
#line 10 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
i=0; { goto _bb28; }; } _bb16: {
#line 11 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
#line 12 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void* _8=__allocate_object(System_Private_CoreLib::System::Object::__getMethodTable()); System_Private_CoreLib::System::Object::_ctor((System_Private_CoreLib::System::Object*)_8); x=(System_Private_CoreLib::System::Object*)_8;
#line 13 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
#line 10 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
int32_t _10=i; int32_t _11=_10+1; i=_11; } _bb28: {
#line 10 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
int32_t _3=i; int32_t _4=_3<100000; _l2=_4; int32_t _6=_l2; if (_6!=0) { goto _bb16; }; } _bb40: {
#line 14 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
void* _7=System_Console::System::Console::ReadLine();
#line 15 "C:\\Users\\clive.tong\\Desktop\\dotnet\\Program.cs"
return; } }
Unfortunately, running the application with a debugger attached, I got an access violation:
00007ff6`c2a4a460 4d8b01          mov     r8,qword ptr [r9] ds:0000003c`00032000=????????????????
0:000> k
# Child-SP          RetAddr           Call Site
00 0000003c`7715f3c0 00007ff6`c2a4ae60 dotnet!WKS::gc_heap::mark_object_simple1+0x180
01 0000003c`7715f430 00007ff6`c2a2e850 dotnet!WKS::gc_heap::mark_object_simple+0x1e0
02 0000003c`7715f480 00007ff6`c2a283be dotnet!WKS::GCHeap::Promote+0x90
03 0000003c`7715f4b0 00007ff6`c2a21aa9 dotnet!GcBulkEnumObjects+0x2e
04 0000003c`7715f4e0 00007ff6`c2a14c6c dotnet!Module::EnumStaticGCRefs+0x69
05 0000003c`7715f540 00007ff6`c2a4b233 dotnet!RuntimeInstance::EnumAllStaticGCRefs+0x6c
06 0000003c`7715f5a0 00007ff6`c2a43606 dotnet!WKS::gc_heap::mark_phase+0x193
07 0000003c`7715f630 00007ff6`c2a432e3 dotnet!WKS::gc_heap::gc1+0xd6
08 0000003c`7715f690 00007ff6`c2a2d823 dotnet!WKS::gc_heap::garbage_collect+0x753
09 0000003c`7715f6f0 00007ff6`c2a5a629 dotnet!WKS::GCHeap::GarbageCollectGeneration+0x303
0a 0000003c`7715f740 00007ff6`c2a2c4ee dotnet!WKS::gc_heap::try_allocate_more_space+0x1b9
0b 0000003c`7715f780 00007ff6`c2a194ac dotnet!WKS::GCHeap::Alloc+0x5e
0c 0000003c`7715f7b0 00007ff6`c29d9b53 dotnet!RhpNewFast+0x5c
0d 0000003c`7715f7e0 00007ff6`c29d3132 dotnet!System_Private_CoreLib::System::Runtime::InternalCalls::RhpNewFast+0x13 [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 39893]
0e 0000003c`7715f810 00007ff6`c29ee993 dotnet!System_Private_CoreLib::System::Runtime::RuntimeExports::RhNewObject+0x92 [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 37660]
0f 0000003c`7715f890 00007ff6`c29dc783 dotnet!RhNewObject+0x13 [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 37666]
10 0000003c`7715f8c0 00007ff6`c29d26fe dotnet!dotnet::ConsoleApplication::Program::Main+0x53 [c:\users\clive.tong\desktop\dotnet\program.cs @ 12]
11 0000003c`7715f930 00007ff6`c29ef1ea dotnet!dotnet::_Module_::StartupCodeMain+0x4e [c:\users\clive.tong\desktop\dotnet\obj\debug\dnxcore50\native\dotnet.cpp @ 37467]
12 0000003c`7715f980 00007ff6`c2a62718 dotnet!main+0x4a [c:\users\clive.tong\desktop\dotnet\program.cs @ 5676]
13 (Inline Function) ——–`——– dotnet!invoke_main+0x22 [f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl @ 74]
14 0000003c`7715f9d0 00007ff9`d5872d92 dotnet!__scrt_common_main_seh+0x124 [f:\dd\vctools\crt\vcstartup\src\startup\exe_common.inl @ 264]
15 0000003c`7715fa10 00007ff9`d5b39f64 KERNEL32!BaseThreadInitThunk+0x22
16 0000003c`7715fa40 00000000`00000000 ntdll!RtlUserThreadStart+0x34
Notice that the source locations show that many of the frames are compiled versions of the C++ code that we found in the obj directory, while the other parts of the runtime are linked in and have no source location.
Running the command line with the -v option shows the files that are passed to the C++ compiler, and lots of them come from the dotnet sdk directory:
Running C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\..\..\VC\bin\amd64\link.exe "/NOLOGO" "/DEBUG" "/MANIFEST:NO" "/IGNORE:4099" "/out:C:\Users\clive.tong\Desktop\dotnet\bin\Debug\dnxcore50\native\dotnet.exe" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" "C:\Program Files\dotnet\bin\sdk\PortableRuntime.lib" "C:\Program Files\dotnet\bin\sdk\bootstrappercpp.lib" "/MACHINE:x64" "C:\Users\clive.tong\Desktop\dotnet\obj\Debug\dnxcore50\native\dotnet.obj"
The PortableRuntime here is the actual CoreRT code. If you clone the CoreRT project and build using the build.cmd script, you can indeed make your own version of this SDK.
It includes code such as a garbage collector and a fairly sophisticated runtime, containing features such as thread hijacking, which is implemented in this file. There are loads of interesting comments if you browse the code.
It will be interesting to see how this portable runtime turns out… it’s certainly true that no jitting makes start up time much better, but it isn’t clear to me how features like Reflection are going to be supported. It will also be interesting to see how the debugging experience works when debugging AOT compiled code.
While we are on the subject of .NET, I’d like to recommend these talks from NDC London: a talk about the new CLI, and a talk on the history of ASP.NET which goes into some detail about how things have changed over the years. There is also this talk on the implementation of SignalR from several years ago.

Another good DevWinter

It was DevWinter last weekend, and as usual there was an interesting set of talks.

The “my adventure with elm” talk was good. The speaker gave a very brief introduction to Reactive programming, a brief introduction to the Elm language, and then implemented the Snake-and-apples game in Elm inside a browser on one of the Try Elm sites. Apart from a couple of times when he needed to uncomment some pre-written functions, he wrote the whole application in front of us. This was a good introduction to the language and a very well presented talk.

The “anatomy of a catastrophic performance problem” talk was also very good. A witty presentation of a real life performance problem, which showed how frequently we developers think we’ve analysed a problem and hence push out a fix, without trying the fix on a reproduction [which wasn’t available in this case].

In the afternoon, I attended the double session on ClojureScript. This wasn’t very hands on in the end, and the presenter spent a lot of time discussing the Clojure language and the various benefits of functional styles of programming, as well as selling the advantages of transpiling to JavaScript rather than writing JavaScript in the first place. The presenter did use the lein Figwheel plugin to get a REPL connected to a ClojureScript session running inside a browser, which also reloads when modifications are made to the source. This is all built using the lein tool, and getting started is as simple as typing:

lein new figwheel myproject
cd myproject
lein figwheel
… wait for compilation to happen
Open a browser at http://localhost:3449
… and the REPL will connect to the browser
(js/alert "Hello")
… and the alert dialog is raised inside the browser

If you then open the developer console in your browser and make an edit to any of the project’s ClojureScript files, you will see the modified code reloaded into the browser session allowing quick development turnaround.

The best talk of the day was “my first unikernel with mirage os” by Matthew Gray. This was a hands on session based on the code in the speaker’s GitHub repository. I’d been meaning to play with Mirage for some time, as it is a perfect match with some of my interests – operating systems written in functional languages [which I first read about when I spent a year working in industry before university, where I passed some free time playing with LispKit Lisp] and hypervisors. The idea is that you can take an application written in OCaml and produce a unikernel from it, and the Mirage team have made it very easy to deliver a unikernel that runs on top of Xen. They have also implemented various utilities, such as a TLS library and a web server, that your application can use. Matt Gray’s repository contains a Vagrant script that can be used to get an Ubuntu development environment suitable for playing around with Mirage. Once you have this running inside VirtualBox, it is easy to get the various examples running.

The speaker gave a brief overview of Unikernels and then helped the audience to get going. There was another talk on Mirage in the afternoon, but I didn’t attend that.

What did I enjoy about DevWinter? The range of the talks, on all kinds of topics. Unlike the typical Microsoft event I go to, the talks cover a range of topics that are interesting and are aimed at what might happen in the future. I also very much enjoyed the developer experience talks. A very nice venue makes this a great way to spend a Saturday twice a year.


Actors have become popular again

Reactive Messaging Patterns with the Actor Model: Applications and Integration in Scala and Akka by Vaughn Vernon

I very much enjoyed this book’s discussion of Actors and the reasons why the Actor model is well matched with modern applications, though I enjoyed the book less when it got into the various patterns.

Chapter one talks about the design of modern Reactive applications and then discusses the origin of the Actor model. It’s very good. Chapter two gives a brief introduction to the Scala programming language and the Akka library… actor systems, supervision, remoting and clustering are all explained really well. Chapter three discusses the need for scalability, with a good discussion of how clock speed is no longer increasing but instead we are given many more cores for our applications to use. It is hard to write multi-threaded code, and the Actor model provides a simple model where we don’t need to worry about memory barriers and locking – though the issues associated with locking, including deadlock and livelock, can still manifest themselves in an application at higher levels.
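The essence of the model is small enough to sketch: one mailbox per actor, drained by a single thread, so the behaviour function never runs concurrently with itself and needs no locks of its own. A toy Python version (nothing like Akka's real machinery; the become method is the classic actor primitive for swapping behaviour):

```python
import queue
import threading

class Actor:
    """Minimal actor sketch: a FIFO mailbox drained by one dedicated
    thread, so message handling is sequential per actor."""
    _STOP = object()  # sentinel used to shut the mailbox thread down

    def __init__(self, behaviour):
        self.behaviour = behaviour
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def tell(self, message):
        """Fire-and-forget send; never blocks on the receiver's work."""
        self.mailbox.put(message)

    def become(self, behaviour):
        """Replace the message-processing function for later messages."""
        self.behaviour = behaviour

    def stop(self):
        """Process everything already queued, then stop the thread."""
        self.mailbox.put(Actor._STOP)
        self.thread.join()

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is Actor._STOP:
                return
            self.behaviour(message)

received = []
a = Actor(received.append)
for i in range(5):
    a.tell(i)
a.stop()
```

Because only the mailbox thread ever touches the actor's state, user code gets sequential reasoning even though many actors run in parallel.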

From Chapter four onwards, the author lists a large number of patterns for using actors – patterns such as Message Router, Publish-Subscribe Channel, Scatter-Gather and Service Activator. There’s a discussion of each pattern and an example, written using the Akka test framework, that demonstrates it. I must admit that I found it hard work to go through each of the patterns, read the discussion and then understand some of the multipage examples… I will probably go back to some of the examples when I get more time.

I did learn a lot along the way. For example, I was interested in how a modern application’s need for messages to be processed “at least once” would map to actors, whose messages are by default delivered at most once. This is covered by the Guaranteed Delivery pattern, which uses the Akka Persistence and AtLeastOnceDelivery traits to store messages that haven’t been acknowledged, and which hooks the actor restore protocol to ensure that nothing gets lost if actors are restarted.
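The shape of the pattern is simple: keep every outgoing message until it is confirmed, and resend anything unconfirmed on demand. A Python toy with invented names, nothing to do with Akka's actual API (note the consequence: receivers must deduplicate by message id):

```python
class AtLeastOnceSender:
    """Sketch of at-least-once delivery. Messages are retained until
    confirmed; redeliver() resends the unconfirmed ones, so duplicates
    are possible and receivers must be idempotent or deduplicate."""
    def __init__(self, transport):
        self.transport = transport      # callable taking (msg_id, message)
        self.next_id = 0
        self.unconfirmed = {}           # msg_id -> message, the "journal"

    def deliver(self, message):
        msg_id = self.next_id
        self.next_id += 1
        self.unconfirmed[msg_id] = message
        self.transport(msg_id, message)
        return msg_id

    def confirm(self, msg_id):
        """Called when the receiver acknowledges msg_id."""
        self.unconfirmed.pop(msg_id, None)

    def redeliver(self):
        """Resend everything not yet acknowledged (e.g. after restart)."""
        for msg_id, message in self.unconfirmed.items():
            self.transport(msg_id, message)

sent = []
sender = AtLeastOnceSender(lambda mid, m: sent.append((mid, m)))
first = sender.deliver("a")
sender.deliver("b")
sender.confirm(first)
sender.redeliver()
```

In the real pattern the unconfirmed map is persisted, which is exactly what hooking the restore protocol buys you.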

After reading this book, I went and read one of the early PhD theses on the subject, Gul Agha’s Actors: A Model of Concurrent Computation in Distributed Systems. There are versions of this available for download if you Google. It’s interesting to read how the Actor model was pushed as a model for computation, with lots of effort in the thesis to give the model a rigorous semantics, and also interesting to see the emphasis on the become operation, which allows an actor to change its message processing function. This feature is not something that gets pushed in the modern interpretation of the model.

There is an open source project that brings the Akka framework to the .NET platform.

There is also Pony, a language and runtime with Actors at the very core. I arrived at it via this interesting talk. There is a lot of interesting technology associated with the language – such as a type system that makes it possible to prevent aliasing of message data, as well as interesting features of the runtime such as the way it detects unreferenced actors.


Sometimes it’s not good to build on other libraries

The implementation of async/await in C# is very complicated, and there are a number of places where the use of code generation and of underlying libraries shines through into the implementation. I did a lightning talk at work about this subject, which is available here.

In the talk, I mentioned that it is a shame that there isn’t metadata for items such as lambda expressions, requiring tools to infer that classes are the manifestation of lambda expressions by looking for patterns. It was interesting to see Joe Duffy mention that the encoding of lambda expressions as instances of compiler generated classes leads to some issues when trying to make compiled code fast, in another of his excellent posts on Midori.
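The desugaring in question is easy to mimic by hand. Here is a Python analogue of what the C# compiler does for a capturing lambda like `int x = 5; Func<int, int> f = y => y + x;` (the class and member names here are invented; the real compiler emits unspeakable names along the lines of `<>c__DisplayClass0_0`):

```python
class DisplayClass:
    """Hand-written stand-in for the compiler-generated closure class:
    each captured local becomes a field, and the lambda body becomes a
    method reading those fields instead of the original locals."""
    def __init__(self):
        self.x = 0          # the captured local `x`

    def lambda_body(self, y):
        return y + self.x   # the body `y => y + x`

# The enclosing method allocates the display class, writes the captured
# local into it, and hands out a bound method as "the delegate".
closure = DisplayClass()
closure.x = 5
f = closure.lambda_body
```

Because nothing in the emitted class says "I was a lambda", tools (and AOT compilers) are left pattern-matching on naming conventions, which is exactly the complaint.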

There’s a good talk on the design process of C# 7 by Lucian Wischik here and the ASP.NET Fall Sessions published on Channel 9 have some good talks on the future of the .NET platform including the command line tools and the future of packages. It was also good to have a talk on where the Kestrel web server fits into the picture.


Some good reads on Angular 1.x

I decided it was time to learn one of the many JavaScript SPA frameworks, and figured that it would be a good idea to have a look at Angular.

Of course there have been many posts in the past detailing problems in Angular, such as this one, this one and this one. There are also a number of articles discussing the good parts of the framework, such as this one and this one. There is also the rewrite as Angular 2.0 going on, which seems to be progressing well with its emphasis on TypeScript and ES6.

I haven’t had time to write anything large in this framework, but have been impressed with the design of the framework and the clever ideas that it incorporates. For me, it was also a chance to get to grips with modern JavaScript development using npm and the associated tools.

The first book I read was AngularJS: Up and Running: Enhanced Productivity with Structured Web Apps by Shyam Seshadri & Brad Green.

This book takes you through the facilities that the Angular framework offers, and is filled with examples that can be downloaded from one of the authors’ GitHub repositories. It starts with a quick introduction to Angular and a basic AngularJS Hello World. Chapter 2 goes into directives and controllers, concentrating on an app that displays a collection of data items using databinding and the ng-repeat directive. Chapter 3 covers unit testing using Karma and Jasmine. Angular, with its inbuilt dependency injection, makes it easy to unit test your controllers. Chapter 4 touches on Forms and Inputs, and then moves on to the subject of services, which are covered in more detail in chapter 5, where the authors discuss the differences between services and controllers.

Chapter 6 covers HTTP communication with the server and Chapter 7 discusses unit testing the server calls that your application is making. Chapter 8 covers filters and Chapter 9 covers how you unit test them. Chapter 10 discusses the ngRoute module, and how it helps you implement history and SEO for your application. Chapter 11 goes into directives in more detail and Chapter 12 covers how you unit test them.

Chapter 13 goes into directives in still more detail, along with the Angular life cycle, such as the digest cycle. This is followed by Chapter 14, which covers end-to-end testing using Protractor, and the final chapter gives some guidelines and best practices.

I thought this book was a good introduction, and it was good having the examples to play with in a browser on my laptop. I enjoyed the details about how the framework worked at the low level, and it was good that the authors demonstrated how you might convert a downloaded JavaScript component, a slider, into an Angular component.

While I was trying to get a deeper understanding of the Angular framework, I came across this sample chapter from the book Build Your Own AngularJS by Tero Parviainen. The chapter seemed to explain some of the details that I needed to understand Angular a bit better, so I bought the whole book. I’m very glad I did. The book is great from the point of view of understanding Angular, but also as a means of getting into JavaScript development. The author develops a variant of the Angular framework in a test driven fashion using npm and its associated libraries.

You get over a thousand pages of detailed JavaScript development which works through most of the features of Angular, starting with Scopes, which were covered in the sample chapter I linked to earlier, and moving all the way up to directives. The whole book is thoroughly interesting and the author explains the framework very well. Moreover, you can work through the examples too by downloading the code from the author’s GitHub repository.

There is really nothing like implementing something to get an understanding of how it all works. The book covers the whole dependency injection framework that underlies Angular in great detail, and mentions features like decorators that were only sparsely covered in the higher level book. The whole dependency injection idea makes the framework very customisable, and it’s great that the whole system is built upon this framework with all of the system components being made available by dependency resolution. Moreover, the Angular implementation of promises was much more understandable when we got into the low level implementation details – particularly promise rejection, and the underlying implementation of deferreds. I can’t recommend this book enough.
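Deferreds become much less mysterious once you write one down. A toy Python version of the $q-style deferred/promise split (synchronous and without chaining, purely to show the shape of resolve, reject and then; Angular's real implementation does much more):

```python
class Promise:
    """Read-only side: consumers register callbacks with then()."""
    def __init__(self):
        self.state, self.value, self.callbacks = "pending", None, []

    def then(self, on_fulfil, on_reject=None):
        self.callbacks.append((on_fulfil, on_reject))
        self._flush()   # fire immediately if already settled

    def _settle(self, state, value):
        if self.state == "pending":     # a promise settles only once
            self.state, self.value = state, value
            self._flush()

    def _flush(self):
        if self.state == "pending":
            return
        while self.callbacks:
            on_fulfil, on_reject = self.callbacks.pop(0)
            if self.state == "fulfilled" and on_fulfil:
                on_fulfil(self.value)
            elif self.state == "rejected" and on_reject:
                on_reject(self.value)

class Deferred:
    """Write-only side: the producer holds the Deferred and decides
    whether its promise is resolved or rejected."""
    def __init__(self):
        self.promise = Promise()

    def resolve(self, value):
        self.promise._settle("fulfilled", value)

    def reject(self, reason):
        self.promise._settle("rejected", reason)

results = []
d = Deferred()
d.promise.then(results.append, lambda r: results.append(("rejected", r)))
d.resolve(42)
```

The split is the whole point: code that only holds the promise can observe the outcome but can never settle it.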

There are also a number of good Angular podcasts around including Angular Air and Adventures in Angular. And for a very brief introduction there’s Dan Wahlin’s AngularJS in 20 minutes.


Some interesting .NET videos

Channel 9 have just published a great video with Mads Torgersen. In it he covers some of the features of C# 6, mentioning the new FormattableString class to which an interpolated string expression can be cast in order to access additional functionality. Most of the video is about some potential C# 7 features, which include pattern matching and value-type tuples whose type includes the names of the items in the tuple. There will be work on improving the performance of structs by allowing structures to be returned by reference. There is also talk of more research into reference types that do not include the null value. Torgersen also touches on the expression problem when discussing the emphasis on including more functional features in the language.

On a related .NET note, there’s this recording of David Fowler talking about the internals of the new ASP.NET. He follows a request as it makes its way through the new platform, including a dive into the internals of the new Kestrel web server, which uses libuv to allow the same code to work on Windows and Linux. The leastprivilege blog had a recent post that looks at authorisation in this new framework.

It’s also interesting that people are starting to submit pull requests for language and CLR changes – examples include pull requests like this and this.
