I read the first edition of this book several years ago, but the second edition is far more interesting. The book gives a high-level view of remoting with tips on using the technology in real-world scenarios, while the second half plumbs the low-level details, showing how a client-side call on a remoted object passes through the transparent proxy, gets converted into an IMessage, and then passes through the stack of message sinks, formatters and channel sinks to arrive at the server. It then goes into detail about how the call is dispatched at the server and how the results are marshalled back to the client.
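The shape of that pipeline can be sketched in a few lines. This is a toy illustration in Python, not C#; the names (Message, Sink, Dispatcher) are invented for the sketch and are not the actual .NET Remoting types such as IMessage or IMessageSink.

```python
# Toy sketch of a remoting-style pipeline: a client call is captured as
# a message, passed through a chain of sinks, and dispatched on the
# server side. All names here are invented for illustration.

class Message:
    def __init__(self, target, method, args):
        self.target = target   # which remote object
        self.method = method   # operation name
        self.args = args       # parameter values

class Sink:
    """A stage in the chain; each sink forwards to the next one."""
    def __init__(self, next_sink):
        self.next_sink = next_sink
    def process(self, msg):
        return self.next_sink.process(msg)

class LoggingSink(Sink):
    """Example sink: observe the message, then pass it along."""
    def process(self, msg):
        print(f"calling {msg.method} on {msg.target}")
        return super().process(msg)

class Dispatcher:
    """Terminal stage: locate the target object and invoke the method."""
    def __init__(self, objects):
        self.objects = objects
    def process(self, msg):
        obj = self.objects[msg.target]
        return getattr(obj, msg.method)(*msg.args)

class Calculator:
    def add(self, a, b):
        return a + b

# Client-side call -> message -> sink chain -> server-side dispatch.
chain = LoggingSink(Dispatcher({"calc": Calculator()}))
result = chain.process(Message("calc", "add", (2, 3)))  # result == 5
```

In the real stack the formatter and channel sinks sit between the logging stage and the dispatcher, serialising the message for the wire; the sketch collapses that to a direct call.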
There are many examples of how objects can be added to this pipeline to customise things, including compressing the data stream, encrypting the data stream and responding to the MIME type of the result, together with an example of developing a transport that uses SMTP. There is also a whole section on versioning, which is a useful read for anyone who is going to deploy this technology for real.
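The compression case gives a feel for how such a customisation works: one sink transforms the serialised message on the way out, and a matching sink undoes it on the way in. A minimal sketch in Python, with zlib and pickle standing in for whatever formatter and channel the real stack would use:

```python
# Hedged sketch of a compressing sink pair. The function names are
# invented; a real .NET channel sink would implement the sink interface
# and operate on the formatter's output stream instead.
import pickle
import zlib

def client_compress_sink(payload: dict) -> bytes:
    """Client side: serialise the request, then compress the bytes."""
    return zlib.compress(pickle.dumps(payload))

def server_decompress_sink(data: bytes) -> dict:
    """Server side: decompress, then deserialise back to the request."""
    return pickle.loads(zlib.decompress(data))

request = {"method": "get_report", "args": ["2003-Q4"]}
wire = client_compress_sink(request)
roundtripped = server_decompress_sink(wire)  # == request
```

Because each sink only sees bytes in and bytes out, the pair can be dropped into the chain without the client or server code changing, which is the point of the pipeline design.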
My only quibble was with the quote in the book: "Both CORBA and DCOM have employed distributed reference counting". This is plain wrong. CORBA never had distributed reference counting according to the standard. Some of the language bindings, such as the C++ binding, used local reference counting to clean up the client-side stubs, but destruction of objects needed to be an explicit operation on the object itself or on the object adapter that was responsible for managing it.
Digging into the remoting architecture reminded me of the CORBA work that I did in the late 90s, when I worked hard at implementing a Common Lisp ORB for the Lisp vendor I worked for at the time; we collaborated with Franz on the OMG Common Lisp language mapping. The actual means of getting messages across to the server was very much like the process that happens inside .NET. A request package was generated that contained details of the target object together with the name of the operation to be invoked and the values of the parameters. This request was submitted to another object, which marshalled the request package and sent it across TCP to the target server. When the result came back, it was placed into the initial request package as a response. As reflection wasn't generally available, the link between the initial function call and the generation of the request package was a stub, generated by compiling an IDL definition into functions that did the necessary construction.

The sink chain had an equivalent too: interceptors could be set up to run either on the request/response packet or on the marshalled data stream, before the client sent the data or after it received it, and likewise on the server. The mechanism for finding the target object was a walk down a tree of object adapters; an object reference essentially gave a search path through this tree. This was neat because the non-root adapters could be started lazily. The adapters were responsible for generating the servant (the object that would handle the request) and for deleting it, either under the control of an API call from a client or by some other process. There was nothing built in to do the lease sponsoring of .NET or the distributed GC of DCOM.
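The request-package flow can be sketched as follows. This is a toy in Python rather than Lisp, with invented names and a made-up byte layout; a real ORB would marshal to GIOP/IIOP over TCP, and the stub would be generated from the IDL rather than written by hand.

```python
# Rough sketch of the stub -> request package -> marshal -> transport
# flow described above. The encoding is a toy, not GIOP.
import struct

def build_request(object_key: bytes, operation: str, params: list) -> dict:
    """What a generated stub does: package up the call."""
    return {"target": object_key, "op": operation, "params": params}

def marshal(request: dict) -> bytes:
    """Flatten the request package into a length-prefixed byte stream."""
    op = request["op"].encode()
    body = repr(request["params"]).encode()
    return (struct.pack(">H", len(request["target"])) + request["target"]
            + struct.pack(">H", len(op)) + op
            + struct.pack(">I", len(body)) + body)

def add_stub(object_key, a, b, send):
    """A stub compiled from an IDL operation add(a, b) might look like
    this: build the request, marshal it, hand it to the transport.
    Interceptors could run on the request dict or on the bytes."""
    req = build_request(object_key, "add", [a, b])
    return send(marshal(req))

def loopback(wire: bytes):
    """Stand-in transport: unmarshal and dispatch locally instead of
    sending over TCP, just to close the loop for the example."""
    klen = struct.unpack_from(">H", wire, 0)[0]
    off = 2 + klen
    olen = struct.unpack_from(">H", wire, off)[0]
    op = wire[off + 2:off + 2 + olen].decode()
    off = off + 2 + olen
    blen = struct.unpack_from(">I", wire, off)[0]
    params = eval(wire[off + 4:off + 4 + blen].decode())  # toy only
    if op == "add":
        return params[0] + params[1]

result = add_stub(b"calc", 2, 3, loopback)  # result == 5
```

The object key at the front of the wire format plays the role of the object reference: on a real server it would be the search path the object adapter tree uses to locate or lazily create the servant.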
The .NET scheme is certainly more flexible than the CORBA scheme in terms of the message passing, and it offers the possibility of working across more transport types. All very interesting.