So many architectures, so little time

I’ve been reading a lot of compiler-related material lately.

First, a slide show on the optimisation of Haskell programs, where the author highlights the use of strictness analysis to detect the parts of a program that can be evaluated eagerly, and hence without allocating numerous thunks. The compiler can also use unpacking to avoid allocating intermediate boxed data items on the heap, which further improves performance.
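As a rough sketch of the two ideas (the types and names here are mine, not taken from the slides): the bang patterns make the fold's accumulator strict, so no chain of thunks builds up, and the UNPACK pragmas store the fields inline in the constructor rather than as pointers to boxed heap values.

```haskell
{-# LANGUAGE BangPatterns #-}

-- A strict, unpacked pair: the Double fields live inline in the
-- constructor instead of being pointers to boxed heap values.
data Point = Point {-# UNPACK #-} !Double {-# UNPACK #-} !Double

-- The bang on the accumulator (which strictness analysis would often
-- infer on its own) forces eager evaluation on each step, so the fold
-- runs in constant space with no thunk build-up.
sumLengths :: [Point] -> Double
sumLengths = go 0
  where
    go !acc []               = acc
    go !acc (Point x y : ps) = go (acc + sqrt (x * x + y * y)) ps

main :: IO ()
main = print (sumLengths [Point 3 4, Point 6 8])  -- prints 15.0
```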

Second, a blog post that discusses the LLVM compiler and how it maps an instruction in its intermediate representation onto instructions for the target architecture. This very much reminds me of the compiler architecture we had in my first job. The compiler targeted an instruction set known as HARP (Harlequin Abstract RISC Processor), and HARP instructions were then translated into machine instructions using a template matching scheme. HARP had an infinite set of virtual registers, and register colouring happened as part of this templating process. (A paper describing one of the uses of HARP, in the Chameleon project, which focussed on dynamic process migration, can be found here.)
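A toy sketch of what such template matching can look like, assuming an invented two-instruction abstract machine with virtual registers (this is illustrative only, not HARP's actual instruction set or Harlequin's scheme):

```haskell
-- A virtual register in a HARP-like IR: unbounded supply, numbered.
newtype VReg = VReg Int

-- An invented two-instruction abstract machine.
data AbstractInstr
  = Add VReg VReg VReg   -- dst := src1 + src2
  | LoadImm VReg Int     -- dst := constant

-- Template matching: each abstract instruction expands to a fixed
-- sequence of target instructions, with the template chosen by the
-- shape of the operands (here, whether the three-address add can be
-- collapsed to the two-address form). We emit virtual register names;
-- register colouring would rewrite them to machine registers.
select :: AbstractInstr -> [String]
select (Add (VReg d) (VReg a) (VReg b))
  | d == a    = ["add v" ++ show d ++ ", v" ++ show b]
  | otherwise = [ "mov v" ++ show d ++ ", v" ++ show a
                , "add v" ++ show d ++ ", v" ++ show b ]
select (LoadImm (VReg d) n) =
  ["mov v" ++ show d ++ ", " ++ show n]

main :: IO ()
main = mapM_ putStrLn (concatMap select
  [LoadImm (VReg 0) 1, LoadImm (VReg 1) 2, Add (VReg 2) (VReg 0) (VReg 1)])
```

In a real backend the templates are per-target, and the colouring pass maps the unbounded virtual registers down to the machine's finite register set before emission.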

There’s a good document here on the x64 calling convention and architecture. Useful for debugging problems in x64-jitted CLR code.

Finally, on the theme of performance, GPU architectures are becoming an important way of generating high-performance solutions for certain classes of algorithm. A general introduction to GPUs can be found here, and a document on using CUDA to program such architectures can be found here.
