A.M. Turing Award lectures

I’ve just watched two lectures given by ACM Turing Award laureates. The first was given by Barbara Liskov in 2009 and the second by Chuck Thacker in 2010. Both lectures contain many interesting topics, and I do not want to merely summarize them, as that would be a disservice to the presenters. Instead, I’ll focus on one aspect of computer science present in both.

They both talk about the past experiences that shaped computer science. Liskov describes how she was involved in the implementation of the CLU programming language, and how the situation looked in the 1960s and 1970s regarding programming languages. There were many different languages, and they offered different choices. For example, there were several approaches to exception handling. One approach was termination (the one known today) and another was resumption: after handling an exception, the handler could order a return to the procedure which raised it. One could also use FAILURE exceptions, which were similar to today’s Java runtime exceptions, but one could change any exception into a failure, passing the original exception as an argument of the failure (similar to today’s exception wrapping). There was also a special guardian module responsible for catching uncaught exceptions, which resembles the approach known from virtual machines, but each unit (module) could have its own guardian, so exceptions were confined inside modules. She also describes the implementation of iterators, which look similar to Python’s generators with yield; even the way iterators were implemented resembles how generators were implemented in Python. First there were plain generators (PEP 255), which were then extended to allow for coroutines (PEP 342). Liskov stopped before implementing coroutines. Python is going even further, with subgenerators (PEP 380) and generators used for asynchronous programming (the currently discussed PEP 3156). Liskov said that CLU was “way ahead of its time”, and it is true: only today do we see its concepts implemented.
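As a rough illustration of both analogies, here is a minimal Python sketch (my own, not code from the lecture): a FAILURE-style wrapper built with raise ... from, and a CLU-style iterator expressed as a PEP 255 generator. The Failure class and the parse_port and odd_numbers functions are invented for the example.

```python
# 1. CLU-style exception wrapping: any exception turned into a "failure"
#    that carries the original exception along (similar to exception wrapping).
class Failure(Exception):
    """Rough analogue of CLU's FAILURE, keeping the original exception."""

def parse_port(text):
    try:
        return int(text)
    except ValueError as exc:
        raise Failure("could not parse port") from exc  # original kept as __cause__

# 2. A CLU-style iterator expressed as a Python generator (PEP 255):
#    yield suspends the iterator and the caller's loop resumes it.
def odd_numbers(limit):
    n = 1
    while n <= limit:
        yield n
        n += 2

if __name__ == "__main__":
    print(list(odd_numbers(10)))      # [1, 3, 5, 7, 9]
    try:
        parse_port("not-a-number")
    except Failure as f:
        print(f, "| caused by:", repr(f.__cause__))
```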

Another concept described by Liskov which is only now implemented in current programming languages is collections with a WHERE clause. It is similar to generics with constraints, e.g. in C#, where restrictions are placed on type parameters. In CLU the parameter had to implement certain methods (a concept similar to duck typing); in C# one requires that the argument implements some interface. It feels strange to see a concept from the 1970s rediscovered and implemented only in 2006.
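To make the analogy concrete, here is a minimal sketch using Python’s typing module (Python 3.8+) as a stand-in for CLU’s WHERE clause and C#’s generic constraints; the SupportsLessThan protocol and the minimum function are names I made up for the example.

```python
from typing import Iterable, Protocol, TypeVar

class SupportsLessThan(Protocol):
    """Structural constraint: the element type must provide __lt__."""
    def __lt__(self, other) -> bool: ...

# Roughly "minimum = proc[T] ... where T has lt" in CLU terms.
T = TypeVar("T", bound=SupportsLessThan)

def minimum(items: Iterable[T]) -> T:
    it = iter(items)
    best = next(it)
    for item in it:
        if item < best:
            best = item
    return best

print(minimum([3, 1, 2]))            # 1
print(minimum(["pear", "apple"]))    # apple
```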

But it becomes clearer after watching Thacker’s lecture. He notes that “computer science is a very forgetful discipline”. Thacker talks about all the walls we are facing today: the memory wall, the power wall, the complexity wall. His entire lecture is about history and its influence today. We (computer science people) made some choices back then, and now we live with those choices. This can be seen, for example, in the BIOS: only now do we see a migration to UEFI, and not without many problems (see Matthew Garrett’s work). Thacker uses interrupts as an example of such a legacy limitation: interrupts made sense in a single-core system, but now they complicate things and make little sense on multi-core. It can also be seen in programming languages today; I do not know of a language which implements the resumption exceptions mentioned by Liskov. Thacker wonders how we would make those choices today, given current knowledge and technology.

Many of the choices Thacker mentions were made as a result of scarcity. He mentions the trade-off between shared memory and message passing. Shared memory was easier to implement, which is probably why it was chosen over messaging, but now it poses coherency problems on multi-core chips. A well-known example is virtual memory, which was the result of small amounts of RAM and the necessity to spill RAM to disk. Now we have plenty of memory, so swap is often not needed (e.g. Android does not use swap; on the other hand, the Nokia N900 had swap implemented on flash). The need to virtualise memory in order to find a large contiguous chunk of RAM can be solved by a garbage collector, so in theory we could drop some parts of the current memory layout from hardware and operating systems. I do not agree with Thacker that we should also give up the protection provided by virtual memory; he suggests this should be solved by using safe languages (i.e. not C), but we also need to deal with rogue programs and multi-user environments, so I think having the protection offered by virtual memory is a Good Thing(TM).
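For illustration, here is a minimal Python sketch of the two models (my example, not Thacker’s): message passing via a queue versus coordinated access to shared memory, using the multiprocessing module.

```python
from multiprocessing import Process, Queue, Value

def messenger(q):
    q.put("result: 42")          # message passing: hand over a value, share nothing

def incrementer(counter):
    with counter.get_lock():     # shared memory: both sides must coordinate access
        counter.value += 1

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=messenger, args=(q,))
    p1.start()
    print(q.get())
    p1.join()

    counter = Value("i", 0)      # an int living in shared memory
    p2 = Process(target=incrementer, args=(counter,))
    p2.start()
    p2.join()
    print("shared counter:", counter.value)
```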

Another problem which we experience only today is related to threads and locking. In the past, systems were not large enough to expose the problems with locks, i.e. the inability to compose smaller, individually correct pieces into a larger correct system. Thacker does not believe in transactional memory. He does not like transactions because they are speculative and we do not have much experience with them at this level: we use them in databases, but not at the smaller scale of multiple threads on a CPU.
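Here is a minimal Python sketch of the composition problem alluded to above (my example): two lock-based operations that are each correct on their own, but deadlock-prone when run against each other.

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Correct in isolation, but composing two transfers running in opposite
    # directions can deadlock: each thread holds one lock and waits for the other.
    with src.lock:
        with dst.lock:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(100)
t1 = threading.Thread(target=transfer, args=(a, b, 10))
t2 = threading.Thread(target=transfer, args=(b, a, 10))
# Starting t1 and t2 together may hang forever; a fixed global lock order or
# a transactional approach are the usual ways to make such pieces composable.
```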

Thacker notes that the 1950s and 60s were an age of experimentation, the 70s and 80s were a period of consolidation and warfare (only a few of the existing solutions survived; he was talking about CPUs, but the same is true for programming languages), and the 90s were about instruction-level parallelism. But there is not much ILP in most programs that can be exploited automatically. From my own experience with GPGPU programming, one needs to work on the program to get performance gains; not much can be done by the compiler alone, without programmer intervention. Also, there are not many applications which can use many cores; one does not need dozens of cores to watch a video or write an email.

The problem is that we have learned to live with those limitations; many of them work well enough, and the cost of changing to something better is too high. We might need to rethink all those decisions, not necessarily change them, but decide again, in today’s situation and with current knowledge.

The problem with changing such widely used solutions as interrupts is that it is costly now and will only bring profits in the long term. The need for many players to agree does not help. At the same time, this might be a good field for experimentation using virtual machines or various free software solutions: one can experiment with existing code on new architectures by just recompiling programs or implementing a virtual machine on the new architecture. There is also the question of how much control to give programmers. Too much, and programs will be hard to port; too little, and they will have problems getting performance. For example, the new Exynos 5 CPU offers 4 fast cores and 4 slow cores, and the system chooses which cores to use depending on the load. But what if I, the developer, want to use 1 fast core and 1 slow core, or some other combination? I will not have this ability at the Android level; Dalvik operates at a higher level than that.
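As a sketch of the kind of control I mean, on a plain Linux system (not from Android/Dalvik) one can pin a process to chosen cores with os.sched_setaffinity. The example assumes a machine with at least five cores; the fast/slow numbering is only an assumption for illustration.

```python
import os

# Which cores is this process currently allowed to run on?
print("allowed cores before:", os.sched_getaffinity(0))

# Pin the current process to cores 0 and 4, e.g. one "fast" and one "slow"
# core on a hypothetical big.LITTLE layout (the numbering is an assumption).
os.sched_setaffinity(0, {0, 4})
print("allowed cores after:", os.sched_getaffinity(0))
```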

I agree with Thacker that we live in very interesting times for computer science. It looks like we need to rethink some of the basics of our systems, and maybe repeat such a process every time we get new hardware (for example GPGPU, heterogeneous chips, and so on). The problem is that we, as a discipline, have forgotten many of the possibilities discovered and abandoned in the past, and those possibilities might be more relevant today. Python’s example, however, shows that good ideas survive and get implemented, even if it takes many years.
