36 votes

Pure untyped lambda calculus is a powerful concept. However, building a machine or interpreter for real-world use is often described as (close to) impossible. I want to investigate this. Is it theoretically possible to build a comparatively fast untyped lambda calculus machine?

By comparatively fast, I generally mean comparable to modern Turing-like architectures for a similar range of tasks, using a similar amount of resources (gates, operations, physical space, power, etc.).

I place no limitations on the implementation and architectural layers of the machine, except that it must be physically and somewhat realistically realizable in some way. No restrictions on how to handle IO either.

  • If possible, what are the main challenges?
  • If impossible, why, and in what sense?
  • What is the state of research in this area?
  • Which fields and subjects are most relevant?

How much is known about the feasibility of a computer architecture based around lambda calculus?

Questions covering similar ground:

1
+1 for an interesting question, though you might get a better answer at cstheory.stackexchange.com – Gabe Moothart
Hm, I didn't think of that. I'll add links to the related questions on cstheory above for now. Moderators feel free to move the question if that's possible. – Jostein

1 Answer

21 votes

First, it is possible to compile the lambda calculus efficiently to machine code even on existing architectures. After all, Scheme is the lambda calculus plus a bit extra, and it can be compiled efficiently. However, Scheme and friends implement the lambda calculus under strict evaluation. It is also possible to compile the lambda calculus under non-strict evaluation efficiently! For background on this, see SPJ's two books: http://research.microsoft.com/en-us/um/people/simonpj/papers/papers.html
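To make the first point concrete, here is a small illustration (not from the answer; the Church-numeral encoding and the names zero, suc, add, toInt are chosen just for this example): pure lambda terms written directly as Haskell closures, which GHC compiles to ordinary machine code on a stock architecture.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church numerals: pure lambda terms, written directly in Haskell.
-- A numeral n applies a function f to an argument x exactly n times.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero = \_ x -> x

suc :: Church -> Church
suc n = \f x -> f (n f x)

add :: Church -> Church -> Church
add m n = \f x -> m f (n f x)

-- Convert back to an Int by instantiating f with (+1) and x with 0.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc (suc zero)) (suc zero)))  -- prints 3
```

The point is only that lambda terms map directly onto an existing functional language and go through a conventional compiler; nothing here requires special hardware.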

On the other hand, it is also true that if we built hardware designed for functional languages, we could compile code to that hardware and do very well indeed. The best new stuff on this I know of is the Reduceron: http://www.cs.york.ac.uk/fp/reduceron/

The key to the Reduceron's performance, which is quite compelling, is that it is built around parallel graph reduction and aims to exploit the opportunities for parallelism made explicit by the reduction of lambda calculus terms.
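For a rough picture of what such a machine evaluates, here is a minimal, sequential normal-order reducer for the untyped lambda calculus in Haskell. It is only a sketch of the reduction step itself: the Reduceron works on a shared graph in hardware with wide, parallel memory accesses, none of which this toy substitution-based interpreter attempts to model, and the term representation and names below are invented for the example.

```haskell
-- A tiny normal-order reducer for the untyped lambda calculus.
import Data.List (delete, union)

data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving (Eq, Show)

-- Free variables of a term.
free :: Term -> [String]
free (Var x)   = [x]
free (Lam x b) = delete x (free b)
free (App f a) = free f `union` free a

-- Capture-avoiding substitution: replace v with s in the given term.
subst :: String -> Term -> Term -> Term
subst v s (Var x)
  | x == v    = s
  | otherwise = Var x
subst v s (App f a) = App (subst v s f) (subst v s a)
subst v s (Lam x b)
  | x == v          = Lam x b
  | x `elem` free s = let x' = fresh x (v : free s ++ free b)
                          b' = subst x (Var x') b
                      in Lam x' (subst v s b')
  | otherwise       = Lam x (subst v s b)

-- Pick a variable name not in the given list.
fresh :: String -> [String] -> String
fresh x used =
  head [x ++ show n | n <- [1 :: Int ..], (x ++ show n) `notElem` used]

-- One normal-order (leftmost-outermost) reduction step, if any.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a) = case step f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> step a
step (Lam x b) = Lam x <$> step b
step (Var _)   = Nothing

-- Reduce to normal form (may diverge, just like the lambda calculus).
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

main :: IO ()
main = print (normalize (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
```

Running normalize on (\x. x) (\y. y) yields \y. y. Sharing of subterms and parallel evaluation of independent redexes are exactly what a graph-reduction machine adds on top of this naive version.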