Pure untyped lambda calculus is a powerful model of computation. However, building a machine or interpreter around it for real-world use is often described as (close to) impossible. I want to investigate that claim: is it theoretically possible to build a comparatively fast untyped lambda calculus machine?
By "comparatively fast" I mean performance comparable to that of modern Turing-like architectures on a similar range of tasks, within a similar resource budget (gates, operations, physical space, power use, etc.).
I place no limitations on the machine's implementation or architectural layers, except that it must be physically (and somewhat realistically) realizable in some way. There are no restrictions on how I/O is handled either.
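For concreteness, here is a minimal sketch of the kind of software interpreter whose performance such a machine would need to beat: untyped terms with de Bruijn indices, reduced by normal-order (leftmost-outermost) beta reduction. All names here (`Var`, `Lam`, `App`, `normalize`, the `fuel` cutoff) are my own illustrative choices, not from any standard library or particular paper.

```python
from dataclasses import dataclass

class Term:
    pass

@dataclass
class Var(Term):
    n: int  # de Bruijn index

@dataclass
class Lam(Term):
    body: Term

@dataclass
class App(Term):
    f: Term
    a: Term

def shift(t, d, cutoff=0):
    """Shift the free variables of t by d."""
    if isinstance(t, Var):
        return Var(t.n + d) if t.n >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.f, d, cutoff), shift(t.a, d, cutoff))

def subst(t, s, j=0):
    """Substitute s for variable j in t."""
    if isinstance(t, Var):
        return s if t.n == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, shift(s, 1), j + 1))
    return App(subst(t.f, s, j), subst(t.a, s, j))

def step(t):
    """One normal-order reduction step, or None if t is in normal form."""
    if isinstance(t, App):
        if isinstance(t.f, Lam):  # beta redex: (λ.b) a
            return shift(subst(t.f.body, shift(t.a, 1)), -1)
        r = step(t.f)
        if r is not None:
            return App(r, t.a)
        r = step(t.a)
        if r is not None:
            return App(t.f, r)
    elif isinstance(t, Lam):
        r = step(t.body)
        if r is not None:
            return Lam(r)
    return None

def normalize(t, fuel=10_000):
    """Reduce t to normal form, bounded by a step budget."""
    while fuel > 0:
        r = step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("no normal form reached within fuel limit")

# Church numerals: n = λf.λx. f^n x
zero = Lam(Lam(Var(0)))
succ = Lam(Lam(Lam(App(Var(1), App(App(Var(2), Var(1)), Var(0))))))
two = normalize(App(succ, App(succ, zero)))  # λf.λx. f (f x)
```

Even this tiny evaluator makes the cost model visible: every beta step walks and rebuilds the term for shifting and substitution, which is exactly the overhead a dedicated architecture would have to engineer away.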
- If possible, what are the main challenges?
- If it is impossible, why, and in what sense?
- What is the state of research in this area?
- Which fields and subjects are most relevant?
How much is known about the feasibility of a computer architecture based around lambda calculus?
Questions covering similar ground: