Yes, you can define numbers (and indeed, arbitrary data types) inside the lambda calculus. Here's the idea.
First, let's pick what numbers we're going to define. The simplest numbers to work with are the natural numbers: 0, 1, 2, 3, and so on. How do we define these? The usual approach is to use the Peano axioms:
- 0 is a natural number.
- If n is a natural number, then Sn is a natural number.
Here, S denotes the successor function, so Sn is the successor of n, or n+1. Thus the first few Peano natural numbers are 0, S0, SS0, SSS0, and so on – it's a unary representation.
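In Haskell-style notation (which I'll use again later in this answer), this is the two-constructor datatype below; the numeral names are mine, just for illustration:

    -- A natural is zero, or the successor of a natural.
    data Nat = Z | S Nat

    -- The unary representations of 0, 1, 2, 3:
    zero, one, two, three :: Nat
    zero  = Z
    one   = S Z
    two   = S (S Z)
    three = S (S (S Z))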
Now, in the lambda calculus, we can represent function application, so we can represent Sn, but we don't know how to represent 0 and S themselves. But luckily, the lambda calculus offers us a way of deferring that choice: we can take them as arguments, and let someone else decide! Let's write z for the 0 we're given, and s for the S we're given. Then we can represent the first few numbers as follows, writing ⟦n⟧ for "the lambda calculus representation of the natural number n":
- ⟦0⟧ = λ z s . z
- ⟦1⟧ = λ z s . s z
- ⟦2⟧ = λ z s . s (s z)
- ⟦3⟧ = λ z s . s (s (s z))
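If you find code easier to check than equations, these representations translate almost verbatim into Haskell. This is only a sketch: the Church type synonym is my name for the encoding, and the RankNTypes extension is needed to write it down.

    {-# LANGUAGE RankNTypes #-}

    -- A Church numeral takes a "zero" and a "successor" and builds its value.
    type Church = forall a. a -> (a -> a) -> a

    czero, cone, ctwo, cthree :: Church
    czero  z s = z
    cone   z s = s z
    ctwo   z s = s (s z)
    cthree z s = s (s (s z))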
Just as a natural number n is n applications of S to 0, a lambda-calculus representation of n is an application of n copies of any successor function s to any zero z. We can define successor, too:
- ⟦0⟧ = λ z s . z
- ⟦S⟧ = λ n . λ z s . s (n z s)
Here, we see that the successor applies one extra copy of s to n, after making sure n uses the same z and s. We can see that this gives us the same representations, using ⇝ for "evaluates to":
- ⟦0⟧ = λ z s . z
- ⟦1⟧ = ⟦S0⟧
= (λ n . λ z s . s (n z s))(λ z′ s′ . z′)
⇝ λ z s . s ((λ z′ s′ . z′) z s)
⇝ λ z s . s z
- ⟦2⟧ = ⟦SS0⟧
= (λ n . λ z s . s (n z s))((λ n′ . λ z′ s′ . s′ (n′ z′ s′))(λ z″ s″ . z″))
⇝ (λ n . λ z s . s (n z s))(λ z′ s′ . s′ ((λ z″ s″ . z″) z′ s′))
⇝ (λ n . λ z s . s (n z s))(λ z′ s′ . s′ z′)
⇝ λ z s . s ((λ z′ s′ . s′ z′) z s)
⇝ λ z s . s (s z)
- ⟦3⟧ = ⟦SSS0⟧
= (λ n . λ z s . s (n z s))((λ n′ . λ z′ s′ . s′ (n′ z′ s′))((λ n″ . λ z″ s″ . s″ (n″ z″ s″))(λ z‴ s‴ . z‴)))
⇝ (λ n . λ z s . s (n z s))((λ n′ . λ z′ s′ . s′ (n′ z′ s′))(λ z″ s″ . s″ ((λ z‴ s‴ . z‴) z″ s″)))
⇝ (λ n . λ z s . s (n z s))((λ n′ . λ z′ s′ . s′ (n′ z′ s′))(λ z″ s″ . s″ z″))
⇝ (λ n . λ z s . s (n z s))(λ z′ s′ . s′ ((λ z″ s″ . s″ z″) z′ s′))
⇝ (λ n . λ z s . s (n z s))(λ z′ s′ . s′ (s′ z′))
⇝ λ z s . s ((λ z′ s′ . s′ (s′ z′)) z s)
⇝ λ z s . s (s (s z))
(Yes, that gets dense and hard to read quickly. Working through it is a pretty good exercise if you feel like you need more practice – it led to me catching an error in what I'd originally written!)
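If you'd rather have an interpreter do the β-reduction for you, here's a small Haskell sketch of the same check; csucc transcribes ⟦S⟧, and toInt (my helper, not part of the encoding) reads a numeral back by using 0 for z and (+1) for s:

    {-# LANGUAGE RankNTypes #-}

    type Church = forall a. a -> (a -> a) -> a

    czero :: Church
    czero z s = z

    -- ⟦S⟧ = λ n . λ z s . s (n z s): one extra s on top of n.
    csucc :: Church -> Church
    csucc n = \z s -> s (n z s)

    -- Read a numeral back as an Int.
    toInt :: Church -> Int
    toInt n = n 0 (+1)

    main :: IO ()
    main = do
      print (toInt czero)                  -- 0
      print (toInt (csucc czero))          -- 1
      print (toInt (csucc (csucc czero)))  -- 2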
Now, we've defined 0 and S, so that's a good start, but we want a principle of induction, too. That's what makes the natural numbers what they are, after all! So, how will that work? Well, it turns out we're basically set. When thinking about our principle of induction programmatically, we want a function that takes as input a base case and an inductive case, and produces a function from natural numbers to some sort of output. I'll call the output "a proof for n". Then our inputs should be:
- A base case, which is our proof for 0.
- An inductive case, which is a function that takes as input a proof for n and produces a proof for Sn.
In other words, we need some sort of zero value, and some sort of successor function. But these are just our z and s arguments! So it turns out that we're representing natural numbers as their induction principle, which I think is pretty cool.
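To see this concretely, note that a "fold" for Church numerals is nothing but function application, because the numeral is its own induction principle. A hedged Haskell sketch (foldNat and the example names are mine):

    {-# LANGUAGE RankNTypes #-}

    type Church = forall a. a -> (a -> a) -> a

    -- Given a proof for 0 and a way to turn a proof for n into a proof
    -- for Sn, produce a proof for the whole number. For Church numerals,
    -- this is just application!
    foldNat :: a -> (a -> a) -> Church -> a
    foldNat z s n = n z s

    cthree :: Church
    cthree z s = s (s (s z))

    main :: IO ()
    main = do
      print (foldNat 0 (+1) cthree)         -- base 0, step (+1): prints 3
      putStrLn (foldNat "0" ('S':) cthree)  -- rebuilds the unary spelling: SSS0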
And this means we can define basic operations. I'll just define addition here, and leave the rest as an exercise. In our inductive formulation, we can define addition as follows:
- m + 0 = m
- m + Sn = S(m + n)
This is inductively defined on the second argument. So how do we translate this? It becomes:
- ⟦+⟧ = λ m n . λ z s . n (m z s) s
Where does this come from? Well, we're applying our inductive principle to n. In the base case, we return m (using the ambient z and s), just as above. In the inductive case, we apply a successor (the ambient s) to what we get out. So this must be right!
Another way to look at this is that, since n z s applies n copies of s to z, we have that n (m z s) s applies n copies of s to m z s, for a total of n + m copies of s to z. Again, that's the right answer!
(If you're still not convinced, I encourage you to work out a small example like ⟦1+2⟧; that should be small enough to be tractable but large enough to be at least somewhat interesting.)
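And if you want to check your work against an interpreter, here's that example as a Haskell sketch (cplus transcribes ⟦+⟧; toInt and the numeral names are my helpers):

    {-# LANGUAGE RankNTypes #-}

    type Church = forall a. a -> (a -> a) -> a

    cone, ctwo :: Church
    cone z s = s z
    ctwo z s = s (s z)

    -- ⟦+⟧ = λ m n . λ z s . n (m z s) s:
    -- apply n copies of s starting from m's result.
    cplus :: Church -> Church -> Church
    cplus m n = \z s -> n (m z s) s

    toInt :: Church -> Int
    toInt n = n 0 (+1)

    main :: IO ()
    main = print (toInt (cplus cone ctwo))  -- prints 3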
So now we see how to define addition for natural numbers inside the pure untyped lambda calculus. Here are some additional thoughts for further reading, if you like; these are more condensed and less explained.
This representation technique is more generally applicable; it's not just for natural numbers. It's called Church encoding, and can be adapted to represent arbitrary algebraic data types. Just as we represented the natural numbers by their induction principle, we represent all data types by their structural recursion principle (their fold): the representation of a data type is a function that takes one argument for each constructor, and then applies the appropriate "constructor" to all the necessary arguments, folding any recursive arguments along the way (just as ⟦S⟧ applied n to z and s). So:
- Booleans:
- ⟦False⟧ = λ f t . f
- ⟦True⟧ = λ f t . t
- Tuples:
- ⟦(x, y)⟧ = λ p . p x y
- Sum types (`data Either a b = Left a | Right b`):
- ⟦Left x⟧ = λ l r . l x
- ⟦Right y⟧ = λ l r . r y
- Lists (`data List a = Nil | Cons a (List a)`):
- ⟦Nil⟧ = λ n c . n
- ⟦Cons x l⟧ = λ n c . c x (l n c)
Note that in the last case, l will itself be an encoded list, which is why we fold it by applying it to the same n and c.
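Here are those encodings as a Haskell sketch; all the type synonyms and names are mine, and RankNTypes is again required:

    {-# LANGUAGE RankNTypes #-}

    -- Booleans: choose one of two results.
    type CBool = forall r. r -> r -> r

    cfalse, ctrue :: CBool
    cfalse f t = f
    ctrue  f t = t

    -- Tuples: hand both components to a consumer.
    type CPair a b = forall r. (a -> b -> r) -> r

    cpair :: a -> b -> CPair a b
    cpair x y = \p -> p x y

    -- Sums: one continuation per constructor.
    type CEither a b = forall r. (a -> r) -> (b -> r) -> r

    cleft :: a -> CEither a b
    cleft x = \l r -> l x

    cright :: b -> CEither a b
    cright y = \l r -> r y

    -- Lists: a list is its own right fold.
    type CList e = forall r. r -> (e -> r -> r) -> r

    cnil :: CList e
    cnil n c = n

    ccons :: e -> CList e -> CList e
    ccons x l = \n c -> c x (l n c)

    -- For example, summing the encoded list [1, 2]:
    total :: Int
    total = ccons 1 (ccons 2 cnil) 0 (+)  -- 1 + (2 + 0) = 3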
This technique also works in a typed setting, where we can talk about folds (or catamorphisms) for data types. (I mostly mention this because I personally think it's really cool.) `data Nat = Z | S Nat` is then isomorphic to `forall a. a -> (a -> a) -> a`, and lists of `e`s are isomorphic to `forall a. a -> (e -> a -> a) -> a`, which is just part of the type signature of the common `foldr :: (e -> a -> a) -> a -> [e] -> a`, rearranged. The universally quantified `a`s represent the type of the natural number or list itself; they need to be universally quantified, so passing these around requires higher-rank types. The isomorphism is witnessed by the fact that `foldr Cons Nil` is the identity function; for a natural number encoded as `n`, we similarly have `n Z S` recovering our original number.
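Here's that isomorphism spelled out for Nat as a Haskell sketch (toChurch and fromChurch are my names):

    {-# LANGUAGE RankNTypes #-}

    data Nat = Z | S Nat deriving Show

    -- A natural number becomes its own fold...
    toChurch :: Nat -> (forall a. a -> (a -> a) -> a)
    toChurch Z     = \z s -> z
    toChurch (S n) = \z s -> s (toChurch n z s)

    -- ...and applying it to the real constructors recovers the number.
    fromChurch :: (forall a. a -> (a -> a) -> a) -> Nat
    fromChurch n = n Z S

    main :: IO ()
    main = print (fromChurch (toChurch (S (S (S Z)))))  -- prints S (S (S Z))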
If you're bothered by the fact that we only used natural numbers, we can define a representation for integers; for instance, a common unary-style representation is

    data Int = NonNeg Nat | NegSucc Nat

Here, `NonNeg n` represents n, and `NegSucc n` represents -(n+1); the extra +1 in the negative case ensures that there's a unique 0. It should be straightforward to convince yourself that you could, if you so desired, implement the various arithmetic functions on `Int` in a programming language with real data types; these functions could then be encoded in the untyped lambda calculus via Church encoding, and so we're set. Fractions are also representable as pairs, although I don't know of a representation that ensures all fractions are uniquely representable. Representing real numbers gets tricky, but IEEE 754 floating-point numbers can be represented as 32-, 64-, or 128-tuples of booleans, which is horribly inefficient and bulky, but encodable.
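As a taste of what those arithmetic functions look like, here's a sketch of negation and successor on this representation (I rename the type to Int' to avoid clashing with Haskell's built-in Int; addition and the rest follow the same kind of case analysis):

    data Nat = Z | S Nat

    data Int' = NonNeg Nat | NegSucc Nat

    -- Negation: NonNeg n is n; NegSucc n is -(n+1).
    negateI :: Int' -> Int'
    negateI (NonNeg Z)     = NonNeg Z      -- -0 = 0
    negateI (NonNeg (S n)) = NegSucc n     -- -(n+1)
    negateI (NegSucc n)    = NonNeg (S n)  -- -(-(n+1)) = n+1

    -- Successor: adding 1 walks negative numbers up toward zero.
    succI :: Int' -> Int'
    succI (NonNeg n)      = NonNeg (S n)  -- n + 1
    succI (NegSucc Z)     = NonNeg Z      -- -1 + 1 = 0
    succI (NegSucc (S n)) = NegSucc n     -- -(n+2) + 1 = -(n+1)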
More efficient representations of natural numbers (and integers, and so on) are also available; for example,

    data Pos = One
             | Twice Pos
             | TwiceSucc Pos

encodes positive binary numbers (`Twice n` is 2*n, or adding a 0 to the end; `TwiceSucc n` is 2*n + 1, or adding a 1 to the end; the base case is `One`, a single 1). Encoding the natural numbers is then as simple as

    data Nat = Zero | PosNat Pos

but then our functions, such as addition, get more complicated (but faster).
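For a taste of that trade-off, here's a sketch of the successor function on Pos (function names mine); the recursive case is exactly binary carrying, which is what makes these functions trickier to write but much faster (logarithmic rather than linear) to run:

    data Pos = One | Twice Pos | TwiceSucc Pos

    -- Successor on positive binary numbers.
    succPos :: Pos -> Pos
    succPos One           = Twice One          -- 1 + 1 = 2
    succPos (Twice n)     = TwiceSucc n        -- 2n + 1
    succPos (TwiceSucc n) = Twice (succPos n)  -- (2n+1) + 1 = 2(n+1): a carry

    -- Read a Pos back as a built-in Integer, to check our work.
    toIntegerP :: Pos -> Integer
    toIntegerP One           = 1
    toIntegerP (Twice n)     = 2 * toIntegerP n
    toIntegerP (TwiceSucc n) = 2 * toIntegerP n + 1

    -- toIntegerP (succPos (TwiceSucc One)) == 4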