In the Idris Effects library, effects are represented as
||| This type is parameterised by:
||| + The return type of the computation.
||| + The input resource.
||| + The computation to run on the resource given the return value.
Effect : Type
Effect = (x : Type) -> Type -> (x -> Type) -> Type
If we allow resources to be values and swap the first two arguments, we get (the rest of the code is in Agda):
Effect : Set -> Set₁
Effect R = R -> (A : Set) -> (A -> R) -> Set
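For instance (a toy effect of my own, not something from the Effects library), here is a counter whose resource is the current count, i.e. a value rather than a type:

open import Data.Nat
open import Data.Unit

-- The resource is the current count; tick returns the trivial value
-- and bumps the count for its continuation.
data Tick : Effect ℕ where
  tick : ∀ {n} -> Tick n ⊤ (λ _ -> suc n)

Each tick moves the resource from n to suc n, so the resource tracks how many ticks have been performed.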
Having some basic type-context-membership machinery
data Type : Set where
nat : Type
_⇒_ : Type -> Type -> Type
data Con : Set where
ε : Con
_▻_ : Con -> Type -> Con
data _∈_ (σ : Type) : Con -> Set where
vz : ∀ {Γ} -> σ ∈ Γ ▻ σ
vs_ : ∀ {Γ τ} -> σ ∈ Γ -> σ ∈ Γ ▻ τ
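A quick example of how this de Bruijn machinery is used (a throwaway definition of my own):

-- index 1: vs_ skips the most recently bound nat ⇒ nat
ix : nat ∈ (ε ▻ nat) ▻ (nat ⇒ nat)
ix = vs vz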
we can encode the constructors of lambda terms as follows:
app-arg : Bool -> Type -> Type -> Type
app-arg b σ τ = if b then σ ⇒ τ else σ
data TermE : Effect (Con × Type) where
Var : ∀ {Γ σ } -> σ ∈ Γ -> TermE (Γ , σ ) ⊥ λ()
Lam : ∀ {Γ σ τ} -> TermE (Γ , σ ⇒ τ ) ⊤ (λ _ -> Γ ▻ σ , τ )
App : ∀ {Γ σ τ} -> TermE (Γ , τ ) Bool (λ b -> Γ , app-arg b σ τ)
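To see the pieces fit together (another throwaway example), a variable is a node with no inductive positions, so the index-assigning function is the absurd one:

-- the variable x in the context  ε ▻ nat,  at type nat
var-x : TermE (ε ▻ nat , nat) ⊥ λ()
var-x = Var vz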
In TermE i r i′, i is an output index (e.g. lambda abstractions (Lam) construct function types (σ ⇒ τ); for ease of description I'll ignore that indices also contain contexts besides types), r determines the number of inductive positions (Var doesn't receive any TermE (⊥), Lam receives one (⊤), App receives two (Bool): a function and its argument) and i′ computes an index at each inductive position (e.g. the index at the first inductive position of App is σ ⇒ τ and the index at the second is σ, i.e. we can apply a function to a value only if the type of the first argument of the function equals the type of the value).
To construct a real lambda term, we must tie the knot using something like a W data type. Here is the definition:
data Wer {R} (Ψ : Effect R) : Effect R where
call : ∀ {r A r′ B r′′} -> Ψ r A r′ -> (∀ x -> Wer Ψ (r′ x) B r′′) -> Wer Ψ r B r′′
It's the indexed variant of Oleg Kiselyov's Freer monad (effects stuff again), but without return. Using it we can recover the usual constructors:
_<∨>_ : ∀ {B : Bool -> Set} -> B true -> B false -> ∀ b -> B b
(x <∨> y) true = x
(x <∨> y) false = y
_⊢_ : Con -> Type -> Set
Γ ⊢ σ = Wer TermE (Γ , σ) ⊥ λ()
var : ∀ {Γ σ} -> σ ∈ Γ -> Γ ⊢ σ
var v = call (Var v) λ()
ƛ_ : ∀ {Γ σ τ} -> Γ ▻ σ ⊢ τ -> Γ ⊢ σ ⇒ τ
ƛ b = call Lam (const b)
_·_ : ∀ {Γ σ τ} -> Γ ⊢ σ ⇒ τ -> Γ ⊢ σ -> Γ ⊢ τ
f · x = call App (f <∨> x)
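As a sanity check (example terms of my own; parentheses are explicit since the snippets above declare no fixities), the identity and application combinators come out as expected:

-- λ x → x
I : ∀ {σ} -> ε ⊢ (σ ⇒ σ)
I = ƛ (var vz)

-- λ f → λ x → f x
A : ∀ {σ τ} -> ε ⊢ ((σ ⇒ τ) ⇒ (σ ⇒ τ))
A = ƛ (ƛ (var (vs vz) · var vz))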
The whole encoding is very similar to the corresponding encoding in terms of indexed containers: Effect corresponds to IContainer and Wer corresponds to ITree (the type of Petersson–Synek trees). However, the above encoding looks simpler to me, because you don't need to think about what you have to put into shapes to be able to recover the indices at inductive positions. Instead, you have everything in one place and the encoding process is really straightforward.
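For reference, one standard presentation of the container side looks like this (names vary across the literature; this is only a sketch, with rough correspondences in the comments):

record IContainer (I : Set) : Set₁ where
  field
    Shape    : I -> Set                                -- data stored at a node with output index i
    Position : ∀ {i} -> Shape i -> Set                 -- inductive positions (plays the role of r)
    index    : ∀ {i} (s : Shape i) -> Position s -> I  -- index at each position (plays the role of i′)
open IContainer

data ITree {I : Set} (C : IContainer I) : I -> Set where
  sup : ∀ {i} (s : Shape C i)
     -> ((p : Position C s) -> ITree C (index C s p))
     -> ITree C i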
So what am I doing here? Is there some real relation to the indexed containers approach (besides the fact that this encoding has the same extensionality problems)? Can we do something useful this way? One natural thought is to build an effectful lambda calculus: we can freely mix lambda terms with effects, since a lambda term is itself just an effect. But it's an external effect, and we either need other effects to be external as well (which means we can't say something like tell (var vz), because var vz is not a value but a computation) or we need to somehow internalize this effect and the whole effects machinery (which means I don't know what).