31 votes

So I've been reading about coinduction a bit lately, and now I'm wondering: are Haskell lists inductive or coinductive? I've also heard that Haskell doesn't distinguish the two; if so, how is that formalized?

Lists are defined inductively, data [a] = [] | a : [a], yet they can be used coinductively, e.g. ones = 1 : ones. We can create infinite lists. Yet we can also create finite lists. So which are they?

A related case is Idris, where the type List a is strictly an inductive type and thus contains only finite lists; it's defined much as it is in Haskell. Stream a, however, is a coinductive type modeling an infinite list. It's defined as (or rather, the definition is equivalent to) codata Stream a = a :: (Stream a). It's impossible to create an infinite List or a finite Stream. However, when I write the definition

codata HList : Type -> Type where
    Nil : HList a
    Cons : a -> HList a -> HList a

I get the behavior that I expect from Haskell lists, namely that I can make both finite and infinite structures.
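For concreteness, this is the Haskell behavior I have in mind (the finite and infinite HList values work the same way):

    finiteList :: [Int]
    finiteList = [1, 2, 3]           -- a finite list

    infiniteList :: [Int]
    infiniteList = 1 : infiniteList  -- an infinite list; fine under laziness

    -- take 2 finiteList   == [1,2]
    -- take 2 infiniteList == [1,1]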

So let me boil this down to a few core questions:

  1. Does Haskell not distinguish between inductive and coinductive types? If so, what's the formalization for that? If not, then which is [a]?

  2. Is HList coinductive? If so, how can a coinductive type contain finite values?

  3. What if we defined data HList' a = L (List a) | R (Stream a)? What would that be considered, and/or would it be useful over just HList?

3
Haskell has no notion of inductive or coinductive types. (It doesn't really need one because it's a partial and lazy language.) So it doesn't really make sense to ask whether a given Haskell type is inductive or coinductive. – Benjamin Hodgson

Your HList type is usually called a co-list. – Cactus

3 Answers

25 votes
  1. Due to laziness, Haskell types are both inductive and coinductive; or rather, there is no formal distinction between data and codata. All recursive types can contain an infinite nesting of constructors. In languages such as Idris, Coq, and Agda, a definition like ones = 1 : ones is rejected by the termination checker. Laziness means that ones can be evaluated in one step to 1 : ones, whereas the other languages evaluate to normal form only, and ones does not have a normal form. (See the sketch after this list.)

  2. 'Coinductive' does not mean 'necessarily infinite'; it means 'defined by how it is deconstructed', whereas 'inductive' means 'defined by how it is constructed'. I think this is an excellent explanation of the subtle difference. Surely you would agree that the type

    codata A : Type where MkA : A

    cannot be infinite.

  3. This is an interesting one. As opposed to HList, for which you can never 'know' whether a given value is finite or infinite (more precisely, you can discover in finite time that a list is finite, but you cannot compute that it is infinite), HList' gives you a simple way to decide in constant time whether your list is finite or infinite (see the sketch below).
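A small Haskell sketch of points 1 and 3 ([a] stands in for List a; Stream and isFinite are just illustrative names, not library types):

    -- Point 1: under laziness this definition is accepted;
    -- 'ones' evaluates in one step to 1 : ones, on demand.
    ones :: [Int]
    ones = 1 : ones

    -- Point 3: an HList'-style wrapper. 'Stream' is meant to be
    -- an infinite-only list, though Haskell cannot enforce that.
    data Stream a = Cons a (Stream a)

    data HList' a = L [a] | R (Stream a)

    -- Deciding finiteness is a constant-time look at the tag...
    isFinite :: HList' a -> Bool
    isFinite (L _) = True
    isFinite (R _) = False

    -- ...although nothing stops you from hiding an infinite list
    -- inside L, since [a] itself is not inductive-only.

    main :: IO ()
    main = do
      print (take 5 ones)                    -- [1,1,1,1,1]
      print (isFinite (L [1, 2, 3 :: Int]))  -- True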

19 votes

In a total language like Coq or Agda, inductive types are those whose values can be torn down in finite time: recursive functions, which consume them, must terminate. Coinductive types, on the other hand, are those whose values can be built up in finite time: corecursive functions, which produce them, must be productive.

Systems that are intended to be useful as proof assistants (like Coq and Agda) must be total, because non-termination makes the system logically inconsistent. But requiring all functions to be total over inductive data makes it impossible to work with infinite structures, so coinduction was invented.

So the purpose of inductive and coinductive types is to reject possibly non-terminating programs. Here's an example in Agda of a function which is rejected because of the productivity condition. (The function you pass to filter could reject every element, so you could be waiting forever for the next element of the resulting stream.)

filter : {A : Set} -> (A -> Bool) -> Stream A -> Stream A
filter f xs with f (head xs)
... | true = head xs :: filter f (tail xs)
... | false = filter f (tail xs)  -- unguarded recursion

Now, Haskell has no notion of inductive or coinductive types. The question "Is this type inductive or coinductive?" is not a meaningful one. How does Haskell get away without making the distinction? Well, Haskell was never intended to be consistent as a logic in the first place. It's a partial language, which means that you're allowed to write non-terminating and non-productive functions - there's no termination checker and no productivity checker. One can debate the wisdom of this design decision but it certainly renders induction and coinduction redundant.

Instead, Haskell programmers are used to reasoning informally about a program's termination/productivity. Laziness lets us work with infinite data structures, but we don't get any help from the machine to ensure that our functions are total.
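For contrast, here's roughly the same filter written against ordinary Haskell lists (streamFilter is just an illustrative name). GHC accepts it without complaint, and it's up to you to notice when a particular call fails to be productive:

    streamFilter :: (a -> Bool) -> [a] -> [a]
    streamFilter _ []     = []
    streamFilter p (x:xs)
      | p x       = x : streamFilter p xs   -- produces an element, then recurses
      | otherwise = streamFilter p xs       -- the case Agda rejects as unguarded

    main :: IO ()
    main = do
      print (take 5 (streamFilter even [1 ..]))          -- [2,4,6,8,10]
      -- print (take 5 (streamFilter (> 1) (repeat 1)))  -- never produces an element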

5 votes

To interpret type-level recursion one needs to find a "fixed point" for the CPO-valued list functor

F X = (1 + A_bot * X)_bot

If we reason inductively, we want the fixed point to be "least". If coinductively, "greatest".

Technically, this is done by working in the embedding-projection subcategory of CPO_bot, taking, e.g. for the "least", the colimit of the diagram of embeddings

0_bot |-> F 0_bot |-> F (F 0_bot) |-> ...

generalizing Kleene's fixed point theorem. For the "greatest" we would take the limit of the diagram of projections

0_bot <-| F 0_bot <-| F (F 0_bot) <-| ...

It turns out, however, that the "least" is isomorphic to the "greatest", for any F. This is the "bilimit" theorem (see e.g. Abramsky's "Domain Theory" survey paper).

Perhaps surprisingly, it turns out that the inductive or coinductive flavor comes from the liftings applied by F, rather than from the choice of least versus greatest fixed point. For instance, if x is the smash product and # is the smash sum,

F X = 1_bot # (A_bot x X)

would have as a bilimit the set of finite lists (up to iso).

[I hope I got the liftings right -- these are tricky ;-) ]
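A rough Haskell rendering of that last point (just a sketch; strictness annotations on the constructor fields stand in for the smashing above):

    -- With strict fields, every would-be infinite (or partial) list
    -- collapses to bottom, so only finite lists remain as proper values.
    data SList a = SNil | SCons !a !(SList a)

    finite :: SList Int
    finite = SCons 1 (SCons 2 SNil)   -- fine: a genuine finite value

    ones' :: SList Int
    ones' = SCons 1 ones'             -- type-checks, but denotes bottom:
                                      -- forcing it never yields a constructor

    slen :: SList a -> Int
    slen SNil         = 0
    slen (SCons _ xs) = 1 + slen xs

    main :: IO ()
    main = do
      print (slen finite)    -- 2
      -- print (slen ones')  -- diverges

With lazy fields (ordinary Haskell lists) the infinite definition is a perfectly good value; making the fields strict removes it, which is the sense in which the liftings, not least-versus-greatest, decide the flavor.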