6 votes

Last year I asked "Dependent types can prove your code is correct up to a specification. But how do you prove the specification is correct?". The top-voted answer presents the following reasoning:

The hope is that your specification is simple and small enough to judge by examination, while your implementation of that might be far larger.

That line of reasoning makes sense to me. Idris is the most accessible language for testing those concepts; yet, since it can be used pretty much like Haskell, it often leaves the programmer wandering through old concepts without knowing where to apply dependent types. Some real-world examples would help with that. So: what are good, concrete examples of programs that come up in practice, are simple to express as types, but complex to implement?

2
I wasn't sure whether questions asking for examples fall under the "too broad" rule, but I found several highly voted questions similar to this one, so I guessed it was OK? A, B, C, D, etc... – MaiaVictor
Enforcing invariants in types for data structures. The invariants may be easy to state, but tracking them through the code is tougher. This question about queues comes to mind. – Alec
What do you mean by "hard to implement"? If I interpret this directly, then we can have things like termination of Collatz sequences, which is expressed as a few lines of types in Agda. – András Kovács
@AndrásKovács noo! I just mean daily programming stuff that takes a ton of bug-prone lines but where you can express important invariants as dependent types, so that when it compiles you can feel safe it doesn't have that particular bug. Just looking for some inspiration. No hard math things. – MaiaVictor
OK, but then what's "hard to implement"? – András Kovács

2 Answers

4 votes

I'll answer this:

often it leaves the programmer wandering through old concepts, without knowing where to apply dependent types

Sure, types can be used to eliminate certain kinds of silly bugs, like applying a function to its arguments in the wrong order, but that is not what types are really for. Types structure your reasoning and let you zoom into the structure of your computation.

Say you process lists and use head and tail a lot, but these are partial functions, so you decide to switch to something safer and now you process NonEmpty a instead of [a]. Then you realize that you also do a lot of lookups (a partial function again) and that it won't be too troublesome to maintain the lengths of your lists statically in this particular case, so you switch to something like NonEmptyVec n a, where n is the statically known length of a vector. Now you have eliminated lots of possible bugs, but that is not the most important thing that happened.
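As a rough sketch of that step (the names Vec, Fin, vhead, and vlookup are mine, and a Peano-style Nat is used instead of GHC.TypeNats to keep it self-contained; the answer's NonEmptyVec n a corresponds to Vec ('S n) a here):

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

import Data.Kind (Type)

-- Type-level naturals, promoted to kinds by DataKinds.
data Nat = Z | S Nat

-- A vector indexed by its length.
data Vec (n :: Nat) (a :: Type) where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- head becomes total: the type rules out the empty vector.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- An index bounded by the length, so lookup cannot go out of range.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

vlookup :: Fin n -> Vec n a -> a
vlookup FZ     (VCons x _)  = x
vlookup (FS i) (VCons _ xs) = vlookup i xs

Nothing here changes what the code does at runtime; it only makes the "non-empty" and "index in range" assumptions part of the types.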

The most important thing is that now you can look at the type signatures and see what kind of input functions expect and what kind of output they produce. The possible behaviour of a function has been narrowed by its type signature, and now it's much easier to identify where in the pipeline the function belongs. Also, the more detailed your types are, the more encapsulated your entities become: a function of type NonEmpty a -> b no longer relies on the assumption that a non-empty list will be passed to it; instead, it explicitly requires this invariant to hold. You've turned a jelly-like, tightly coupled computation into a fine-grained one.

Rich types are there to guide humans (before, during, and after writing code) and to reduce their ability to produce bugs in the first place, not to spot bugs a posteriori. Simple types I consider unavoidable: even if you write code in a dynamically typed language, you still distinguish between a string and a picture.


Enough chit-chat; here is a real-world example of the usefulness and, more importantly, the naturalness of dependent types. I'm targeting an API with the help of the Servant library (which is an amazing piece of code and is itself dependently typed, so you might want to check it out):

type API a r = ReqBody '[JSON] (Operation a) :> Post '[JSON] (Result r)

So we send a request of type Operation a (automatically encoded to JSON by Servant) and receive a Result r response (automatically decoded from JSON by Servant). Operation and Result are defined like this:

data Method = Add | Get

data Operation a = Operation
  { method :: !Method
  , params :: !a
  }

data Result a = Result
  { result :: !a
  }

The task is to perform operations and receive responses from a server. The problem, however, is that when we Add things, the server responds with AddResults, and when we Get things, the server's response depends on what we passed along with the Get method. So we write a type family:

type family ResultOf m a where
  ResultOf Add a               = AddResults
  ResultOf Get DictionaryNames = Dictionaries

The code reads better than my description above. It only remains to lift Methods to the type level, so we define the appropriate singleton (which is how dependent types are currently emulated in Haskell):

data SMethod m where
  SAdd :: SMethod Add
  SGet :: SMethod Get

And here is the type signature of the main function (with lots of unrelated things omitted):

perform :: SMethod m -> a -> ClientM (ResultOf m a)

perform receives a method in singleton form and some value, and returns a computation in Servant's ClientM monad. This computation returns a result whose type depends on both the method and the type of the value: if we SAdd, we get AddResults; if we SGet DictionaryNames, we get Dictionaries. Very sensible and very natural: there is no need to invent a place to apply dependent types, because the task loudly requires them.
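To make the dispatch concrete, here is a self-contained sketch of the same idea with Servant stripped away: a pure function instead of ClientM, and made-up stand-ins for the payload types. Matching on the singleton tells the type checker which ResultOf equation applies, so each branch can return the corresponding concrete type.

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeFamilies #-}

data Method = Add | Get

-- Hypothetical payload types; the real ones live in the answer's codebase.
newtype DictionaryNames = DictionaryNames [String]
newtype Dictionaries    = Dictionaries    [(String, [String])]
newtype AddResults      = AddResults      Int

type family ResultOf (m :: Method) a where
  ResultOf 'Add a               = AddResults
  ResultOf 'Get DictionaryNames = Dictionaries

data SMethod (m :: Method) where
  SAdd :: SMethod 'Add
  SGet :: SMethod 'Get

-- Pattern matching on SAdd/SGet refines m, so ResultOf m DictionaryNames
-- reduces to AddResults or Dictionaries respectively in each branch.
performPure :: SMethod m -> DictionaryNames -> ResultOf m DictionaryNames
performPure SAdd _                       = AddResults 0
performPure SGet (DictionaryNames names) = Dictionaries [ (n, []) | n <- names ]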

3 votes

This is odd to me, because my problem is that dependent types are needed everywhere. If you do not see that, then look at programs this way.

Say we have f :: Ord a => [a] -> [a] (I will use Haskell notation). What do we know about this function f? In other words, what can you predict about applications such as f [], f [5,8,7], f [1,1,2,2]? If you know that f x = [4,6,8], then what can you say about x? As you can observe, we know little.

Then suppose I told you the real name of f is sort. What can you tell me then about those same examples? What can you tell me about ys in relation to xs in f xs = ys? Now you know a lot, but where did this information come from? All I did was change the name of the function; this has no significance in the formal meaning of the program.

All this new information came from what you know about sorting. You know two defining characteristics:

  1. sort xs is a permutation of xs.
  2. sort xs is monotonically increasing.

We can use dependent typing to prove both of these properties for sort. Then, it isn't just a matter of our extrinsic understanding of sorting; the meaning of sorting becomes an intrinsic part of the program.
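As a sketch of the second property (the name Ascending and the GHC.TypeNats encoding are my own; Idris or Agda would express this more directly, and the permutation property needs more machinery), here is how "monotonically increasing" can be made part of a type, so that only sorted lists can be constructed at all:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeOperators #-}

import Data.Proxy (Proxy (..))
import GHC.TypeNats (Nat, type (<=))

-- Sorted by construction: every element must be at least the bound lo,
-- and consing an element raises the bound for the rest of the list.
data Ascending (lo :: Nat) where
  Nil  :: Ascending lo
  Cons :: (lo <= n) => Proxy n -> Ascending n -> Ascending lo

-- Accepted, because 0 <= 1 <= 3 <= 7.
ok :: Ascending 0
ok = Cons (Proxy @1) (Cons (Proxy @3) (Cons (Proxy @7) Nil))

-- Rejected at compile time if uncommented, because 3 <= 2 does not hold:
-- bad :: Ascending 0
-- bad = Cons (Proxy @3) (Cons (Proxy @2) Nil)

In Haskell a real sort returning this type would also need run-time singletons for the elements; in Idris the same idea applies directly to ordinary values, which is exactly the intrinsic meaning described above.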

Catching bugs is a side effect. The real objective is to specify and formalise, as part of the program, what we have to know in our heads anyway.

Carefully reconsider the programs you have already written. What are the facts that make your program work that are only known in your head? These are all candidate examples.