47
votes

Dependent types are often advertised as a way to assert that a program is correct up to a specification. So, for example, suppose you are asked to write code that sorts a list: you can prove that the code is correct by encoding the notion of "sorted" as a type and writing a function of a type such as List a -> SortedList a. But how do you prove that the specification, SortedList, is correct? Wouldn't it be the case that the more complex your specification is, the more likely your encoding of it as a type is to be incorrect?
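For concreteness, here is a minimal Haskell sketch of what I mean by encoding "sorted" as a type. It is my own illustration, restricted to type-level naturals; a dependently typed language such as Agda or Idris would let you do the same for arbitrary ordered types:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Nat, type (<=))

    -- A list of type-level naturals that is sorted by construction:
    -- the index 'lo' is a lower bound on every element, and each cons
    -- demands a compile-time proof that the new head respects it.
    data Sorted (lo :: Nat) where
      SNil  :: Sorted lo
      SCons :: (lo <= x) => Proxy x -> Sorted x -> Sorted lo

    ok :: Sorted 0
    ok = SCons (Proxy :: Proxy 3) (SCons (Proxy :: Proxy 5) SNil)

    -- Rejected by the type checker, because 5 <= 3 does not hold:
    -- bad :: Sorted 0
    -- bad = SCons (Proxy :: Proxy 5) (SCons (Proxy :: Proxy 3) SNil)

The question stands even for this tiny example: how do I know that Sorted itself says what I intended?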

7
@closer I understand the question might not be clear, but would you comment on what you didn't understand and/or suggest how I could improve it? – MaiaVictor
The mentioned issue - "how to tell my spec is right" - is sometimes used by detractors of formal methods. In the worst case the argument goes as far as saying "having a formal spec is useless, since it is no simpler than the actual implementation". While there's indeed some truth in that, the argument conveniently neglects the fact that there are partial specs (e.g. "no runtime errors, ever") which are very simple, important, and frequently violated by complex software. – chi
@chi I understand, but I don't know about that. It is just a legit question I had. – MaiaVictor
Indeed it is! I was only trying to provide some context to the question. – chi
@Viclib mazzo.li/posts/AgdaSort.html, read that. Very accessible and a wonderful delight. – Musa Al-hassy

7 Answers

37
votes

This is the static, type-system version of the question: how do you tell that your tests are correct?

The only answer I can honestly give is: yes, the more complex and unwieldy your specification, the more likely you are to have made a mistake. You can mess up writing something in a type-theoretic formalism just as easily as you can in formalizing the description of your program as an executable function.

The hope is that your specification is simple and small enough to judge by examination, while your implementation of it may be far larger. It helps that, once you have some "seed" ideas formalized, you can show that ideas derived from them are correct. From that point of view, the more readily you can mechanically and provably derive parts of your specification from simpler parts, and ultimately derive your implementation from your specification, the more likely you are to get a correct implementation.

But it can be unclear how to formalize something, with the effect that either you make a mistake translating your ideas into the formalism – you think you proved one thing when you actually proved another – or you find yourself doing type theory research just to formalize an idea.

19
votes

This is a problem with any specification language (even English), not just dependent types. Your own post is a good example: it contains an informal specification of "sort function" that only requires the result to be sorted, which is not what you want (\xs -> [] would qualify). See e.g. this post from Twan van Laarhoven's blog.
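To make the loophole concrete, here is a one-line Haskell sketch (my illustration, not from the question) of a function that satisfies "the result is sorted" while obviously not sorting:

    -- Vacuously satisfies any spec that only demands a sorted result,
    -- since the empty list is sorted. A faithful spec must also require
    -- the output to be a permutation of the input.
    badSort :: [a] -> [a]
    badSort _ = []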

14
votes

I think it's the other way around: a well-typed program can't prove nonsense (assuming the system is consistent), while specifications can be inconsistent or just silly. So it's not "how do I make sure this piece of code reflects my platonic ideas?", but rather "how do I make sure my ideas meaningfully project onto a well-founded plane of pure syntactic rules?". How do you make sure the bird you see is a mockingbird [for some supplied notion of mockingbirdness]? Well, study birds and raise your chances of being right. But as always with humans, you can't be 100% sure.

Type theory is a way to mitigate the imperfection of the human mind by introducing formal rules, machine-checked proofs (there is a very relevant paper on this), and other machinery that allows you to focus and thus to simplify problems a lot (as Brouwer said: "Mathematics is nothing more, nothing less, than the exact part of our thinking"), but you can't expect any tool to make your thoughts "right", because there is just no uniform notion of rightness. IOW, there is no way to formally connect the informal and the formal: being informal is like being inside the IO monad — there is no escape.

So it's not "does this syntax reflect my very precise semantics?", but rather "can I attach my raw semantics to this strongly structured syntax?". Programs are proper material objects, while ideas are cumbersome approximations that can become proper material objects only by convention. So we form some basis using conventions, and then we just trust it, because it's much more sensible to trust a small subset of all your numerous ideas than to trust all of them.

7
votes

One thing formal methods can do that I don't think others have touched on is help relate simple things to more complex ones. You may not know for sure how to specify exactly how a Set data structure should behave, but if you can write a simple version based on sorted lists, you can then prove that your fancy version based on balanced search trees relates to it correctly through the toList function. That is, you can use formal methods to transfer your confidence in sorted lists to balanced search trees.
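As a sketch of that idea (my own illustration, with QuickCheck properties standing in for the machine-checked proofs a dependently typed language would give you): take the simple version to be sorted, duplicate-free lists and the fancy version to be Data.Set's balanced trees, and check that insertion commutes with toList:

    import           Data.List       (insert)
    import qualified Data.Set        as Set
    import           Test.QuickCheck (quickCheck)

    -- The simple, obviously-correct model: a sorted, duplicate-free list.
    insertModel :: Ord a => a -> [a] -> [a]
    insertModel x xs
      | x `elem` xs = xs            -- sets ignore duplicates
      | otherwise   = insert x xs   -- Data.List.insert keeps sorted order

    -- The fancy implementation is Data.Set's balanced tree; toList
    -- relates the two representations.
    prop_insertAgrees :: Int -> [Int] -> Bool
    prop_insertAgrees x xs =
      Set.toList (Set.insert x s) == insertModel x (Set.toList s)
      where s = Set.fromList xs

    main :: IO ()
    main = quickCheck prop_insertAgrees

Whatever confidence you have in the sorted-list model now carries over, operation by operation, to the tree version.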

6
votes

How do you prove that math is correct? That is, how do you prove that integer addition is the correct way to count apples, or that real addition is the correct way to add weights? There is always an interface between the formal/mathematical and the informal/real. It requires skill and mathematical/physical taste to find the appropriate formalism for solving a particular problem. Formal methods won't eliminate that.

The value of formal methods is twofold:

  1. You're not going to know if your program is correct unless you know what properties it actually satisfies. Before you can know whether your sort routine is "correct", you first have to know what it actually does. Any procedure for finding that out is going to be a formal method (even unit testing!), so people who reject "formal methods" are really just limiting themselves to a tiny subset of the available methods (see the sketch after this list).

  2. Even when you know how to find out what a program actually does, people make mistakes in their mathematical reasoning (we are not rational creatures, whatever certain ideologies may claim), so it's helpful to have a machine check us. This is the same reason we use unit tests: it's nice to run a desk check and make sure the program does what we want, but having the computer do the check and tell us whether the result is correct helps prevent mistakes. Letting the computer check our proofs about the behavior of the program is helpful for exactly the same reason.
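As the sketch promised above (my own illustration; the properties are QuickCheck tests here, but the same statements could be machine-checked proofs): finding out what a sort routine actually does means pinning down at least two properties, ordering and permutation.

    import           Data.List       (sort)
    import           Test.QuickCheck (quickCheck)

    -- mySort stands in for the routine under scrutiny; here it is just
    -- a naive insertion sort, purely for illustration.
    mySort :: Ord a => [a] -> [a]
    mySort = foldr ins []
      where
        ins x []                 = [x]
        ins x (y:ys) | x <= y    = x : y : ys
                     | otherwise = y : ins x ys

    isSorted :: Ord a => [a] -> Bool
    isSorted xs = and (zipWith (<=) xs (drop 1 xs))

    -- Together these two properties say "mySort sorts": the output is
    -- ordered, and it is a permutation of the input (checked against
    -- the reference sort).
    prop_sorted, prop_permutes :: [Int] -> Bool
    prop_sorted   xs = isSorted (mySort xs)
    prop_permutes xs = sort (mySort xs) == sort xs

    main :: IO ()
    main = mapM_ quickCheck [prop_sorted, prop_permutes]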

2
votes

Coming late to the party, but AFAICT, no one has yet mentioned another important aspect: in the context of program verification, a bug in the spec is not always too terrible, because you can use the code to check the spec.

IOW, the proof doesn't say "the code is right", but "the code and the spec are mutually consistent". So, in order for a bug in the spec to go unnoticed, it has to be one of:

  • an underspecified spec.
  • a bug in the spec matched by a corresponding bug in the code.

As someone else pointed out: the problem is the same for tests.
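A small illustration of the code checking the spec (hypothetical, and using tests in the same spirit): suppose we mis-state the spec of sorting as "the head never changes". Run against a correct sort, the bug shows up in the spec, not the code:

    import Data.List (sort)
    import Test.QuickCheck (quickCheck)

    -- A buggy spec: sorting supposedly never changes the first element.
    prop_buggySpec :: [Int] -> Bool
    prop_buggySpec xs = take 1 (sort xs) == take 1 xs

    -- quickCheck finds a counterexample such as [1,0] at once; since we
    -- trust sort here, the mismatch exposes the spec as the buggy side.
    main :: IO ()
    main = quickCheck prop_buggySpec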

1
vote

Suppose your function is not a top-level one, but is used by somebody else as part of some module which also has a correctness proof. The latter must use the correctness proof of your function, and if yours is wrong, the module will not compile. The module itself can still have mistakes, but that is not your problem anymore.
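A weak Haskell analogue of this composition, as a sketch (a dependently typed language would carry an actual proof term instead of a hidden constructor):

    module SortedList (SortedList, toList, sortList) where

    import Data.List (sort)

    -- The constructor is not exported, so the only way a client module
    -- can obtain a SortedList is through sortList. A client whose type
    -- demands a SortedList therefore cannot be handed an unverified
    -- plain list: the type checker stops it at the module boundary,
    -- just as a broken correctness proof would stop compilation.
    newtype SortedList a = SortedList { toList :: [a] }

    sortList :: Ord a => [a] -> SortedList a
    sortList = SortedList . sort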