1
vote

As far as I understand, almost all dependently typed languages check convertibility using weak-head normal form. Why is that? Why is weak-head normalization sufficient (it does not seem sufficient to me)? What can you recommend reading on this?

2
You have to remember that normalization can cause an exponential blow-up. There's no need to spend all that time computing if the head constructors are already distinct. – gallais
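A small illustration of the point (hypothetical code, not from the thread): with lazy constructors, comparing only the outermost constructor forces weak-head evaluation, while fully normalizing the larger term would build exponentially many constructors.

```haskell
-- Nat numerals built by repeated doubling: `big n` denotes 2^n.
data Nat = Zero | Suc Nat

big :: Int -> Nat
big 0 = Suc Zero
big n = double (big (n - 1))
  where
    double Zero    = Zero
    double (Suc m) = Suc (Suc (double m))

-- Inspect only the outermost constructor of each side.
sameHead :: Nat -> Nat -> Bool
sameHead Zero    Zero    = True
sameHead (Suc _) (Suc _) = True
sameHead _       _       = False
```

Forcing `big 30` to weak-head normal form takes about 30 steps (one `double` per level), whereas its full normal form contains 2^30 successors; so `sameHead Zero (big 30)` rejects the pair almost immediately, without paying the cost of normalization.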

2 Answers

3
votes

Weak-head normalization is sufficient and more efficient for the base cases.

x1 = x1 : t
x1 = x2 : t, x1 ≠ x2
x1 t1 ... tn = x2 : t
x1 t1 ... tn = x2 s1 ... sn : t, x1 ≠ x2

For the recursive case, the conversion function will be called on the pairs of subterms (ti, si) anyway, so there is no need to reduce them eagerly.

x1 t1 ... tn = x1 s1 ... sn : t
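The cases above can be sketched as a small checker. The term representation, `whnf`, and the naive (capture-unsafe) substitution below are illustrative assumptions, not any particular system's implementation:

```haskell
data Term
  = Var String
  | Lam String Term
  | App Term Term

-- Reduce only the outermost redex chain: stop at a lambda, or at a
-- variable applied to still-unreduced arguments.
whnf :: Term -> Term
whnf (App f a) =
  case whnf f of
    Lam x body -> whnf (subst x a body)
    f'         -> App f' a
whnf t = t

-- Naive substitution; fine for the closed examples below, but a real
-- checker needs capture avoidance (e.g. de Bruijn indices).
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y    = s
  | otherwise = Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y b)
  | x == y    = Lam y b
  | otherwise = Lam y (subst x s b)

-- Split a weak-head normal form into its head and spine of arguments.
spine :: Term -> (Term, [Term])
spine (App f a) = let (h, as) = spine f in (h, as ++ [a])
spine t         = (t, [])

-- Convertibility: weak-head reduce both sides, compare heads; on a
-- matching variable head, recurse into the argument pairs (ti, si),
-- which are themselves reduced only on demand.
conv :: Term -> Term -> Bool
conv t u =
  case (spine (whnf t), spine (whnf u)) of
    ((Var x, ts), (Var y, us)) ->
      x == y && length ts == length us && and (zipWith conv ts us)
    ((Lam x b, []), (Lam y c, [])) ->
      conv b (subst y (Var x) c)
    _ -> False
```

The fall-through `False` case covers exactly the negative base cases listed above (distinct variables, application against a bare variable, applications with distinct heads), and the `Var`/`Var` case is the recursive one.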

You can read more about this around page 230 of Advanced Topics in Types and Programming Languages, edited by Benjamin Pierce. You can also find many papers on type inference and normalization for pure type systems on the web.

This is a question for Theoretical Computer Science though.

1
votes

A moderator can merge this answer with the one above. The links below are in alphabetical order.

  • http://www.informatik.uni-trier.de/~ley/pers/hd/j/Jutting:L=_S=_van_Benthem
  • http://www.informatik.uni-trier.de/~ley/pers/hd/p/Pagano:Miguel
  • http://www.cs.le.ac.uk/people/ps56/publications.xml
  • http://www.lix.polytechnique.fr/~vsiles/papers/
  • http://www.cs.rhul.ac.uk/home/zhaohui/type.html