16 votes

I was playing around with van Laarhoven lenses and ran into a problem where the type-checker rejects the eta-reduced form of a well-typed function:

{-# LANGUAGE RankNTypes #-}

import Control.Applicative

type Lens c a = forall f . Functor f => (a -> f a) -> (c -> f c)

getWith :: (a -> b) -> ((a -> Const b a) -> (c -> Const b c)) -> (c -> b)
getWith f l = getConst . l (Const . f)

get :: Lens c a -> c -> a
get lens = getWith id lens

The above type-checks, but if I eta-reduce get to

get :: Lens c a -> c -> a
get = getWith id

Then GHC (7.4.2) complains that

Couldn't match expected type `Lens c a'
            with actual type `(a0 -> Const b0 a0) -> c0 -> Const b0 c0'
Expected type: Lens c a -> c -> a
  Actual type: ((a0 -> Const b0 a0) -> c0 -> Const b0 c0)
               -> c0 -> b0
In the return type of a call of `getWith'
In the expression: getWith id

I can understand that if the function didn't have an explicit type signature, then eta-reduction combined with the monomorphism restriction might confuse type inference, especially when higher-rank types are involved, but in this case I'm not sure what's going on.

What causes GHC to reject the eta-reduced form, and is this a bug/limitation in GHC or some fundamental problem with higher-rank types?

Does it make a difference when you write the Lens type expanded? – Ingo

2 Answers

12 votes

I'd say that the reason isn't the η-reduction itself; the problem is that with RankNTypes you lose principal types and full type inference.

The problem with type inference for higher-rank types is that, when inferring the type of λx.M with the rule

     Γ, x:σ |- M:ρ
----------------------
   Γ |- λx.M : σ→ρ

we don't know what type σ we should choose for x. In the Hindley-Milner type system we restrict ourselves to quantifier-free types for x, and then the inference is possible, but not with arbitrarily ranked types.
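
A standard illustration of why guessing σ is the sticking point (this example is not from the answer, just the classic one): a binder that would need a polytype is rejected until we write the rank-2 type ourselves.

{-# LANGUAGE RankNTypes #-}

-- Rejected by inference: f would need the polytype (forall a. a -> a),
-- and Hindley-Milner never guesses a quantified type for an un-annotated binder.
-- bad = \f -> (f True, f 'x')

-- Accepted once we supply the rank-2 type explicitly:
good :: (forall a. a -> a) -> (Bool, Char)
good f = (f True, f 'x')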

So even with RankNTypes, when the compiler encounters a term without explicit type information, it falls back to Hindley-Milner and infers a rank-1 principal type. However, in your case the type you need for getWith id is rank-2, so the compiler can't infer it by itself.

Your explicit case

get lens = getWith id lens

corresponds to the situation where the type of x is given explicitly, as in λ(x:σ). M x. The compiler knows the type of lens (from the signature of get) before it type-checks getWith id lens.
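
In other words (a minimal sketch that reuses the question's definitions; get' is just an illustrative name), the eta-expanded definition desugars to a lambda whose binder receives its polytype from the signature before the body is checked:

-- Equivalent to `get lens = getWith id lens`: checking the lambda against the
-- signature pushes the polytype `Lens c a` onto the binder `lens`, so the body
-- is checked with the rank-2 type of `lens` already known.
get' :: Lens c a -> c -> a
get' = \lens -> getWith id lens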

In the reduced case

get = getWith id

the compiler has to infer the type of getWith id on its own, so it sticks with Hindley-Milner and infers an inadequate rank-1 type.

5 votes

Actually it's quite straightforward: GHC infers a type for each expression, then starts to unify them across the =. This always works fine when only rank-1 types are around, because the most polymorphic such type is chosen (which is well-defined), so any unification that is possible at all will succeed.

But it will not choose a more general rank-2 type even when that would be possible, so getWith id is inferred to have type ((a -> Const a a) -> c -> Const a c) -> (c -> a) rather than (forall f . Functor f => (a -> f a) -> c -> f c) -> (c -> a). I suppose that if GHC did such things, traditional rank-1 type inference wouldn't work at all anymore, or it would simply never terminate, because there is no single well-defined most polymorphic rank-n type.
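
To make the contrast concrete, here is a hedged sketch reusing the question's definitions (the names getWithId and getViaRank1 are mine, not from the answer) that spells out the inferred rank-1 type as an explicit signature; it compiles, and applying it to a lens still works because the lens argument is instantiated at the call site:

-- The rank-1 type that plain Hindley-Milner inference assigns to `getWith id`,
-- written out explicitly; this definition type-checks.
getWithId :: ((a -> Const a a) -> c -> Const a c) -> c -> a
getWithId = getWith id

-- Applying it to a Lens works, because `lens` is instantiated (at f ~ Const a)
-- when it is passed as an argument:
getViaRank1 :: Lens c a -> c -> a
getViaRank1 lens = getWithId lens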

That doesn't explain why it can't see from get's signature that it needs to choose a rank-2 type here, but presumably there is a good reason for that as well.