
I'm using the operational monad by Heinrich Apfelmus. I'd like to parameterize the interpreter with the monad for the result type. The following version of my code compiles:

{-# LANGUAGE GADTs #-}

import Control.Monad.Operational

data EloI a where
    Display :: Int -> EloI ()

type Elo a = Program EloI a

interpret :: Monad m => (Int -> m ())
                     -> Elo a
                     -> Int
                     -> m a
interpret display = interp
  where
    interp :: Monad m => Elo a -> Int -> m a
    interp = eval . view
    eval :: Monad m => ProgramView EloI a -> Int -> m a
    eval (Display i :>>= is) s = interp (is ()) s

Now I change the last line to

    eval (Display i :>>= is) s = display i >> interp (is ()) s

and type inference no longer succeeds; I get the error

Could not deduce (m ~ m1) from the context (Monad m)
bound by the type signature for interpret :: Monad m => (Int -> m ()) -> Elo a -> Int -> m a (...)

When I remove the type signature for interp, I get an additional error (could not deduce (a1 ~ a)).
When I change every m to IO (as in the tic-tac-toe example for the operational monad), it compiles again.
Am I trying something that does not make sense, or can I provide some hint to GHC? I have to admit I'm not sure I need this flexibility.


1 Answer


That's because each m in the local type signatures is a fresh type variable, so those signatures promise to work with any Monad. But once you use display, eval can only work with the specific monad that display uses. It should work if you either a) remove the local type signatures, or b) bring the type variable m into scope:

{-# LANGUAGE ScopedTypeVariables #-}
...
interpret :: forall m a. Monad m => (Int -> m ()) -> Elo a -> Int -> m a

With m (and a) scoped over the where clause, the local signatures for interp and eval must then drop their own Monad m => constraints, so that they refer to this same m rather than quantifying a fresh one.
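
To see the scoping fix in isolation, here is a minimal, self-contained sketch of the same idea that does not depend on the operational package (run and go are hypothetical names standing in for interpret and interp):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- Analogue of the problem above: 'run' threads a caller-supplied
-- monadic action through a local helper. The explicit 'forall m.'
-- brings m into scope, so the signature on 'go' refers to the *same*
-- m as the action passed in, instead of promising to work for any
-- fresh monad.
run :: forall m. Monad m => (Int -> m ()) -> [Int] -> m ()
run display = go
  where
    go :: [Int] -> m ()  -- this m is run's m, not a fresh variable
    go []       = return ()
    go (i : is) = display i >> go is
```

Without the forall, the m in go's signature would be implicitly quantified afresh, reproducing the "Could not deduce (m ~ m1)" error as soon as go calls display.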