Background Context:
Mathematically, I can see the need for associativity to keep things simple without relying on the order of composition. All the example monad implementations I've come across (blogs, books, etc.) seem to just work. It seems as though simply defining `map` and `flatMap` (Scala) or `fmap` and `>>=` (Haskell) is enough to make something a working monad.
From what I gather this isn't entirely true, but I can't come up with a counterexample showing the "need" for the law via a failure case.
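To make the point concrete, here is the kind of instance I mean. `Counted` is just a type I made up for this question; GHC accepts the instance whether or not the laws actually hold for it:

```haskell
-- A made-up example: GHC accepts this instance without ever
-- checking whether the monad laws hold for it.
newtype Counted a = Counted (a, Int) deriving (Show, Eq)

instance Functor Counted where
  fmap f (Counted (a, n)) = Counted (f a, n)

instance Applicative Counted where
  pure a = Counted (a, 0)
  Counted (f, m) <*> Counted (a, n) = Counted (f a, m + n)

instance Monad Counted where
  Counted (a, n) >>= f =
    let Counted (b, m) = f a
    in Counted (b, n + m + 1)  -- the "+ 1" is arbitrary; the compiler never complains
```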
Wadler's paper mentions the possibility of an incorrect implementation.
The Haskell Wiki mentions the following:
The third law is a kind of associativity law for `>>=`. Obeying the three laws ensures that the semantics of the do-notation using the monad will be consistent. Any type constructor with return and bind operators that satisfy the three monad laws is a monad. In Haskell, the compiler does not check that the laws hold for every instance of the Monad class. It is up to the programmer to ensure that any Monad instance they create satisfies the monad laws.
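To make sure I'm reading that correctly, here is my understanding of the associativity law and its do-notation form written out in Haskell (the names `lhs`, `rhs`, and `doForm` are just my own):

```haskell
-- The two sides of the associativity law, plus the do-notation
-- version that (as I understand it) both sides should be equal to.
lhs, rhs, doForm :: Monad m => m a -> (a -> m b) -> (b -> m c) -> m c
lhs    m f g = (m >>= f) >>= g
rhs    m f g = m >>= (\x -> f x >>= g)
doForm m f g = do { x <- m; y <- f x; g y }
```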
Question(s):
- What is an example of an incorrect monad implementation that looks correct but breaks associativity?
- How does this impact `do`-notation?
- How does one validate the correctness of a monad implementation? Do we need to write test cases for each new monad, or can a generic one be written to check that any monad implementation is correct? (Something like the sketch after this list is roughly what I have in mind.)
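For the last question, this is roughly the kind of property-based check I was imagining, specialised here to `Maybe` with `Int`-valued functions (my own arbitrary choice of monad and types):

```haskell
import Test.QuickCheck

-- Associativity check for one concrete monad (Maybe) at fixed types.
-- applyFun unwraps a QuickCheck-generated function.
prop_bindAssoc :: Maybe Int -> Fun Int (Maybe Int) -> Fun Int (Maybe Int) -> Bool
prop_bindAssoc m f' g' =
  ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))
  where
    f = applyFun f'
    g = applyFun g'

main :: IO ()
main = quickCheck prop_bindAssoc
```

As far as I can tell, this only checks one law for one concrete monad at fixed types, so I'd still have to repeat it for every new instance, which is partly why I'm asking whether a genuinely generic check is possible.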