4 votes

Functional Programming in C++, at page 214, referring to an expected<T,E> monad (essentially the same as Haskell's Either), reads

[...] as soon as any of the functions you're binding to returns an error, the execution will stop and return that error to the caller.

Then, in a caption just below, it reads

If you call mbind [equivalent to Haskell's >>=] on an expected that contains an error, mbind won't even invoke the transformation function; it will just forward that error to the result.

which seems to "adjust" what was written earlier. (I'm pretty sure that either LYAH or RWH points out somewhere that there's no short-circuiting; if you remember where, please remind me.)

Indeed, my understanding, from Haskell, is that in a chain of monadic bindings, all of the bindings happen for real; what they then do with the function passed to them as a second argument is up to the specific monad.

In the case of Maybe and Either, when a binding is passed a Nothing or a Left x as its first argument, the second argument (the function) is ignored.

Still, in these two specific cases, I wonder if there's a performance penalty in doing something like this

justPlus1 = Just . (+1)
turnToNothing = const Nothing

result = Just 3 >>= turnToNothing >>= justPlus1
                                  >>= justPlus1
                                  >>= justPlus1
                                  >>= justPlus1
                                  >>= justPlus1

as in these cases the chain can't really do anything other than what it does, given that

Nothing >>= _ = Nothing
Left l >>= _ = Left l
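One way to see what actually happens at runtime is to attach Debug.Trace.trace to the function being bound. This is a quick sketch (noisyPlus1 is a name introduced here for illustration, playing the role of justPlus1 above):

```haskell
import Debug.Trace (trace)

-- Like justPlus1 above, but announces when it is actually invoked.
noisyPlus1 :: Int -> Maybe Int
noisyPlus1 x = trace "noisyPlus1 called" (Just (x + 1))

turnToNothing :: Int -> Maybe Int
turnToNothing = const Nothing

main :: IO ()
main =
  -- Prints only "Nothing": the trace message never appears, because
  -- (>>=) pattern-matches on the Nothing and ignores its second argument.
  print (Just 3 >>= turnToNothing >>= noisyPlus1 >>= noisyPlus1)
```

Running this prints nothing but the final Nothing, showing that the binds after turnToNothing never invoke their function argument.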
I think the answer is that you don't pay anything for what comes after the Nothing, but you do pay for unwinding the stack before that; I'm not entirely sure, though. Continuation-passing style is supposed to be more efficient in some contexts; for example, parser combinator libraries use CPS instead of Either for errors. – Hjulle

@Hjulle, and is this CPS thing a monad too? – Enlico

@EnricoMariaDeAngelis: Yeah, in the most general case, continuations form a monad/transformer Cont/ContT; if you have an existing monad and don't want to rewrite it to use CPS internally to avoid costly left-associated binds, you can use Codensity to do it automatically (shallowly). – Jon Purdy
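The idea mentioned in the comments can be sketched as follows: instead of returning Left/Right and having every bind pattern-match on the result, a CPS-encoded computation is handed a failure continuation and a success continuation, so a failure jumps straight to the caller's handler with no per-bind inspection. (CpsEither, throwE, and toEither are names made up for this sketch, not a library API.)

```haskell
{-# LANGUAGE RankNTypes #-}

-- A CPS encoding of Either: a computation is told what to do on failure
-- and what to do on success, instead of returning a Left/Right value.
newtype CpsEither e a =
  CpsEither { runCps :: forall r. (e -> r) -> (a -> r) -> r }

instance Functor (CpsEither e) where
  fmap g m = CpsEither (\err ok -> runCps m err (ok . g))

instance Applicative (CpsEither e) where
  pure x = CpsEither (\_ ok -> ok x)
  mf <*> mx = CpsEither (\err ok -> runCps mf err (\g -> runCps mx err (ok . g)))

instance Monad (CpsEither e) where
  m >>= f = CpsEither (\err ok -> runCps m err (\x -> runCps (f x) err ok))

-- Failure invokes the error continuation directly: no later bind ever
-- has to inspect and re-wrap the error.
throwE :: e -> CpsEither e a
throwE e = CpsEither (\err _ -> err e)

toEither :: CpsEither e a -> Either e a
toEither m = runCps m Left Right

main :: IO ()
main = do
  print (toEither (pure (2 :: Int) >>= \n -> pure (n + 1)))        -- Right 3
  print (toEither (throwE "boom" >>= \n -> pure (n + (1 :: Int)))) -- Left "boom"
```

This is the same trade that Cont/ContT and Codensity make in general: the "is it an error?" check at every bind is replaced by a direct jump through the continuation.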

2 Answers

2 votes

Consider the following expression:

result :: Maybe Int
result = x >>= f >>= g >>= h

In that expression, of course, x :: Maybe a for some a, and each of f, g, and h is a function, with h returning Maybe Int; the intermediate types of the pipeline could be anything wrapped in a Maybe. Perhaps f :: String -> Maybe String, g :: String -> Maybe Char, h :: Char -> Maybe Int.

Let's make the associativity explicit as well:

result :: Maybe Int
result = ((x >>= f) >>= g) >>= h

To evaluate the expression, each bind (>>=) must in fact be called, but not necessarily the functions f, g, or h. Ultimately the bind to h needs to inspect its left-hand argument to decide whether it is Nothing or Just something; in order to determine that we need to call the bind to g, and to decide that we need to call the bind to f, which must at least look at x. But once any one of these binds produces Nothing, we pay only for checking Nothing at each step (very cheap), and not for calling the (possibly expensive) downstream functions.

Suppose that x = Nothing. Then the bind to f inspects that, sees Nothing, and doesn't bother to call f at all. But we still need to bind the result of that in order to know if it's Nothing or not. And this continues down the chain until finally we get result = Nothing, having called >>= three times but none of the functions f, g, or h.
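This can be checked directly: with the hypothetical types above, define f, g, and h as undefined, so that evaluation would crash if any of them were ever called (a small sketch reusing the same names):

```haskell
-- The pipeline from above, with f, g, h deliberately left undefined:
-- if any of them were actually called, evaluation would crash.
f :: String -> Maybe String
f = undefined

g :: String -> Maybe Char
g = undefined

h :: Char -> Maybe Int
h = undefined

x :: Maybe String
x = Nothing

result :: Maybe Int
result = ((x >>= f) >>= g) >>= h

main :: IO ()
main = print result  -- prints "Nothing"; none of f, g, h was ever forced
```

All three binds run, each merely pattern-matching on a Nothing, and the program terminates normally.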

Either behaves similarly with Left values, and other monads may have different behaviors. A list may call each function once, many times, or not at all; the tuple monad ((,) w) calls each function exactly once, having no short-circuiting or other multiplicity features.
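For instance (a small sketch; the names are introduced here for illustration):

```haskell
import Data.Monoid (Sum(..))

-- List monad: the function runs once per element of the left-hand list...
listResult :: [Int]
listResult = [1, 2, 3] >>= \n -> [n, n * 10]

-- ...and zero times for the empty list.
emptyResult :: [Int]
emptyResult = [] >>= \n -> [n, n * 10]

-- Tuple monad ((,) w, for a Monoid w): the function runs exactly once,
-- and the accumulated values are combined with (<>).
tupleResult :: (Sum Int, String)
tupleResult = (Sum 1, "a") >>= \s -> (Sum 2, s ++ "b")

main :: IO ()
main = do
  print listResult   -- [1,10,2,20,3,30]
  print emptyResult  -- []
  print tupleResult  -- (Sum {getSum = 3},"ab")
```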

1 vote

You seem to have a misunderstanding of how the Monad instances for these types work in Haskell. You say:

Indeed, my understanding, from Haskell, is that in a chain of monadic functions, all of the functions are called,

But this demonstrably isn't the case. Indeed any time you compute

Nothing >>= f

where f is any function of type a -> Maybe b, then it is computed according to the implementation of >>= for the Maybe monad, which is:

Just x >>= f = f x
Nothing >>= f = Nothing

So f will indeed be called in the Just case, but not in the Nothing case. So we see that there is indeed "short circuiting". Indeed, since Haskell is lazy, every function "short circuits" by default - nothing is ever computed unless it's needed to produce a result.
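Both points can be demonstrated with undefined (a quick sketch):

```haskell
-- (>>=) only pattern-matches the constructor, so in the Nothing case the
-- function is never touched; even undefined is safe here.
demo1 :: Maybe Int
demo1 = Nothing >>= undefined

-- Laziness goes further: the payload of the Just isn't forced either,
-- as long as the function doesn't look at it.
demo2 :: Maybe Int
demo2 = Just undefined >>= \_ -> Just 5

main :: IO ()
main = do
  print demo1  -- Nothing
  print demo2  -- Just 5
```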

It's an interesting question to ask about performance - and not a question I personally know how to answer. Certainly, as I just explained, none of the following functions in the chain will be evaluated once a Nothing is encountered - but performing the pattern matching to see this is unlikely to be free. Perhaps the compiler is able to optimise this away, by reasoning that it can abandon the entire calculation once it hits a Nothing. But I'm not sure.