0
votes

Example

sumArray :: Array Int -> State Int Unit
sumArray = traverse_ \n -> modify \sum -> sum + n

t1 :: Int
t1 = execState (do
  sumArray [1, 2, 3]
  sumArray [4, 5]
  sumArray [6]) 0
-- returns 21

module Main where

import Prelude
import Effect (Effect)
import Data.Foldable (fold, traverse_)
import Control.Monad.State (State, execState)
import Control.Monad.State.Class (modify)
import Data.Maybe (Maybe(..))
import Effect.Console (log)

main :: Effect Unit
main = log $ show t1

sumArray :: Array Int -> State Int Unit
sumArray = traverse_ \n -> modify \sum -> sum + n

t1 :: Int
t1 = execState (do
  sumArray [1, 2, 3]
  sumArray [4, 5]
  sumArray [6]) 0

{-
execState :: forall s a. State s a -> s -> s
execState (StateT m) s = case m s of Identity (Tuple _ s') -> s'

type State s = StateT s Identity
-}

Description

How I understand the evaluation of expression t1:

  1. Each sumArray invocation returns a state monad holding the sum of given Int array values.
  2. All three state monads are (somehow) unified into a single one, while accumulating intermediate sums.
  3. execState returns the overall Int sum, given State Int Unit and initial value as input.

Issue

I cannot quite understand step 2 in particular. According to do notation, an expression like sumArray [1, 2, 3] desugars to bind x \_ -> ..., so former input is ignored. If I write do with a different monad type like

t2 :: Maybe Int
t2 = do
  Just 3
  Just 4

, the compiler complains:

A result of type Int was implicitly discarded in a do notation block. You can use _ <- ... to explicitly discard the result.

, so the rules seem to be a bit different with t1.

Question

How exactly are the three separate state monads combined into a single one? More specifically: Why does the runtime calculate the overall sum of all intermediate state monad sum results, and not something like (1+2+3) * (4+5) * 6? In other words: where is the implicit + accumulator?

I feel like I'm missing some concept from Chapter 11: Monadic Adventures.


2 Answers

3
votes

@Bergi's answer already gives the essential explanation - but I thought it might be useful to expand on some parts of what he says, and to answer some of your questions more directly.

According to do notation, an expression like sumArray [1, 2, 3] desugars to bind x \_ -> ..., so former input is ignored.

This is in one sense perfectly true, but also betrays some misunderstandings.

For one thing, I find that quote misleadingly phrased - although it's perfectly acceptable in the context of the original source. It's not talking about how an expression like sumArray [1, 2, 3] "desugars" in and of itself, but about how successive lines ("statements") of a do block are desugared into a single expression that "combines" them - which seems to be in essence what your whole question is about. So yes, it's true - and basically the definition of do notation - that an expression like

do a <- x
   y

desugars to bind x \a -> y (where we imagine that y is some more complex expression which presumably involves a). And likewise that

do x
   y

desugars to bind x \_ -> y. But the latter case isn't "ignoring input" - it's ignoring output. Let me explain this a little more.
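Applied to your code, the do block inside t1 desugars along these lines (a sketch, writing the chaining out with bind as just described; t1Desugared is only an illustrative name):

t1Desugared :: Int
t1Desugared = execState
  (bind (sumArray [1, 2, 3]) \_ ->
    bind (sumArray [4, 5]) \_ ->
      sumArray [6])
  0
-- t1Desugared == 21, just like t1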

It's common to think of a general monadic value, of type m a, as some sort of "computation" that "produces" value(s) of type a. That's necessarily quite an abstract formulation - because a Monad is such a general concept, and some specific Monads fit this mental picture better than others. But it's a good way to understand the basics of monads, and do notation in particular - each line can be thought of as a "statement" in some imperative language, which may have some "side effects" (of a kind strictly constrained by the particular monad you're using) and also produces a value as a "result".

In this sense, a do block of the first type above - where we "bind" the result using the "left arrow" notation - is using that computed value (denoted by a) to decide what to do next. (Incidentally, this is what distinguishes monads from applicatives - if you just have a series of computations and just want to combine their "effects", without allowing "intermediate results" to affect what you're doing, you don't actually need monads or bind.) Whereas the second one doesn't use the result of the first computation (that computation being x) - which is exactly what I meant when I said this is "ignoring output". It's ignoring the result of x. That doesn't (necessarily) mean that x is useless though. It's still being used for its "side effects".

To make it more concrete, I'll look in a bit more detail at both of your examples, starting with the simple one in the Maybe monad (I'll make the change that the compiler suggests in order to keep it happy - note that I'm personally much more familiar with Haskell than with Purescript so I may get Purescript-specific things like this wrong, as Haskell would be perfectly OK with your original code):

t2 :: Maybe Int
t2 = do
  _ <- Just 3
  Just 4

In this case, t2 will simply be equal to Just 4, and it will seem - correctly - that the first line of the do block is redundant. But that's just a consequence of how the Maybe monad works, as well as of the specific value we've got there. I can easily prove to you that the first line does still matter though, by making this change

t2 :: Maybe Int
t2 = do
  _ <- Nothing
  Just 4

Now you will find that t2 is equal not to Just 4, but to Nothing!

That's because each "computation" in the Maybe monad - that is, each value of type Maybe a - either "succeeds" with a "result" of type a (represented by a Just value), or "fails" (represented by Nothing). And, importantly, the way the Maybe monad is defined - that is, the definition of bind - deliberately propagates failure. That is, any Nothing value that is encountered at any point immediately terminates the computation with a Nothing result.
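In code, bind for Maybe behaves roughly like this (a sketch of the behaviour rather than the library's exact source; bindMaybe is just an illustrative name):

-- A Nothing on the left short-circuits the whole computation;
-- a Just passes its value on to the rest of the computation.
bindMaybe :: forall a b. Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just a) k = k a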

So even here, the "side effect" of the first computation - the fact that it succeeds or fails - does make a major difference to what happens overall. We just ignore the "result" (the actual value if the computation was successful).

Let's now move to the State monad - this is a somewhat more complicated monad than Maybe, but it may actually, for that reason, make the above points easier to understand. Because this is a monad where it really does make immediate sense to talk about the "side effects" and the "result" of each monadic value - which perhaps felt a bit forced, or even silly, in the Maybe case.

A value of type State s a represents a computation that results in a value of type a, while "keeping some state" of type s. That is, the computation may use the current state in order to compute its result, and/or it may update the state as part of the computation. Concretely, this is the same as a function of type s -> (a, s) - it takes some state, and returns an updated state (possibly the same) as well as the computed value. And indeed, the State s a type is essentially a simple newtype wrapper for such a function type.

And the implementation of bind in its Monad instance does the most obvious and natural thing - much easier to explain in words than to "see" from the actual implementation details. Two such "stateful functions" are combined by feeding the original state to the first function, then taking the updated state from that and feeding that to the second function. (Actually, bind needs to do - and does - more than this, because as I mentioned earlier, it needs to be able to use the "result" - the a - from the first computation to decide what to do with the second. But we don't need to go into that now, because in this example we don't use the result value - and indeed couldn't, as it's always of the trivial Unit type. It isn't actually complicated, but I won't go into the detail as I don't want to make this answer even longer!)
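As a sketch of that combining step (using a standalone SimpleState type rather than the library's actual StateT-over-Identity definition), it looks something like this:

import Data.Tuple (Tuple(..))

-- A stateful computation is just a function from the incoming state
-- to a result paired with the outgoing state.
newtype SimpleState s a = SimpleState (s -> Tuple a s)

-- Combine two computations: run the first with the incoming state,
-- use its result to pick the second, then feed that the updated state.
bindSimple :: forall s a b. SimpleState s a -> (a -> SimpleState s b) -> SimpleState s b
bindSimple (SimpleState f) k = SimpleState \s ->
  case f s of
    Tuple a s' -> case k a of SimpleState g -> g s'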

So when we do

do
  sumArray [1, 2, 3]
  sumArray [4, 5]
  sumArray [6]

we are building a stateful computation of type State Int Unit - that is, a function of type Int -> (Unit, Int). Since Unit is an uninteresting type, basically used as a placeholder here for "we don't care about any result", we're essentially building a function of type Int -> Int from three other such functions. That's easy enough to do - we can just compose the three functions! And that's what, in this simple case, the implementation of bind for the State monad ends up doing.

Hopefully this answers your main question:

where is the implicit + accumulator?

by showing that there's no "implicit accumulator" other than function composition. It's the fact that those individual functions happen to add (respectively, in this case) 6, 9 and 6 to the input that results in the final result being the sum of those 3 numbers (due to the fact that the composition of two sums is itself a sum, which ultimately comes from the associativity of addition).
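Spelled out with plain functions (f1, f2, f3 are just illustrative names; this is what the sequencing amounts to here, not how the library implements it):

-- Each sumArray call, viewed as a plain transition on the Int state:
f1 :: Int -> Int
f1 s = s + (1 + 2 + 3) -- adds 6

f2 :: Int -> Int
f2 s = s + (4 + 5)     -- adds 9

f3 :: Int -> Int
f3 s = s + 6           -- adds 6

-- Sequencing the three computations amounts to composing these functions:
result :: Int
result = f3 (f2 (f1 0)) -- 21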

But more importantly, I hope this has given you a fuller explanation of Monads, and do notation, which you can apply to many other situations.

2
votes

Each sumArray invocation returns a state monad holding the sum of given Int array values.

No, it doesn't return "a state monad", and it doesn't hold the sum of the array.

It returns a State Int Unit value, which represents a "stateful computation of a Unit" (using an Int for the state). To get the sum, you actually have to run that computation:

t :: State Int Unit
t = sumArray [1, 2, 3]

x = runState t 0 -- Tuple unit 6
y = runState t 5 -- Tuple unit 11

Notice how for the value y, the sum of the array elements is never computed - it adds 1 to 5, then 2 to 6, then 3 to 8.

How exactly are the three separate state monads combined into a single one?

The key to understanding is that they are not state values, but stateful computations. They can be combined by simply sequencing those computations one after the other, passing the result and state on to the next.
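For example, the sequencing in t1 amounts to something like this hand-written version (a sketch of what happens, not the library's implementation; combined is just an illustrative name):

-- Run each computation in turn, feeding the final state of one
-- in as the initial state of the next.
combined :: Int -> Int
combined s0 =
  let s1 = execState (sumArray [1, 2, 3]) s0 -- s0 + 6
      s2 = execState (sumArray [4, 5]) s1    -- s1 + 9
      s3 = execState (sumArray [6]) s2       -- s2 + 6
  in s3
-- combined 0 == 21, matching t1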