5
votes

I have started learning Haskell and have read that every function in Haskell takes only one argument. I can't understand what magic happens under the hood to make this possible, and I wonder whether it is efficient.

Example


>:t (+)
(+) :: Num a => a -> a -> a

The signature above means that (+) takes one Num and returns another function, which takes one more Num and returns a Num.
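To make this concrete, here is a small illustration of partial application (the name `increment` is just for this example, not anything built in):

```haskell
-- Applying (+) to a single argument yields a new function
-- that is still waiting for its second argument.
increment :: Int -> Int
increment = (+) 1

-- increment 41 evaluates to 42
```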


Example 1 is relatively easy, but I started wondering what happens when functions are a little more complex.

My Questions


For the sake of the example I have written a custom zipWith function and executed it in two ways: once passing one argument at a time, and once passing all arguments at once.

zipwithCustom :: (a -> b -> c) -> [a] -> [b] -> [c]
zipwithCustom f (x:xs) (y:ys) = f x y : zipwithCustom f xs ys
zipwithCustom _ _ _ = []
zipWithAdd = zipwithCustom (+)
zipWithAddTo123 = zipWithAdd [1,2,3]

test1 = zipWithAddTo123 [1,1,1]
test2 = zipwithCustom (+) [1,2,3] [1,1,1]

>test1
[2,3,4]
>test2
[2,3,4]
  1. Is passing one argument at a time (scenario_1) as efficient as passing all arguments at once (scenario_2)?
  2. Are those scenarios any different in terms of what Haskell is actually doing to compute test1 and test2 (apart from the fact that scenario_1 probably takes more memory, as it needs to store zipWithAdd and zipWithAddTo123)?
  3. Is the following correct, and why? In scenario_1 I iterate over [1,2,3] first and then over [1,1,1].
  4. Is the following correct, and why? In both scenario_1 and scenario_2 I iterate over both lists at the same time.

I realise that I have asked a lot of questions in one post, but I believe they are connected, and that the answers will help me (and other people who are new to Haskell) better understand what is actually happening in Haskell that makes both scenarios possible.

1
In both scenarios you iterate over both lists at the same time. – Willem Van Onsem
In test2 you also called it with one parameter each time, since zipWithCustom (+) [1,2,3] [1,1,1] is short for ((zipWithCustom (+)) [1,2,3]) [1,1,1]. – Willem Van Onsem
@hdw3: it does not run the function. Haskell is lazy: under the hood it only performs work when necessary, and by the time you enumerate over the list, you have passed f and both lists. – Willem Van Onsem
@hdw3: but more importantly, the idea is that if you call zipWithCustom (+) you simply construct a new function that will take one list, and then one that will take the other list. So zipWithCustom (+) basically constructs a new function, not a list. – Willem Van Onsem

1 Answer

10
votes

You ask about "Haskell", but Haskell the language specification doesn't care about these details. It is up to implementations to choose how evaluation happens -- the only thing the spec says is what the result of the evaluation should be, and carefully avoids giving an algorithm that must be used for computing that result. So in this answer I will talk about GHC, which, practically speaking, is the only extant implementation.

For (3) and (4) the answer is simple: the iteration pattern is exactly the same whether you apply zipWithCustom to arguments one at a time or all at once. (And that iteration pattern is to iterate over both lists at once.)
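One way to convince yourself of this (a sketch reusing the question's zipwithCustom; the names partial and allAtOnce are just for illustration): because the iteration is in lockstep, both application styles terminate even when one list is infinite. If either style first traversed a whole list before consulting the other, the infinite list below would make it hang.

```haskell
zipwithCustom :: (a -> b -> c) -> [a] -> [b] -> [c]
zipwithCustom f (x:xs) (y:ys) = f x y : zipwithCustom f xs ys
zipwithCustom _ _ _ = []

-- One argument at a time; the second list is infinite.
partial :: [Integer]
partial = (zipwithCustom (+) [1,2,3]) [1 ..]

-- All arguments at once; same infinite second list.
allAtOnce :: [Integer]
allAtOnce = zipwithCustom (+) [1,2,3] [1 ..]

-- Both evaluate to [2,4,6]: iteration stops when the shorter list ends.
```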

Unfortunately, the answer for (1) and (2) is complicated.

The starting point is the following simple algorithm:

  1. When you apply a function to an argument, a closure is created (allocated and initialized). A closure is a data structure in memory, containing a pointer to the function and a pointer to the argument. When the function body is executed, any time its argument is mentioned, the value of that argument is looked up in the closure.
  2. That's it.
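This one-argument-at-a-time model mirrors how multi-argument syntax desugars: a definition with several arguments can be read as nested one-argument lambdas. A sketch (addThree is a made-up name):

```haskell
-- These two definitions are equivalent: multi-argument syntax is
-- sugar for a chain of one-argument lambdas, each capturing the
-- arguments received so far.
addThree :: Int -> Int -> Int -> Int
addThree x y z = x + y + z

addThree' :: Int -> Int -> Int -> Int
addThree' = \x -> \y -> \z -> x + y + z
```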

However, this algorithm kind of sucks. It means that if you have a 7-argument function, you allocate 7 data structures, and when you use an argument, you may have to follow a 7-long chain of pointers to find it. Gross. So GHC does something slightly smarter. It uses the syntax of your program in a special way: if you apply a function to multiple arguments, it generates just one closure for that application, with as many fields as there are arguments.

(Well... that might be not quite true. Actually, it tracks the arity of every function -- defined again in a syntactic way as the number of arguments used to the left of the = sign when that function was defined. If you apply a function to more arguments than its arity, you might get multiple closures or something, I'm not sure.)

So that's pretty nice, and from that you might think that your test1 would then allocate one extra closure compared to test2. And you'd be right... when the optimizer isn't on.

But GHC also does lots of optimization stuff, and one of those is to notice "small" definitions and inline them. Almost certainly with optimizations turned on, your zipWithAdd and zipWithAddTo123 would both be inlined anywhere they were used, and we'd be back to the situation where just one closure gets allocated.
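If you want to nudge GHC rather than rely on its size heuristics, you can mark small definitions with an INLINE pragma. A sketch using the question's names (with the standard zipWith standing in for zipwithCustom):

```haskell
-- Asking GHC to inline these small partial applications at use sites,
-- so the call sites end up applying all arguments at once.
zipWithAdd :: Num a => [a] -> [a] -> [a]
zipWithAdd = zipWith (+)
{-# INLINE zipWithAdd #-}

zipWithAddTo123 :: [Integer] -> [Integer]
zipWithAddTo123 = zipWithAdd [1,2,3]
{-# INLINE zipWithAddTo123 #-}
```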

Hopefully this explanation gets you to where you can answer questions (1) and (2) yourself, but just in case it doesn't, here are explicit answers:

  1. Is passing one argument at a time as efficient as passing all arguments at once?

Maybe. It's possible that passing arguments one at a time will be converted via inlining to passing all arguments at once, and then of course they will be identical. In the absence of this optimization, passing one argument at a time has a (very slight) performance penalty compared to passing all arguments at once.

  2. Are those scenarios any different in terms of what Haskell is actually doing to compute test1 and test2?

test1 and test2 will almost certainly be compiled to the same code -- possibly even to the point that only one of them is compiled and the other is an alias for it.

If you want to read more about the ideas in the implementation, the Spineless Tagless G-machine paper is much more approachable than its title suggests, and only a little bit out of date.