7
votes

Haskell pattern matching is often head-strict: for example, f (x:xs) = ... requires the input list to be evaluated to the form (thunk : thunk). But sometimes such evaluation is not needed, and a function can afford to be non-strict in some arguments; for example, f (x:xs) = 3.

Ideally, in such situations we could avoid evaluating the argument altogether, getting the behaviour of const 3. This can be done with an irrefutable pattern: f ~(x:xs) = 3, which gives us a performance benefit and greater error tolerance.
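To make the difference concrete, here is a minimal sketch (f and g follow the definitions from the question; the test harness around them is mine):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Head-strict: matching (x:xs) forces the list to WHNF.
f :: [a] -> Int
f (x:xs) = 3

-- Irrefutable pattern: the argument is never evaluated.
g :: [a] -> Int
g ~(x:xs) = 3

main :: IO ()
main = do
  print (g (undefined :: [Int]))  -- prints 3; the pattern is never forced
  r <- try (evaluate (f (undefined :: [Int]))) :: IO (Either SomeException Int)
  putStrLn (either (const "f forced its argument") show r)
```

Running this prints 3 for g, while f throws when its argument is forced to weak head normal form.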

My question is: does GHC already implement such transformations via some kind of strictness analysis? I would also appreciate pointers to some reading on it.

1
I don't know for certain, but I'm pretty sure it does not try to do that. Irrefutable patterns exist for a reason: the intended difference in semantics between the two definitions you showed, just as you've described it. AFAIK, strictness analysis seeks to reduce laziness and perform unavoidable computations earlier, when it can be proved this won't change the semantics. But these are just hunches and impressions that I have. Perhaps a close reading of the Report will provide more definitive answers. – Will Ness
If we take this reasoning to its extreme, we should also "optimize" seq x y to y, which would defeat the purpose of seq. A strict pattern, in principle, can be seen as a special case of seq. – chi
The ~ changes the semantics of the function, so GHC doesn't do that. – augustss
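chi's point can be made concrete with a short sketch (strictConst is a hypothetical name, not a library function): "optimizing away" a seq would collapse the intended distinction between the two behaviours below.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Hypothetical helper: like flip const, but strict in its first
-- argument via seq. A strict pattern behaves analogously.
strictConst :: a -> b -> b
strictConst x y = x `seq` y

main :: IO ()
main = do
  print (const 3 (undefined :: [Int]) :: Int)  -- const never touches its second argument
  r <- try (evaluate (strictConst (undefined :: [Int]) 3)) :: IO (Either SomeException Int)
  putStrLn (either (const "seq forced the undefined argument") show r)
```

Rewriting strictConst x y to y would silently discard the forcing of x, which is exactly what the programmer asked for by writing seq.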

1 Answer

5
votes

As far as I know, GHC will never make something more lazy than specified by the programmer, even if it can prove that doesn't change the semantics of the term. I don't think there is any fundamental reason to avoid changing the laziness of a term when we can prove the semantics don't change; I suspect it's more of an empirical observation that we don't know of any situations where that would be a really great idea. (And if a transformation would change the semantics, I would consider it a bug for GHC to make that change.)

There is only one possible exception that comes to mind, the so-called "full laziness" transformation, described well on the wiki. In short, GHC will translate

\a b -> let c = {- something that doesn't mention b -} in d

to

\a -> let c = {- same thing as before -} in \b -> d

to avoid recomputing c each time the function is applied to a new b. But it seems to me that this transformation is more about memoization than about laziness: the two terms above appear to me to have the same (denotational) semantics wrt laziness/strictness, and are only operationally different.
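A small sketch of what full laziness buys (expensive, fBefore, and fAfter are illustrative names, not GHC output; at -O GHC may well perform this floating itself):

```haskell
-- An illustrative computation that depends only on a.
expensive :: Int -> Int
expensive a = sum [1 .. a * 10]

-- Before the transformation: c is rebuilt on every call,
-- i.e. for every application to a new b.
fBefore :: Int -> Int -> Int
fBefore a b = let c = expensive a in c + b

-- After full laziness: floating the let past \b means c is
-- shared across all applications of the partial application.
fAfter :: Int -> Int -> Int
fAfter a = let c = expensive a in \b -> c + b

main :: IO ()
main = do
  let g = fAfter 1   -- expensive 1 is computed at most once for g
  print (g 0 + g 1)  -- 55 + 56 = 111
```

Both versions denote the same function, which is the answer's point: the transformation changes sharing (an operational property), not strictness.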