4
votes

I am using the "Learn You a Haskell" tutorial and have reached the section on type declarations. I understand that they change the error messages GHCi gives you, but do they also affect how the function actually works? If not, is a type declaration essentially like a Python docstring written with """ """ underneath "def someFunction(x):" (just an example)?

Example code:

removeNonUppercase :: [Char] -> [Char]  
removeNonUppercase st = [ c | c <- st, c `elem` ['A'..'Z']]

EDIT: I ask this because the tutorial explains that Haskell infers types at compile time.

3
Not exactly. I believe Haskell will check your type declarations against your code. Python is dynamically typed and does not do this. – Kevin
I see. So even though Haskell determines types at compile time (without having to state the type as in Java/C++), explicitly writing a type declaration reduces the chance of compile errors? I'm quite new to Haskell. – Byte
It's my understanding that Haskell code can get quite abstruse, and sometimes the compiler will not be able to infer a function's type. Even when it can, the signature is helpful to human readers, who may be less adept at the kind of abstract reasoning required for type inference. In that case it is a form of documentation, but one whose correctness the compiler enforces, unlike (say) comments, which can and do lie. – Kevin
Thanks. That cleared some of it up. Will it change how the function operates as well? Hopefully someone will give an answer to that with some example code. :) EDIT: Never mind, I just re-read the tutorial, and it seems the declaration doesn't actually affect the function in the sense I had worried it might. Thank you for your explanation. @Kevin – Byte
@Kevin It doesn't have to get abstruse. Consider the data type data NestedList a = Epsilon | Nested a (NestedList [a]), which represents nested lists. The length function is easily definable: length Epsilon = 0; length (Nested _ xs) = 1 + length xs. However, GHC is unable to infer the correct type (it will yield an error). The problem is that the function involves polymorphic recursion, i.e. we use length at the type NestedList [a] while defining it at the type NestedList a, so a and [a] don't match. Providing the type signature makes the code compile. – Bakuriu
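Bakuriu's example can be made runnable as a minimal sketch (the function is renamed to nestedLength here to avoid shadowing Prelude's length):

```haskell
data NestedList a = Epsilon | Nested a (NestedList [a])

-- The signature below is required: the recursive call uses nestedLength
-- at type NestedList [a] while we are defining it at NestedList a
-- (polymorphic recursion), which GHC's inference cannot handle on its own.
nestedLength :: NestedList a -> Int
nestedLength Epsilon       = 0
nestedLength (Nested _ xs) = 1 + nestedLength xs

main :: IO ()
main = print (nestedLength (Nested 1 (Nested [2, 3] Epsilon)))  -- prints 2
```

Deleting the signature makes GHC reject the definition (the recursion forces a to unify with [a], which fails the occurs check), exactly as the comment describes.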

3 Answers

8
votes

Signatures aren't just for documentation (even though they are very useful for that as well). They are enforced by the compiler, which means that by adding signatures you can make the types of your functions more restrictive than they would be otherwise. Toy example:

add x y = x + y

addInt :: Int -> Int -> Int
addInt x y = x + y
*Main> :t add
add :: Num a => a -> a -> a
*Main> add 2 3
5
*Main> add 2.1 3.1
5.2
*Main> :t addInt
addInt :: Int -> Int -> Int
*Main> addInt 2 3
5
*Main> addInt 2.1 3.1 -- addInt will not accept non-Ints.

<interactive>:23:8:
    No instance for (Fractional Int) arising from the literal ‘2.1’
    In the first argument of ‘addInt’, namely ‘2.1’
    In the expression: addInt 2.1 3.1
    In an equation for ‘it’: it = addInt 2.1 3.1

Besides that, adding type signatures means you will get better (i.e. easier to understand) errors in tricky situations, as the compiler will know what you want to achieve rather than having to guess everything on its own.

There are also situations in which the compiler can't decide the types without the help of some signatures or other type annotations. Perhaps the simplest example is:

readAndShow s = show (read s)

If you try to use that without specifying any types...

Foo.hs:6:17:
    No instance for (Show a0) arising from a use of ‘show’
    The type variable ‘a0’ is ambiguous
    Note: there are several potential instances:
      instance (GHC.Arr.Ix a, Show a, Show b) => Show (GHC.Arr.Array a b)
        -- Defined in ‘GHC.Arr’
      instance Show a => Show (Maybe a) -- Defined in ‘GHC.Show’
      instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
        -- Defined in ‘GHC.Real’
      ...plus 26 others
    In the expression: show (read s)
    In an equation for ‘readAndShow’: readAndShow s = show (read s)

Foo.hs:6:23:
    No instance for (Read a0) arising from a use of ‘read’
    The type variable ‘a0’ is ambiguous
    Note: there are several potential instances:
      instance (GHC.Arr.Ix a, Read a, Read b) => Read (GHC.Arr.Array a b)
        -- Defined in ‘GHC.Read’
      instance Read a => Read (Maybe a) -- Defined in ‘GHC.Read’
      instance (Integral a, Read a) => Read (GHC.Real.Ratio a)
        -- Defined in ‘GHC.Read’
      ...plus 25 others
    In the first argument of ‘show’, namely ‘(read s)’
    In the expression: show (read s)
    In an equation for ‘readAndShow’: readAndShow s = show (read s)
Failed, modules loaded: none.

... it won't work. read converts a String to some type, and show does the opposite. However, if nothing specifies the type of read s, the compiler can't tell which type you want to read the String as. So you either need to specify the intermediate type...

readAndShowAsInt s = show (read s :: Int)
*Main> readAndShowAsInt "2"
"2"

... Or have something else pick the type for you:

readAndAdd :: String -> Int -> Int
readAndAdd s y = read s + y
*Main> readAndAdd "2" 3
5
4
votes

In simple cases, the declared type is the same one Haskell would infer, so it serves only as documentation and as a guard against confusing errors. However, there are exceptions:

(1) You can give a more restricted type declaration than the inferred one. E.g.

squareInt :: Int -> Int
squareInt x = x*x

Without the declaration, that would be inferred as Num t => t -> t instead. But with it, that function can only be used for Ints, not for any numerical type. This is sometimes useful for limiting functions to an intended use case, or for preventing type ambiguity errors.
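To see the restriction in action, here is a minimal sketch contrasting an unannotated square (added for comparison) with the restricted squareInt:

```haskell
-- Without a signature, GHC infers the most general type:
-- square :: Num a => a -> a
square x = x * x

-- The explicit signature restricts this copy to Int only.
squareInt :: Int -> Int
squareInt x = x * x

main :: IO ()
main = do
  print (square (2.5 :: Double))  -- works at Double: 6.25
  print (squareInt 5)             -- 25
  -- print (squareInt 2.5)        -- rejected: no Fractional Int instance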

(2) Many language extensions require a type declaration because they go beyond the part of the type system that the inference algorithm supports. For example, rank n types:

f :: (forall x. Show x => x -> y) -> (Int, Bool) -> (y, y)
f g (i, b) = (g i, g b)

This also includes the standard Haskell feature of polymorphic recursion, i.e. calling a function recursively at a different type than the one it was originally called with.
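A runnable sketch of the rank-2 example above; note that the RankNTypes extension is required, and the signature on f cannot be omitted because inference never guesses a polymorphic argument type:

```haskell
{-# LANGUAGE RankNTypes #-}

-- g must itself be polymorphic, since it is applied to both an Int
-- and a Bool inside the body of f.
f :: (forall x. Show x => x -> y) -> (Int, Bool) -> (y, y)
f g (i, b) = (g i, g b)

main :: IO ()
main = print (f show (42, True))  -- ("42","True")
```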

2
votes

Type declarations are tools for you as the programmer to sanity-check your code. The type Haskell infers will always be correct for the code you wrote - but it might not be the type you expected when you wrote that code. You may have expected a function of type Int -> Int -> Int, but instead your code may tell the inference algorithm that the function has type (Int, Int) -> Int because you misplaced some parentheses.
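As a hypothetical illustration of that mix-up, a stray pair of parentheses turns two curried arguments into a single tuple argument; an explicit signature would catch the mistake at the definition site rather than at some distant call site:

```haskell
-- The mistake: the parentheses make this take one tuple argument,
-- so GHC infers addPair :: Num a => (a, a) -> a, not a curried type.
addPair (x, y) = x + y

-- The intended curried version. If the tuple-taking body were written
-- under this signature, GHC would reject the definition immediately.
add :: Int -> Int -> Int
add x y = x + y

main :: IO ()
main = do
  print (addPair (2, 3 :: Int))  -- 5
  print (add 2 3)                -- 5
```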

They also act as a form of documentation for the programmer. Sometimes it's very helpful to see a function's type while writing your code. Often (if functions are well named), you can figure out what a function is supposed to do just from looking at its name and its type. Documentation is still very useful, but as you start writing and reading more functional programs, types become something you use to help you understand code.