95 votes

In Scala, we can use at least two methods to retrofit existing or new types. Suppose we want to express that something can be quantified using an Int. We can define the following trait.

Implicit conversion

trait Quantifiable { def quantify: Int }

And then we can use implicit conversions to quantify e.g. Strings and Lists.

implicit def string2quant(s: String): Quantifiable = new Quantifiable {
  def quantify = s.size
}
implicit def list2quantifiable[A](l: List[A]): Quantifiable = new Quantifiable {
  val quantify = l.size
}

After importing these, we can call the method quantify on strings and lists. Note that the quantifiable list stores its length, so it avoids the expensive traversal of the list on subsequent calls to quantify.
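For example (a sketch, assuming both conversions have been imported into scope):

"hello".quantify        // == 5, via string2quant
List(1, 2, 3).quantify  // == 3, via list2quantifiable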

Type classes

An alternative is to define a "witness" Quantified[A] that states that some type A can be quantified.

trait Quantified[A] { def quantify(a: A): Int }

We then provide instances of this type class for String and List somewhere.

implicit val stringQuantifiable: Quantified[String] = new Quantified[String] {
  def quantify(s: String) = s.size 
}
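An instance for lists can be provided the same way (listQuantifiable is a name chosen here for illustration):

implicit def listQuantifiable[A]: Quantified[List[A]] = new Quantified[List[A]] {
  def quantify(l: List[A]) = l.size
}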

And if we then write a method that needs to quantify its arguments, we write:

def sumQuantities[A](as: List[A])(implicit ev: Quantified[A]) = 
  as.map(ev.quantify).sum

Or using the context bound syntax:

def sumQuantities[A: Quantified](as: List[A]) = 
  as.map(implicitly[Quantified[A]].quantify).sum
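Either version can then be called like this (a sketch, assuming stringQuantifiable is in scope):

sumQuantities(List("a", "bb", "ccc"))  // == 6, uses stringQuantifiable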

But when to use which method?

Now comes the question: how can I decide between these two concepts?

What I have noticed so far:

type classes

  • type classes allow the nice context bound syntax
  • with type classes I don't create a new wrapper object on each use
  • the context bound syntax no longer works if the type class has multiple type parameters; imagine I want to quantify things not only with integers but with values of some general type T. Then I would want to create a type class Quantified[A,T] (see the sketch below)
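A sketch of what I mean (QuantifiedBy is a hypothetical name for the two-parameter variant):

trait QuantifiedBy[A, T] { def quantify(a: A): T }

// there is no context bound syntax like [A: QuantifiedBy[?, T]] in plain Scala 2,
// so the implicit parameter has to be spelled out:
def quantities[A, T](as: List[A])(implicit ev: QuantifiedBy[A, T]): List[T] =
  as.map(ev.quantify)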

implicit conversion

  • since I create a new object on each use, I can cache values there or compute a better representation; but should I avoid this, since the conversion might happen several times, whereas an explicit conversion would probably be invoked only once?

What I expect from an answer

Present one (or more) use case(s) where the difference between both concepts matters and explain why I would prefer one over the other. Also explaining the essence of the two concepts and their relation to each other would be nice, even without example.

There's some confusion in the type class points where you mention "view bound", though type classes use context bounds. — Daniel C. Sobral
+1 excellent question; I'm very interested in a thorough answer to this. — Dan Burton
@Daniel Thank you. I always get those wrong. — ziggystar
You're mistaken in one place: in your second implicit conversion example you store the size of a list in a value and say that it avoids the expensive traversal of the list on subsequent calls to quantify, but on every call to quantify, list2quantifiable gets triggered all over again, reinstantiating the Quantifiable and recalculating the quantify property. What I'm saying is that there is actually no way to cache the results with implicit conversions. — Nikita Volkov
@NikitaVolkov Your observation is right, and I address this in the second-to-last paragraph of my question. The caching works when the converted object is used for longer after a single converting method call (and maybe passed on in its converted form), while a type class instance would probably be passed along with the unconverted object when going deeper. — ziggystar

3 Answers

42 votes

While I don't want to duplicate my material from Scala In Depth, I think it's worth noting that type classes / type traits are infinitely more flexible.

def foo[T: TypeClass](t: T) = ...

has the ability to search its local environment for a default type class. However, I can override the default behavior at any time in one of two ways:

  1. Creating/importing an implicit type class instance in scope to short-circuit the implicit lookup
  2. Directly passing a type class

Here's an example:

def myMethod(): Unit = {
   // overrides default implicit for Int
   implicit object MyIntFoo extends Foo[Int] { ... }
   foo(5)
   foo(6) // These all use my overridden type class
   foo(7)(new Foo[Int] { ... }) // This one needs a different configuration
}
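To make that concrete, here is a minimal runnable sketch (the show method and the instance bodies are assumptions filled in for illustration; the default lives in Foo's companion object so that the local instance can take precedence):

trait Foo[T] { def show(t: T): String }

object Foo {
  // low-priority default: found via Foo's companion object (implicit scope)
  implicit val defaultIntFoo: Foo[Int] = new Foo[Int] {
    def show(t: Int) = s"default: $t"
  }
}

def foo[T: Foo](t: T): String = implicitly[Foo[T]].show(t)

def myMethod(): Unit = {
  // a local implicit beats the companion-object default
  implicit val myIntFoo: Foo[Int] = new Foo[Int] {
    def show(t: Int) = s"mine: $t"
  }
  println(foo(5))  // "mine: 5"
  println(foo(6))  // "mine: 6"
  println(foo(7)(new Foo[Int] { def show(t: Int) = s"custom: $t" }))  // "custom: 7"
}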

This makes type classes infinitely more flexible. Another thing is that type classes / traits support implicit lookup better.

In your first example, if you use the implicit view (say, on a String), the compiler will do an implicit lookup for:

Function1[String, ?]

which will look at Function1's companion object and the String companion object.

Notice that Quantifiable is nowhere in the implicit lookup. This means you have to place the implicit view in a package object or import it into scope, so it's more work to remember what's going on.

On the other hand, a type class is explicit: you see what it's looking for in the method signature. You also have an implicit lookup of

Quantified[String]

which will look in Quantified's companion object and String's companion object. This means you can provide defaults there, and new types (like a MyString class) can provide a default in their companion object that will be searched implicitly.
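For instance (a sketch reusing Quantified and sumQuantities from the question; MyString and its instance are hypothetical):

case class MyString(value: String)

object MyString {
  // found automatically: the compiler searches MyString's companion
  // object when it looks for a Quantified[MyString]
  implicit val quantified: Quantified[MyString] = new Quantified[MyString] {
    def quantify(s: MyString) = s.value.length
  }
}

sumQuantities(List(MyString("ab"), MyString("c")))  // == 3, no import needed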

In general, I use type classes. They are infinitely more flexible for the initial example. The only place I use implicit conversions is when using an API layer between a Scala wrapper and a Java library, and even this can be 'dangerous' if you're not careful.

20 votes

One criterion that can come into play is what you want the new feature to "feel" like; using implicit conversions, you can make it look like just another method:

"my string".newFeature

...while with type classes it will always look as if you are calling an external function:

newFeature("my string")

One thing that you can achieve with type classes and not with implicit conversions is adding properties to a type, rather than to an instance of a type. You can then access these properties even when you do not have an instance of the type available. A canonical example would be:

trait Default[T] { def value : T }

implicit object DefaultInt extends Default[Int] {
  def value = 42
}

implicit def listsHaveDefault[T : Default]: Default[List[T]] = new Default[List[T]] {
  def value = implicitly[Default[T]].value :: Nil
}

def default[T : Default] = implicitly[Default[T]].value

scala> default[List[List[Int]]]
resN: List[List[Int]] = List(List(42))

This example also shows how tightly the two concepts are related: type classes would not be nearly as useful if there were no mechanism to produce infinitely many of their instances; without the implicit method (not a conversion, admittedly), only finitely many types could have the Default property.

13 votes

You can think of the difference between the two techniques by analogy to function application, just with a named wrapper. For example:

trait Foo1[A] { def foo(a: A): Int }  // analogous to A => Int
trait Foo0    { def foo: Int }        // analogous to Int

An instance of the former encapsulates a function of type A => Int, whereas an instance of the latter has already been applied to an A. You could continue the pattern...

trait Foo2[A, B] { def foo(a: A, b: B): Int } // sort of like A => B => Int

thus you could think of Foo1[B] sort of like the partial application of Foo2[A, B] to some A instance. A great example of this was written up by Miles Sabin as "Functional Dependencies in Scala".
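The analogy can be made concrete with a sketch: fixing the A of a Foo2[A, B] yields something with exactly the shape of a Foo1[B].

// "partially apply" a Foo2 instance to a fixed A value
def partially[A, B](a: A)(implicit ev: Foo2[A, B]): Foo1[B] =
  new Foo1[B] { def foo(b: B) = ev.foo(a, b) }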

So really my point is that, in principle:

  • "pimping" a class (through implicit conversion) is the "zero'th order" case...
  • declaring a typeclass is the "first order" case...
  • multi-parameter typeclasses with fundeps (or something like fundeps) is the general case.