
I have already imported spark.implicits._, but I still get this error:

Error:(27, 33) Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.

I have a case class like:

case class User(name: String, dept: String)

and I am converting a DataFrame to a Dataset using:

val ds = df.map { row => User(row.getString(0), row.getString(1)) }

or

val ds = df.as[User]

Also, when I try the same code in spark-shell I get no error; it only appears when I run the code through IntelliJ or submit the job.

Any idea why?


1 Answer


Moving the declaration of the case class outside the enclosing object did the trick! A case class defined inside another class or object is an inner type, and Spark typically cannot derive an implicit Encoder for such types at compile time; spark-shell avoids the problem because it handles the wrapper objects it generates specially.

The code structure will then look like this:

package main.scala.UserAnalytics

// case class *outside* the main object
case class User(name: String, dept: String)

object UserAnalytics extends App {
    ...
    val ds = df.map { row => User(row.getString(0), row.getString(1)) }
}
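
For completeness, a fuller sketch of the working layout might look like the following. The session setup, input path, and column layout here are hypothetical, just to make the example self-contained:

    package main.scala.UserAnalytics

    import org.apache.spark.sql.SparkSession

    // Top-level case class: spark.implicits._ can derive an Encoder for it
    case class User(name: String, dept: String)

    object UserAnalytics extends App {
      // Local session for illustration; in a real job the master
      // is usually supplied by spark-submit
      val spark = SparkSession.builder()
        .appName("UserAnalytics")
        .master("local[*]")
        .getOrCreate()

      // Must be imported after the session is created
      import spark.implicits._

      val df = spark.read.option("header", "true").csv("users.csv") // hypothetical input
      val ds = df.map { row => User(row.getString(0), row.getString(1)) }
      // or, if the column names match the case class fields:
      // val ds = df.as[User]

      ds.show()
      spark.stop()
    }

Either conversion compiles once User is visible at the top level, because the implicit Encoder[User] that df.map and df.as[User] require can then be materialized by spark.implicits._.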