I have the following case class:
case class Person(name: String, lastname: Option[String] = None, age: BigInt)
And the following json:
{ "name": "bemjamin", "age" : 1 }
When I try to convert the DataFrame into a Dataset:
import spark.implicits._  // needed for the Person encoder

spark.read.json("example.json")
  .as[Person]
  .show()
It shows me the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`lastname`' given input columns: [age, name];
My question is: if my schema comes from my case class, and the case class declares lastname as optional, shouldn't as() handle the conversion?
I can easily fix this with a .map, but I would like to know whether there is a cleaner alternative.
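For reference, this is roughly what my .map workaround looks like (just a sketch; it assumes spark is an active SparkSession, and the Try guard is only one way to handle the missing column):

import scala.util.Try
import spark.implicits._  // provides the Person encoder for .map

val people = spark.read.json("example.json")
  .map { row =>
    Person(
      name = row.getAs[String]("name"),
      // getAs throws when the column is absent, so guard with Try and fall back to None
      lastname = Try(Option(row.getAs[String]("lastname"))).toOption.flatten,
      age = BigInt(row.getAs[Long]("age"))
    )
  }
people.show()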