I recently started experimenting with Spark in Java. I began with the famous WordCount example using RDDs and everything worked as expected. Now I am trying to implement my own example, but using DataFrames rather than RDDs.
So I am reading a dataset from a file with:
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.option("inferSchema", "true")
.option("delimiter", ";")
.option("header", "true")
.load(inputFilePath);
Then I try to select a specific column and apply a simple transformation to every row, like this:
df = df.select("start")
.map(text -> text + "asd");
But compilation fails on the second line with an error I don't fully understand (the start column is inferred as type string):
Multiple non-overriding abstract methods found in interface scala.Function1
Why is my lambda treated as a Scala function, and what does the error message actually mean?
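To check my understanding of Java lambdas in general, I wrote this minimal plain-Java example (no Spark involved; `OneMethod` is just a made-up interface for illustration). It shows that a lambda compiles fine when the target interface has exactly one abstract method, which seems to be what the "multiple non-overriding abstract methods" message is about:

```java
import java.util.function.Function;

public class SamDemo {
    // Made-up interface with a single abstract method: a lambda can target it.
    interface OneMethod {
        String apply(String s);
    }

    public static void main(String[] args) {
        OneMethod f = text -> text + "asd";            // compiles: one abstract method
        Function<String, String> g = s -> s + "asd";   // standard functional interface
        System.out.println(f.apply("start"));
        System.out.println(g.apply("start"));
    }
}
```

Both lambdas compile and print "startasd", so I suspect the problem is specifically with how `scala.Function1` looks to the Java compiler, not with my lambda syntax.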