I saw a DataFrames tutorial at https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html, which is written in Python, and I am trying to translate it into Scala.
They have the following code:
df = context.load("/path/to/people.json")
# RDD-style methods such as map, flatMap are available on DataFrames
# Split the bio text into multiple words.
words = df.select("bio").flatMap(lambda row: row.bio.split(" "))
# Create a new DataFrame to count the number of words
words_df = words.map(lambda w: Row(word=w, cnt=1)).toDF()
word_counts = words_df.groupBy("word").sum()
So, I first read the data from a CSV file into a DataFrame df, and then I have:
val title_words = df.select("title").flatMap { row =>
  row.getAs[String]("title").split(" ")
}
val title_words_df = title_words.map(w => Row(w, 1)).toDF()
val word_counts = title_words_df.groupBy("word").sum()
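For completeness, here is how df is created in my code. The SparkContext/SQLContext setup, the spark-csv reader, the path, and the header option are specific to my environment, so treat them as placeholders:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}  // Row is used in the word-count attempt above

val conf = new SparkConf().setAppName("TitleWordCount")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

// Read the CSV into a DataFrame using the spark-csv package
val df = sqlContext.load(
  "com.databricks.spark.csv",
  Map("path" -> "/path/to/titles.csv", "header" -> "true"))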
but I don't know:
1. how to assign field names to the rows in the line beginning with val title_words_df = ... (the Python version does this with Row(word=w, cnt=1));
2. how to fix the compile error "value toDF is not a member of org.apache.spark.rdd.RDD[org.apache.spark.sql.Row]" that I get on that same line (my best guess is sketched below).
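From reading the Spark docs, my best guess is that I need to map to a tuple (or a case class) instead of a Row and import sqlContext.implicits._ so that toDF with column names becomes available, roughly like the sketch below, but I am not sure whether this is the right Scala equivalent of the Python Row(word=w, cnt=1):

import sqlContext.implicits._  // brings toDF into scope for RDDs of tuples and case classes

// Use a tuple instead of a Row and name the columns explicitly
val title_words_df = title_words.map(w => (w, 1)).toDF("word", "cnt")
val word_counts = title_words_df.groupBy("word").sum("cnt")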
Thanks in advance for the help.