I have come across an issue with Spark SQL aggregation. I have a DataFrame into which I'm loading records from Apache Phoenix:
import org.apache.phoenix.spark._  // provides the phoenixTableAsDataFrame implicit

val df = sqlContext.phoenixTableAsDataFrame(
  Metadata.tables(A.Test), Seq("ID", "date", "col1", "col2", "col3"),
  predicate = Some("\"date\" = " + date), zkUrl = Some(zkURL))
Into another DataFrame I need to aggregate by ID and date and then sum col1, col2, and col3, i.e.:
import org.apache.spark.sql.functions.sum

val df1 = df.groupBy($"ID", $"date").agg(
  sum($"col1" + $"col2" + $"col3").alias("col4"))
But I'm getting an incorrect result when doing the sum. How can we sum all the columns (col1, col2, col3) and assign the result to col4?
Example:
Suppose the data looks like this:
ID,date,col1,col2,col3
1,2017-01-01,5,10,12
2,2017-01-01,6,9,17
3,2017-01-01,2,3,7
4,2017-01-01,5,11,13
Expected output:
ID,date,col4
1,2017-01-01,27
2,2017-01-01,32
3,2017-01-01,12
4,2017-01-01,29
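For reference, here is a minimal, self-contained sketch of the aggregation I'm trying to perform, using a local Spark 2.x SparkSession and the sample rows above instead of the Phoenix-backed DataFrame (the object name SumColsExample is just for illustration):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

object SumColsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("sum-cols-example")
      .getOrCreate()
    import spark.implicits._

    // The sample data from above
    val df = Seq(
      (1, "2017-01-01", 5, 10, 12),
      (2, "2017-01-01", 6, 9, 17),
      (3, "2017-01-01", 2, 3, 7),
      (4, "2017-01-01", 5, 11, 13)
    ).toDF("ID", "date", "col1", "col2", "col3")

    // Group by ID and date, summing the per-row total of col1 + col2 + col3
    val df1 = df.groupBy($"ID", $"date")
      .agg(sum($"col1" + $"col2" + $"col3").alias("col4"))

    df1.orderBy($"ID").show()

    spark.stop()
  }
}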