0 votes

Spark: 2.4

The dataframe contains the average login hour for each employee:

AverageLoginHour|employee
3.392265193     |emp_1
2.833333333     |emp_2
5.638888889     |emp_3
6.909090909     |emp_4
7.361445783     |emp_5

Code:

tds.select("Employee","AverageLoginHour")
    (count("AverageLoginHour").alias("logincnt"))
    (sum("AverageLoginHour").alias("loginsum"))
      .withColumn("TotalEmployeeavg",col("loginsum")/col("logincnt")*100)

Error: Cannot resolve symbol .withcolumn

Expected Output:

AverageLoginHour|employee|Totalavg|Remarks
3.392265193     |Emp_1   |5.2     |Below Avg
2.833333333     |Emp_2   |5.2     |Below Avg
5.638888889     |Emp_3   |5.2     |Above Avg
6.909090909     |Emp_4   |5.2     |Above Avg
7.361445783     |Emp_5   |5.2     |Above Avg

If an employee's AverageLoginHour is less than Totalavg, I want to use .withColumn to add a Remarks column with "Below Avg", else "Above Avg".

Please share your suggestions.


1 Answer

1 vote

Use the built-in avg function with a window clause for this case.

Example:

df.show()
//+----------------+--------+
//|AverageLoginHour|employee|
//+----------------+--------+
//|     3.392265193|   emp_1|
//|     2.833333333|   emp_2|
//|     5.638888889|   emp_3|
//|     6.909090909|   emp_4|
//|     7.361445783|   emp_5|
//+----------------+--------+
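For reference, the sample DataFrame above can be built like this. This is a sketch assuming a spark-shell session where `spark` is the SparkSession; the `functions._` import also brings in the `avg`, `col`, `when`, `lit`, and `round` used below.

```scala
import org.apache.spark.sql.functions._
import spark.implicits._

// Build the sample data shown in the question as a two-column DataFrame.
val df = Seq(
  (3.392265193, "emp_1"),
  (2.833333333, "emp_2"),
  (5.638888889, "emp_3"),
  (6.909090909, "emp_4"),
  (7.361445783, "emp_5")
).toDF("AverageLoginHour", "employee")
```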


df.withColumn("Totalavg",avg(col("AverageLoginHour")).over()).
withColumn("Remarks",when(col("Totalavg") > col("AverageLoginHour"),lit("Below Avg")).otherwise(lit("Above Avg"))).
show()

//+----------------+--------+------------+---------+
//|AverageLoginHour|employee|    Totalavg|  Remarks|
//+----------------+--------+------------+---------+
//|     3.392265193|   emp_1|5.2270048214|Below Avg|
//|     2.833333333|   emp_2|5.2270048214|Below Avg|
//|     5.638888889|   emp_3|5.2270048214|Above Avg|
//|     6.909090909|   emp_4|5.2270048214|Above Avg|
//|     7.361445783|   emp_5|5.2270048214|Above Avg|
//+----------------+--------+------------+---------+

//rounding to 1
df.withColumn("Totalavg",round(avg(col("AverageLoginHour")).over(),1)).withColumn("Remarks",when(col("Totalavg") > col("AverageLoginHour"),lit("Below Avg")).otherwise(lit("Above Avg"))).show()
//+----------------+--------+--------+---------+
//|AverageLoginHour|employee|Totalavg|  Remarks|
//+----------------+--------+--------+---------+
//|     3.392265193|   emp_1|     5.2|Below Avg|
//|     2.833333333|   emp_2|     5.2|Below Avg|
//|     5.638888889|   emp_3|     5.2|Above Avg|
//|     6.909090909|   emp_4|     5.2|Above Avg|
//|     7.361445783|   emp_5|     5.2|Above Avg|
//+----------------+--------+--------+---------+
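The empty over() above is shorthand for a window covering the whole DataFrame. If you prefer an explicit window spec, a sketch of the equivalent:

```scala
import org.apache.spark.sql.expressions.Window

// partitionBy() with no columns puts every row in one window,
// i.e. a global average. Like the bare over(), this pulls all
// rows into a single partition, so use it only on small data.
val w = Window.partitionBy()

df.withColumn("Totalavg", round(avg(col("AverageLoginHour")).over(w), 1)).
  withColumn("Remarks", when(col("Totalavg") > col("AverageLoginHour"), lit("Below Avg")).otherwise(lit("Above Avg"))).
  show()
```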

Another way, without using a window function, is to leverage crossJoin.

Example:

val df1=df.selectExpr("avg(AverageLoginHour) as Totalavg")
df.crossJoin(df1).
withColumn("Remarks",when(col("Totalavg") > col("AverageLoginHour"),lit("Below Avg")).otherwise(lit("Above Avg"))).
show()
//+----------------+--------+------------+---------+
//|AverageLoginHour|employee|    Totalavg|  Remarks|
//+----------------+--------+------------+---------+
//|     3.392265193|   emp_1|5.2270048214|Below Avg|
//|     2.833333333|   emp_2|5.2270048214|Below Avg|
//|     5.638888889|   emp_3|5.2270048214|Above Avg|
//|     6.909090909|   emp_4|5.2270048214|Above Avg|
//|     7.361445783|   emp_5|5.2270048214|Above Avg|
//+----------------+--------+------------+---------+
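The same crossJoin logic can also be expressed in Spark SQL; a sketch, assuming the DataFrame has been registered as a temp view:

```scala
// Register the DataFrame so it can be queried by name.
df.createOrReplaceTempView("logins")

// Cross join each row with the single-row global average,
// then classify with a CASE expression.
spark.sql("""
  SELECT l.AverageLoginHour, l.employee, t.Totalavg,
         CASE WHEN t.Totalavg > l.AverageLoginHour THEN 'Below Avg'
              ELSE 'Above Avg' END AS Remarks
  FROM logins l
  CROSS JOIN (SELECT avg(AverageLoginHour) AS Totalavg FROM logins) t
""").show()
```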