
Using Scala Spark, I read a table from Postgres and formed a DataFrame, locationDF, which contains location data in the format below.

val opts = Map("url" -> "databaseurl", "dbtable" -> "locations")
val locationDF = spark.read.format("jdbc").options(opts).load()
locationDF.printSchema()
root
 |-- locn_id: integer (nullable = true)
 |-- start_date: string (nullable = true)
 |-- work_min: double (nullable = true)
 |-- coverage: double (nullable = true)
 |-- speed: double (nullable = true)
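
(As an aside, a real Postgres read usually also needs the JDBC driver class and credentials. Below is a minimal sketch; the URL, user, and password are placeholders, not values from the original setup.)

val opts = Map(
  "url"      -> "jdbc:postgresql://host:5432/mydb", // placeholder connection URL
  "dbtable"  -> "locations",
  "user"     -> "db_user",                          // placeholder credentials
  "password" -> "db_password",
  "driver"   -> "org.postgresql.Driver"             // Postgres JDBC driver class
)
val locationDF = spark.read.format("jdbc").options(opts).load()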

Initial Data:

+-------------+----------+-------------------+----------------+------------------+
|      locn_id|start_date|           work_min|        coverage|             speed|
+-------------+----------+-------------------+----------------+------------------+
|            3|2012-02-22|  53.62948333333333|          13.644|3.9306276263070457|
|            7|2012-02-22|0.11681666666666667|             0.0|               0.0|
|            1|2012-02-21| 22.783333333333335|             2.6| 8.762820512820513|
|            1|2012-01-21| 23.033333333333335|             2.6|  8.85897435897436|
|            1|2012-01-21|  44.98533333333334|            6.99| 6.435670004768718|
|            4|2012-02-21| 130.34788333333333|           54.67| 2.384267117858667|
|            2|2012-01-21|           94.61035|           8.909|10.619637445280052|
|            1|2012-02-21|                0.0|             0.0|               0.0|
|            1|2012-02-21|            29.3377|           4.579| 6.407010264249837|
|            1|2012-01-21|  59.13276666666667|           8.096| 7.303948451910409|
|            2|2012-03-21| 166.41843333333333|          13.048|12.754325056202738|
|            1|2012-03-21| 14.853183333333334|           2.721| 5.458722283474213|
|            9|2012-03-21|            1.69895|           0.845|2.0105917159763314|
+-------------+----------+-------------------+----------------+------------------+

I am trying to compute, per year and month, the sum of work_min (converted into hours), the sum of coverage, and the average speed, and form another DataFrame from the result. To do that, I separated the year and month from the start_date column as below, producing two new columns: year and month.

import org.apache.spark.sql.functions.{date_format, to_date}
import spark.implicits._

// Assign the result to a new val; DataFrames are immutable, so the
// transformation would otherwise be discarded.
val locationWithYM = locationDF
  .withColumn("year", date_format(to_date($"start_date"), "yyyy").cast("int"))
  .withColumn("month", date_format(to_date($"start_date"), "MM").cast("int"))

+-------------+----------+-------------------+----------------+------------------+----+-----+
|      locn_id|start_date|           work_min|        coverage|             speed|year|month|
+-------------+----------+-------------------+----------------+------------------+----+-----+
|            3|2012-02-22|  53.62948333333333|          13.644|3.9306276263070457|2012|    2|
|            7|2012-02-22|0.11681666666666667|             0.0|               0.0|2012|    2|
|            1|2012-02-21| 22.783333333333335|             2.6| 8.762820512820513|2012|    2|
|            1|2012-01-21| 23.033333333333335|             2.6|  8.85897435897436|2012|    1|
|            1|2012-01-21|  44.98533333333334|            6.99| 6.435670004768718|2012|    1|
|            4|2012-02-21| 130.34788333333333|           54.67| 2.384267117858667|2012|    2|
|            2|2012-01-21|           94.61035|           8.909|10.619637445280052|2012|    1|
|            1|2012-02-21|                0.0|             0.0|               0.0|2012|    2|
|            1|2012-02-21|            29.3377|           4.579| 6.407010264249837|2012|    2|
|            1|2012-01-21|  59.13276666666667|           8.096| 7.303948451910409|2012|    1|
|            2|2012-03-21| 166.41843333333333|          13.048|12.754325056202738|2012|    3|
|            1|2012-03-21| 14.853183333333334|           2.721| 5.458722283474213|2012|    3|
|            9|2012-03-21|            1.69895|           0.845|2.0105917159763314|2012|    3|
+-------------+----------+-------------------+----------------+------------------+----+-----+
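
As an aside, Spark's built-in year() and month() functions produce the same integer columns without the date_format/cast round-trip; a minimal sketch, assuming the same locationDF:

import org.apache.spark.sql.functions.{to_date, year, month}

val locationWithYM = locationDF
  .withColumn("year", year(to_date($"start_date")))   // extracts the year as an integer
  .withColumn("month", month(to_date($"start_date"))) // extracts the month as an integer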

But I don't understand how to perform the aggregation all at once: a sum over the two columns work_min and coverage, plus the average of the speed column, for each particular year and month, to obtain a result like the one below.

+----+-----+-------------+------------+-----------------+
|year|month|sum_work_mins|sum_coverage|        avg_speed|
+----+-----+-------------+------------+-----------------+
|2012|    1|  221.7617833|      26.595|11.07274342031118|
|2012|    2|  236.2152166|      75.493|7.161575173745354|
|2012|    3|  182.9705666|      16.614|6.741213018551094|
+----+-----+-------------+------------+-----------------+

Could anyone let me know how I can achieve that?


1 Answer


I think you are looking for this:

import org.apache.spark.sql.functions.{sum, avg}

locationWithYM
  .groupBy("year", "month")
  .agg(
    sum("work_min").as("sum_work_min"),
    sum("coverage").as("sum_coverage"),
    avg("speed").as("avg_speed"))
  .show()
+----+-----+------------------+------------------+-----------------+
|year|month|      sum_work_min|      sum_coverage|        avg_speed|
+----+-----+------------------+------------------+-----------------+
|2012|    1|221.76178333333334|26.595000000000002|8.304557565233385|
|2012|    2| 236.2152166666667|            75.493|3.580787586872677|
|2012|    3|182.97056666666666|            16.614|6.741213018551094|
+----+-----+------------------+------------------+-----------------+
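
Since you also want work_min converted into hours, dividing the summed minutes by 60 inside the same agg covers that; a minimal sketch (the column name sum_work_hours is my own choice):

locationWithYM
  .groupBy("year", "month")
  .agg(
    (sum("work_min") / 60).as("sum_work_hours"), // minutes -> hours
    sum("coverage").as("sum_coverage"),
    avg("speed").as("avg_speed"))
  .show()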

Hope it helps.