I have a PySpark dataframe with the following schema:

+----------+-------------------+-----------------------------------+------------------+
|      date|         numeric_id|                     feature_column|              city|
+----------+-------------------+-----------------------------------+------------------+
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|            Berlin|
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|              Rome|
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|           NewYork|
|2017-08-01|         2343434545|               [0.0, 0.0, 0.0, 0...|           Beijing|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|            Berlin|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|              Rome|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|           NewYork|
|2019-12-01|         6455534545|               [0.0, 0.0, 0.0, 0...|           Beijing|
+----------+-------------------+-----------------------------------+------------------+

I want to pivot the dataframe so that each feature_column x city combination becomes a new column, grouped by date and numeric_id. The output dataframe should look like this:

+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
|      date|   numeric_id| feature_column_Berlin| feature_column_Rome| feature_column_NewYork|feature_column_Beijing|
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+
|2017-08-01|   2343434545|  [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...   |[0.0, 0.0, 0.0, 0...  |
|2019-12-01|   6455534545|  [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...   |[0.0, 0.0, 0.0, 0...  |
+----------+-------------+----------------------+--------------------+-----------------------+----------------------+

This is different from the existing question Pivot String column on Pyspark Dataframe, which pivots string columns, since I am dealing with ArrayType columns. I'm thinking it would be easier to implement in Pandas (although handling ArrayType columns there will be tricky), so I am curious how to do it using Spark SQL. Any suggestions?
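
For context, this is roughly what I imagine in PySpark (just a sketch; I'm not sure which aggregate to use so that the ArrayType column survives the pivot):

from pyspark.sql import functions as F

# Rough sketch only: pivot on city, one feature array per (date, numeric_id).
# F.first is just a guess -- not sure it is the right aggregate for ArrayType columns.
pivoted = df.groupBy("date", "numeric_id").pivot("city").agg(F.first("feature_column"))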


1 Answer

// First, create some sample data and load it into a DataFrame.
import org.apache.spark.sql.functions._
import spark.implicits._ // needed for toDF when not running in spark-shell

val df = Seq(
  ("2017-08-01","2343434545",Array("0.0","0.0","0.0","0.0"),"Berlin"), ("2017-08-01","2343434545",Array("0.0","0.0","0.0","0.0"),"Rome"), ("2017-08-01","2343434545",Array("0.0","0.0","0.0","0.0"),"NewYork"), ("2017-08-01","2343434545",Array("0.0","0.0","0.0","0.0"),"Beijing"),
  ("2019-12-01","6455534545",Array("0.0","0.0","0.0","0.0"),"Berlin"), ("2019-12-01","6455534545",Array("0.0","0.0","0.0","0.0"),"Rome"), ("2019-12-01","6455534545",Array("0.0","0.0","0.0","0.0"),"NewYork"), ("2019-12-01","6455534545",Array("0.0","0.0","0.0","0.0"),"Beijing")
).toDF("date","numeric_id","feature_column","city")

df.groupBy("date","numeric_id").pivot("city")
 .agg(collect_list("feature_column"))
.withColumnRenamed("Beijing","feature_column_Beijing")
.withColumnRenamed("Berlin","feature_column_Berlin")
.withColumnRenamed("NewYork","feature_column_NewYork")
.withColumnRenamed("Rome","feature_column_Rome").show()

Running this produces the pivoted output, one row per date and numeric_id with the four feature_column_* columns (shown as a screenshot in the original answer).
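
If you need the same thing directly in PySpark, a roughly equivalent sketch (assuming df is the PySpark DataFrame from the question) would be:

from pyspark.sql import functions as F

# Same idea in PySpark: group, pivot on city, aggregate the feature arrays.
# F.collect_list mirrors the Scala code above; F.first would keep a flat array instead.
pivoted = (df.groupBy("date", "numeric_id")
             .pivot("city")
             .agg(F.collect_list("feature_column")))

# Prefix the pivoted city columns, mirroring the withColumnRenamed calls above.
for city in ["Beijing", "Berlin", "NewYork", "Rome"]:
    pivoted = pivoted.withColumnRenamed(city, "feature_column_" + city)

pivoted.show()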