I have a PySpark DataFrame with the following structure:
+----------+----------+--------------------+-------+
|      date|numeric_id|      feature_column|   city|
+----------+----------+--------------------+-------+
|2017-08-01|2343434545|[0.0, 0.0, 0.0, 0...| Berlin|
|2017-08-01|2343434545|[0.0, 0.0, 0.0, 0...|   Rome|
|2017-08-01|2343434545|[0.0, 0.0, 0.0, 0...|NewYork|
|2017-08-01|2343434545|[0.0, 0.0, 0.0, 0...|Beijing|
|2019-12-01|6455534545|[0.0, 0.0, 0.0, 0...| Berlin|
|2019-12-01|6455534545|[0.0, 0.0, 0.0, 0...|   Rome|
|2019-12-01|6455534545|[0.0, 0.0, 0.0, 0...|NewYork|
|2019-12-01|6455534545|[0.0, 0.0, 0.0, 0...|Beijing|
+----------+----------+--------------------+-------+
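For reference, here is a small snippet that reproduces this input (the array contents are placeholders, since the real feature vectors are truncated above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder feature vectors; the real arrays are longer.
data = [
    ("2017-08-01", 2343434545, [0.0, 0.0, 0.0, 0.0], "Berlin"),
    ("2017-08-01", 2343434545, [0.0, 0.0, 0.0, 0.0], "Rome"),
    ("2017-08-01", 2343434545, [0.0, 0.0, 0.0, 0.0], "NewYork"),
    ("2017-08-01", 2343434545, [0.0, 0.0, 0.0, 0.0], "Beijing"),
    ("2019-12-01", 6455534545, [0.0, 0.0, 0.0, 0.0], "Berlin"),
    ("2019-12-01", 6455534545, [0.0, 0.0, 0.0, 0.0], "Rome"),
    ("2019-12-01", 6455534545, [0.0, 0.0, 0.0, 0.0], "NewYork"),
    ("2019-12-01", 6455534545, [0.0, 0.0, 0.0, 0.0], "Beijing"),
]
df = spark.createDataFrame(data, ["date", "numeric_id", "feature_column", "city"])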
I want to pivot the DataFrame so that each feature_column x city combination becomes its own column, grouped by date and numeric_id. The output DataFrame should look like this:
+----------+----------+---------------------+--------------------+----------------------+----------------------+
|      date|numeric_id|feature_column_Berlin| feature_column_Rome|feature_column_NewYork|feature_column_Beijing|
+----------+----------+---------------------+--------------------+----------------------+----------------------+
|2017-08-01|2343434545| [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|  [0.0, 0.0, 0.0, 0...|  [0.0, 0.0, 0.0, 0...|
|2019-12-01|6455534545| [0.0, 0.0, 0.0, 0...|[0.0, 0.0, 0.0, 0...|  [0.0, 0.0, 0.0, 0...|  [0.0, 0.0, 0.0, 0...|
+----------+----------+---------------------+--------------------+----------------------+----------------------+
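For what it's worth, this is the DataFrame-API approach I would try first, sketched under the assumption that first() can carry an array value through the pivot (each (date, numeric_id, city) group holds exactly one row, so first() should just pass the array along):

from pyspark.sql import functions as F

cities = ["Berlin", "Rome", "NewYork", "Beijing"]

# Listing the pivot values explicitly avoids an extra pass over the
# data to discover the distinct cities.
pivoted = (
    df.groupBy("date", "numeric_id")
      .pivot("city", cities)
      .agg(F.first("feature_column"))
)

# pivot() names the new columns after the city values, so rename them
# to the feature_column_<city> pattern shown above.
for city in cities:
    pivoted = pivoted.withColumnRenamed(city, "feature_column_" + city)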
This is different from the question on pivoting string columns (Pivot String column on Pyspark Dataframe), since I am dealing with ArrayType columns. I suspect it would be easier to implement in Pandas (though handling the ArrayType columns there would be tricky), so I am curious how to do it using Spark SQL. Any suggestions?
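In case it helps, this is roughly the Spark SQL I had in mind, assuming Spark 2.4+ (where the PIVOT clause became available); untested:

df.createOrReplaceTempView("features")

# Columns not mentioned in the PIVOT clause (date, numeric_id) become
# the implicit GROUP BY keys; the aliases rename the pivoted columns.
pivoted_sql = spark.sql("""
    SELECT *
    FROM features
    PIVOT (
        first(feature_column)
        FOR city IN ('Berlin'  AS feature_column_Berlin,
                     'Rome'    AS feature_column_Rome,
                     'NewYork' AS feature_column_NewYork,
                     'Beijing' AS feature_column_Beijing)
    )
""")
pivoted_sql.show()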