
I am trying to see if we can create new columns from the values in one of the columns of a DataFrame using Spark/Scala. I have a DataFrame with the following data in it:

df.show()

+---+-----------------------+
|id |allvals                |
+---+-----------------------+
|1  |col1,val11|col3,val31  |
|3  |col3,val33|col1,val13  |
|2  |col2,val22             |
+---+-----------------------+

In the above data, col1/col2/col3 are column names, each followed by its value. A column name and its value are separated by a comma (,), and each name/value pair is separated by a pipe (|).
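To illustrate the format, one allvals entry can be parsed with two splits (a plain-Scala sketch, independent of Spark):

```scala
// Split on '|' into name/value pairs, then on ',' into name and value.
val pairs = "col1,val11|col3,val31"
  .split('|')
  .map(_.split(','))
  .collect { case Array(k, v) => k -> v }
  .toMap
// pairs: Map(col1 -> val11, col3 -> val31)
```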

Now, I want to transform it like this:

+---+----------------------+------+------+------+
|id |allvals               |col1  |col2  |col3  |
+---+----------------------+------+------+------+
|1  |col1,val11|col3,val31 |val11 |null  |val31 |
|3  |col3,val33|col1,val13 |val13 |null  |val33 |
|2  |col2,val22            |null  |val22 |null  |
+---+----------------------+------+------+------+

Appreciate any help.


2 Answers


You can transform the DataFrame using split, explode and groupBy/pivot/agg, as follows:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  (1, "col1,val11|col3,val31"),
  (2, "col3,val33|col1,val13"),
  (3, "col2,val22")
).toDF("id", "allvals")

df.withColumn("temp", split($"allvals", "\\|")).          // array of "name,value" strings
  withColumn("temp", explode($"temp")).                   // one row per name/value pair
  withColumn("temp", split($"temp", ",")).                // [name, value]
  select($"id", $"allvals", $"temp".getItem(0).as("k"), $"temp".getItem(1).as("v")).
  groupBy($"id", $"allvals").pivot("k").agg(first($"v"))  // one column per distinct name

// +---+---------------------+-----+-----+-----+
// |id |allvals              |col1 |col2 |col3 |
// +---+---------------------+-----+-----+-----+
// |1  |col1,val11|col3,val31|val11|null |val31|
// |3  |col2,val22           |null |val22|null |
// |2  |col3,val33|col1,val13|val13|null |val33|
// +---+---------------------+-----+-----+-----+

You can convert the column to a Map with a udf:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  (1, "col1,val11|col3,val31"), (3, "col3,val33|col1,val13"), (2, "col2,val22")
).toDF("id", "allvals")

// Parse "k1,v1|k2,v2|..." into Map(k1 -> v1, k2 -> v2, ...)
val to_map = udf((s: String) => s.split('|').map { kv =>
  val Array(k, v) = kv.split(",")
  k -> v
}.toMap)

val dfWithMap = df.withColumn("allvalsmap", to_map($"allvals"))

// Collect the distinct keys across the whole dataset
val keys = dfWithMap.select($"allvalsmap").as[Map[String, String]].flatMap(_.keys.toSeq).distinct.collect

// Add one column per key by looking it up in the map
keys.foldLeft(dfWithMap)((df, k) => df.withColumn(k, $"allvalsmap".getItem(k))).drop("allvalsmap").show
// +---+--------------------+-----+-----+-----+
// | id|             allvals| col3| col1| col2|
// +---+--------------------+-----+-----+-----+
// |  1|col1,val11|col3,v...|val31|val11| null|
// |  3|col3,val33|col1,v...|val33|val13| null|
// |  2|          col2,val22| null| null|val22|
// +---+--------------------+-----+-----+-----+
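The foldLeft-over-keys idea generalizes beyond Spark: each distinct key becomes one column, and rows missing a key get a null/None. A toy sketch of the same pattern with plain Scala Maps instead of DataFrames:

```scala
// Each "row" is a map of the keys it actually carries.
val rows = Seq(
  Map("col1" -> "val11", "col3" -> "val31"),
  Map("col2" -> "val22")
)

// All column names seen anywhere in the data.
val keys = rows.flatMap(_.keys).distinct

// Widen every row to carry an entry (Some(v) or None) for every key.
val widened = rows.map(r => keys.map(k => k -> r.get(k)).toMap)
```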

Inspired by this answer by user6910411.