I have a StructType column in my Spark DataFrame that I want to split into multiple columns.

case class Struct(FIELD_1: Int, FIELD_2: Int, FIELD_3: Int)

import spark.implicits._  // required for .toDF when not running in spark-shell

val df = Seq(
    (Struct(1,2,3), 2),
    (Struct(4,5,6), 3)
).toDF("col0", "col1")

df.show()
// df: org.apache.spark.sql.DataFrame = [col0: struct<FIELD_1: int, 
// FIELD_2: int ... 1 more field>, col1: int]
// +---------+----+
// |     col0|col1|
// +---------+----+
// |[1, 2, 3]|   2|
// |[4, 5, 6]|   3|
// +---------+----+

One way to split it into its constituent components is to use the .* operator:

df.select("col0.*", "col1").show()
// +-------+-------+-------+----+
// |FIELD_1|FIELD_2|FIELD_3|col1|
// +-------+-------+-------+----+
// |      1|      2|      3|   2|
// |      4|      5|      6|   3|
// +-------+-------+-------+----+

However, if I first want to apply some UDF myUDF to the column that also returns a struct, the .* method becomes inconvenient to use. Is there a flattenStruct-esque method or function that would let me write something like this?

df.select(flattenStruct(myUDF($"col0")), "col1") 
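
For concreteness, myUDF might look something like the sketch below (a hypothetical example only; any UDF that returns a struct would do). The assumption here is a UDF that receives the struct column as a Row and returns the same case class:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// Hypothetical struct-returning UDF: takes the struct column as a Row
// and returns the same case class with each field incremented.
val myUDF = udf((r: Row) => Struct(r.getInt(0) + 1, r.getInt(1) + 1, r.getInt(2) + 1))

The result of myUDF($"col0") is an unnamed struct expression, so there is no column name to put in front of .* within a single select.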

1 Answer

You can apply the UDF in a first select, alias the result, and then expand it with .* in a second select:

df.select(myUDF($"col0").as("col0"), $"col1").select("col0.*", "col1")
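
Putting it together with a hypothetical struct-returning UDF like the one sketched in the question (any UDF that returns a struct works the same way):

// First select: apply the UDF and give the resulting struct column a name.
// Second select: expand the named struct with .* and keep col1.
df.select(myUDF($"col0").as("col0"), $"col1")
  .select("col0.*", "col1")
  .show()
// With a UDF that adds 1 to each field, this would print:
// +-------+-------+-------+----+
// |FIELD_1|FIELD_2|FIELD_3|col1|
// +-------+-------+-------+----+
// |      2|      3|      4|   2|
// |      5|      6|      7|   3|
// +-------+-------+-------+----+

The key point is that .* needs a named column to expand, which is why the UDF result is aliased in the first select before being flattened in the second.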