You can use the from_json function to get the JSON fields, with a slight modification to the values, as below:
from pyspark.sql import functions as f

data = [
    ('[{"@id":"Party_1","@ObjectID":"Policy_1"},{"@id":"Party_2","@ObjectID":"Policy_2"},{"@id":"Party_3","@ObjectID":"Policy_3"}]', 2767),
    ('[{"@id":"Party_1","@ObjectID":"Policy_1"},{"@id":"Party_2","@ObjectID":"Policy_2"},{"@id":"Party_3","@ObjectID":"Policy_3"}]', 4235)
]

# Wrap the top-level JSON array in an object ({"arr": [...]}) so the
# inferred schema gets a named field holding the array
df = spark.createDataFrame(data).toDF("value", "count") \
    .withColumn("value", f.regexp_replace(f.col("value"), "\\[\\{", "{\"arr\": [{")) \
    .withColumn("value", f.regexp_replace(f.col("value"), "\\}\\]", "}]}"))

# Infer the schema from the JSON strings themselves
json_schema = spark.read.json(df.rdd.map(lambda row: row.value)).schema

resultDF = df.select(f.from_json("value", schema=json_schema).alias("array_col")) \
    .select("array_col.*")

resultDF.printSchema()
resultDF.show(truncate=False)
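To see what the two regexp_replace calls are doing, here is a plain-Python sketch of the same rewrite on one sample value (using anchored patterns for illustration; Spark's regexp_replace is unanchored, but the wrapper patterns only occur at the ends of these strings): the raw value is a top-level JSON array, and wrapping it as {"arr": [...]} turns it into a JSON object with a named field.

```python
import json
import re

raw = '[{"@id":"Party_1","@ObjectID":"Policy_1"}]'

# Replace the leading "[{" with '{"arr": [{' and the trailing "}]" with "}]}"
wrapped = re.sub(r'^\[\{', '{"arr": [{', raw)
wrapped = re.sub(r'\}\]$', '}]}', wrapped)

# The result is a valid JSON object whose "arr" field holds the original array
parsed = json.loads(wrapped)
```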
Alternatively, you can pass a custom schema to from_json if you want to keep the nested JSON as a string.
Output Schema:
root
|-- arr: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- @ObjectID: string (nullable = true)
| | |-- @id: string (nullable = true)
Output:
+---------------------------------------------------------------+
|arr |
+---------------------------------------------------------------+
|[{Policy_1, Party_1}, {Policy_2, Party_2}, {Policy_3, Party_3}]|
|[{Policy_1, Party_1}, {Policy_2, Party_2}, {Policy_3, Party_3}]|
+---------------------------------------------------------------+