
I have a DataFrame in PySpark with a column of URI query strings (StringType), like this:

+--------------+ 
| cs_uri_query |
+--------------+
| a=1&b=2&c=3  |
+--------------+
| d&e=&f=4     |
+--------------+

I need to convert this column into an ArrayType of structs with the following structure:

ArrayType(StructType([StructField('key', StringType(), nullable=False),
                      StructField('value', StringType(), nullable=True)]))
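
In other words, the target type is array<struct<key:string,value:string>>. A minimal check of that type definition, just for illustration:

# Illustrative only: confirm the type's DDL form
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

target_type = ArrayType(StructType([StructField('key', StringType(), nullable=False),
                                    StructField('value', StringType(), nullable=True)]))
print(target_type.simpleString())  # array<struct<key:string,value:string>>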

The expected column looks like this:

+------------------------------------------------------------+ 
| cs_uri_query                                               |
+------------------------------------------------------------+
| [{key=a, value=1},{key=b, value=2},{key=c, value=3}]       |
+------------------------------------------------------------+
| [{key=d, value=null},{key=e, value=null},{key=f, value=4}] |
+------------------------------------------------------------+

A UDF is the only way I have found to achieve this. If possible, I would like to use pure Spark functions and avoid UDFs, since UDFs have poor performance in PySpark, unlike in Spark with Scala.

This is my code using a UDF:

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

def parse_query(query):
    # Split 'a=1&b=2' into [{'key': 'a', 'value': '1'}, {'key': 'b', 'value': '2'}]
    args = None
    if query:
        args = []
        for arg in query.split("&"):
            if arg:
                if "=" in arg:
                    a = arg.split("=")
                    if a[0]:
                        # Empty values ("e=") become None
                        v = a[1] if a[1] else None
                        args.append({"key": a[0], "value": v})
                else:
                    # Keys with no "=" at all ("d") get a null value
                    args.append({"key": arg, "value": None})
    return args

uri_query = ArrayType(StructType([StructField('key', StringType(), nullable=True),
                                  StructField('value', StringType(), nullable=True)]))

udf_parse_query = udf(lambda args: parse_query(args), uri_query)

df = df.withColumn("cs_uri_query", udf_parse_query(df["cs_uri_query"]))
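
The plain Python function itself returns the right structure for the two sample values:

print(parse_query("a=1&b=2&c=3"))
# [{'key': 'a', 'value': '1'}, {'key': 'b', 'value': '2'}, {'key': 'c', 'value': '3'}]
print(parse_query("d&e=&f=4"))
# [{'key': 'd', 'value': None}, {'key': 'e', 'value': None}, {'key': 'f', 'value': '4'}]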

Can someone open my eyes with an amazing solution?

1 Answer


For Spark 2.4+, you can split on & and then use the transform function to convert each key=value element into a struct(key, value):

from pyspark.sql.functions import expr

df = spark.createDataFrame([("a=1&b=2&c=3",), ("d&e=&f=4",)], ["cs_uri_query"])

transform_expr = """transform(split(cs_uri_query, '&'),
                 x -> struct(split(x, '=')[0] as key, split(x, '=')[1] as value)
                 )
                 """

df.withColumn("cs_uri_query", expr(transform_expr)).show(truncate=False)

#+------------------------+
#|cs_uri_query            |
#+------------------------+
#|[[a, 1], [b, 2], [c, 3]]|
#|[[d,], [e, ], [f, 4]]   |
#+------------------------+
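
If you are on Spark 3.1+ and prefer staying in the Python API, the same logic can be written with pyspark.sql.functions.transform instead of a SQL expression (a sketch of the equivalent, under that version assumption):

from pyspark.sql import functions as F

# Same transformation via the DataFrame API (Spark 3.1+)
df.withColumn(
    "cs_uri_query",
    F.transform(
        F.split("cs_uri_query", "&"),
        lambda x: F.struct(
            F.split(x, "=")[0].alias("key"),
            F.split(x, "=")[1].alias("value"),
        ),
    ),
).show(truncate=False)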

EDIT

If you want to filter out keys that are null or empty, you can use filter along with the above transform expression:

transform_expr = """filter(transform(split(cs_uri_query, '&'),
                                     x -> struct(split(x, '=')[0] as key, split(x, '=')[1] as value)
                           ),
                           x -> ifnull(x.key, '') <> ''
                    )
                 """