9 votes

Consider the following DataFrame:

#+------+---+
#|letter|rpt|
#+------+---+
#|     X|  3|
#|     Y|  1|
#|     Z|  2|
#+------+---+

which can be created using the following code:

df = spark.createDataFrame([("X", 3),("Y", 1),("Z", 2)], ["letter", "rpt"])

Suppose I wanted to repeat each row the number of times specified in the column rpt, just like in this question.

One way would be to replicate my solution to that question using the following pyspark-sql query:

query = """
SELECT *
FROM
  (SELECT DISTINCT *,
                   posexplode(split(repeat(",", rpt), ",")) AS (index, col)
   FROM df) AS a
WHERE index > 0
"""
df.createOrReplaceTempView("df")  # register df as a temp view so the query can reference it by name
query = query.replace("\n", " ")  # replace newlines with spaces, avoid EOF error
spark.sql(query).drop("col").sort('letter', 'index').show()
#+------+---+-----+
#|letter|rpt|index|
#+------+---+-----+
#|     X|  3|    1|
#|     X|  3|    2|
#|     X|  3|    3|
#|     Y|  1|    1|
#|     Z|  2|    1|
#|     Z|  2|    2|
#+------+---+-----+
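
To see why this works, here is a minimal sketch with the count hard-coded to 3 for illustration: repeat(",", 3) produces ",,,", splitting that on "," yields four empty strings, and posexplode numbers them 0 through 3, so the outer WHERE index > 0 keeps exactly rpt copies of each row.

spark.sql('SELECT posexplode(split(repeat(",", 3), ",")) AS (index, col)').show()
# posexplode emits indices 0 through 3, each paired with an empty string in col;
# filtering index > 0 leaves the 3 rows wanted for rpt = 3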

This works and produces the correct answer. However, I am unable to replicate this behavior using the DataFrame API functions.

I tried:

import pyspark.sql.functions as f
df.select(
    f.posexplode(f.split(f.repeat(",", f.col("rpt")), ",")).alias("index", "col")
).show()

But this results in:

TypeError: 'Column' object is not callable

Why am I able to pass the column as an input to repeat within the query, but not from the API? Is there a way to replicate this behavior using the Spark DataFrame functions?

f.expr("""repeat(",", rpt)""") instead of f.repeat(",", f.col("rpt"))? – Alper t. Turker
@user8371915 df.select('*', f.expr('posexplode(split(repeat(",", rpt), ","))').alias("index", "col")).where('index > 0').drop("col").sort('letter', 'index').show() works. Do you know if this is the only way to use a Column as a parameter? Why does it work in the SQL syntax? – pault
@user8371915 Please consider posting your suggestion as an answer (it can be edited out of my question). I think it would be beneficial to others in the future. – pault

1 Answer

13 votes

One option is to use pyspark.sql.functions.expr, which allows you to use column values as inputs to Spark SQL functions.

Based on @user8371915's comment I have found that the following works:

from pyspark.sql.functions import expr

df.select(
    '*',
    expr('posexplode(split(repeat(",", rpt), ","))').alias("index", "col")
).where('index > 0').drop("col").sort('letter', 'index').show()
#+------+---+-----+
#|letter|rpt|index|
#+------+---+-----+
#|     X|  3|    1|
#|     X|  3|    2|
#|     X|  3|    3|
#|     Y|  1|    1|
#|     Z|  2|    1|
#|     Z|  2|    2|
#+------+---+-----+
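
One alternative worth noting (a sketch; array_repeat requires Spark 2.4+, and the result name and the explicit int cast, which guards against rpt being stored as a bigint, are illustrative choices): you can skip the dummy comma string entirely by repeating the value itself with array_repeat and exploding that.

import pyspark.sql.functions as f

result = (
    df.select(
        'letter',
        'rpt',
        # array_repeat builds an array holding rpt copies of the letter;
        # posexplode numbers those copies starting at 0
        f.expr('posexplode(array_repeat(letter, cast(rpt AS int)))').alias("index", "col"),
    )
    .withColumn('index', f.col('index') + 1)  # shift to a 1-based index to match the output above
    .drop('col')
    .sort('letter', 'index')
)
result.show()
# produces the same letter/rpt/index table as shown above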