
I have a PySpark DataFrame (df) with n columns, and I would like to generate another DataFrame of n columns, where each column records the percentage difference between consecutive rows in the corresponding column of the original df. The column headers in the new df should be the corresponding column header in the old DataFrame plus "_diff". With the following code I can compute the new columns of percentage changes for each column in the original df, but I am not able to collect them into a new df with suitable column headers:

from pyspark.sql import SparkSession
from pyspark.sql.window import Window
import pyspark.sql.functions as func

spark = (SparkSession
            .builder
            .appName('pct_change')
            .enableHiveSupport()
            .getOrCreate())

df = spark.createDataFrame([(1, 10, 11, 12), (2, 20, 22, 24), (3, 30, 33, 36)], 
                       ["index", "col1", "col2", "col3"])
w = Window.orderBy("index")

# Each iteration builds a Column expression for the log-difference between
# consecutive rows, but the expression is never attached to a new DataFrame.
for i in range(1, len(df.columns)):
    col_pctChange = func.log(df[df.columns[i]]) - func.log(func.lag(df[df.columns[i]]).over(w))

Thanks


1 Answer


In this case, you can use a list comprehension inside a call to select.

To make the code a little more compact, we can first get the columns we want to diff in a list:

diff_columns = [c for c in df.columns if c != 'index']
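
For the example DataFrame in the question this evaluates to:

print(diff_columns)
# ['col1', 'col2', 'col3']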

Next, select the index column and iterate over diff_columns to compute the new columns. Use .alias() to rename each resulting column:

df_diff = df.select(
    'index',
    *[(func.log(func.col(c)) - func.log(func.lag(func.col(c)).over(w))).alias(c + "_diff")
      for c in diff_columns]
)
df_diff.show()
#+-----+------------------+-------------------+-------------------+
#|index|         col1_diff|          col2_diff|          col3_diff|
#+-----+------------------+-------------------+-------------------+
#|    1|              null|               null|               null|
#|    2| 0.693147180559945| 0.6931471805599454| 0.6931471805599454|
#|    3|0.4054651081081646|0.40546510810816416|0.40546510810816416|
#+-----+------------------+-------------------+-------------------+
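
Note that log(x) - log(lag(x)) is the log return, which only approximates the percentage change for small moves. If you want the literal percentage change, a minimal sketch along the same lines (assuming the same df, w, func, and diff_columns defined above) would be:

# Literal percentage change: (current - previous) / previous, per column.
df_pct = df.select(
    'index',
    *[((func.col(c) - func.lag(func.col(c)).over(w)) / func.lag(func.col(c)).over(w))
          .alias(c + "_diff")
      for c in diff_columns]
)
df_pct.show()
# For this data, rows 2 and 3 give 1.0 and 0.5 in every column;
# row 1 is null because lag() has no preceding row.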