3 votes

I have a dataframe:

import org.apache.spark.sql.functions._

val groupby = df.groupBy($"column1", $"Date")
  .agg(sum("amount").as("amount"))
  .orderBy($"column1", desc("Date"))

When applying the window function to add the new column difference:

import org.apache.spark.sql.expressions.Window

val windowspec = Window.partitionBy("column1").orderBy(desc("Date"))

groupby.withColumn("difference", lead($"amount", 1, 0).over(windowspec)).show()


+-------+----------+---------+------------------------+
|column1|      Date|   amount|              difference|
+-------+----------+---------+------------------------+
|      A| 3/31/2017| 12345.45| 3456.540000000000000000|
|      A| 2/28/2017|  3456.54|34289.430000000000000000|
|      A| 1/31/2017| 34289.43|45673.987000000000000000|
|      A|12/31/2016|45673.987|                0.00E+00|
+-------+----------+---------+------------------------+

I'm getting decimals with trailing zeros. When I run printSchema() on the above dataframe, the datatype for difference is decimal(38,18). Can someone tell me how to change the datatype to decimal(38,2), or how to remove the trailing zeros?

3 Answers

3 votes

You can cast the column to a specific decimal type, like below:

lead($"amount", 1,0).over(windowspec).cast(DataTypes.createDecimalType(32,2))
0 votes

In pure SQL, you can use the well-known technique:

SELECT ceil(100 * column_name_double)/100 AS cost ...
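
As a sketch of the same idea in Spark SQL, run against the question's grouped dataframe (the view name grouped is my own choice for this example):

groupby.createOrReplaceTempView("grouped")  // register the grouped result as a temp view

spark.sql("SELECT column1, Date, ceil(100 * amount) / 100 AS amount FROM grouped").show()

Note that ceil always rounds up, so 3456.541 would become 3456.55; for conventional rounding to two decimals, Spark's built-in round(amount, 2) is the usual choice.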
0 votes

from pyspark.sql.types import DecimalType

# cast the column in place to decimal(10,2); column_name is a placeholder for your column
df = df.withColumn(column_name, df[column_name].cast(DecimalType(10, 2)))