15
votes

I am trying to parse a date using to_date(), but I get the following exception.

SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '12/1/2010 8:26' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.

The exception suggests I should use a legacy time parser; for starters, I don't know how to set it to LEGACY.

Here is my implementation:

dfWithDate = df.withColumn("date", to_date(col("InvoiceDate"), "MM/dd/yyyy"))

My dates are in the following format:

+--------------+
|   InvoiceDate|
+--------------+
|12/1/2010 8:26|
|12/1/2010 8:26|
|12/1/2010 8:26|
|12/1/2010 8:26|
|12/1/2010 8:26|
+--------------+

5 Answers

22
votes
spark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")
df.withColumn("date", to_date(col("InvoiceDate"), "MM/dd/yyyy")).show()


+--------------+----------+
|   InvoiceDate|      date|
+--------------+----------+
|12/1/2010 8:26|2010-12-01|
+--------------+----------+

# in the above code, spark refers to the SparkSession
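
The same policy can also be set through the SparkSession configuration API instead of a SQL statement. A minimal sketch, assuming an existing SparkSession named spark and the question's df:

from pyspark.sql.functions import col, to_date

# equivalent to the SQL statement above; restores the pre-3.0 parsing behavior
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")

df.withColumn("date", to_date(col("InvoiceDate"), "MM/dd/yyyy")).show()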
5
votes

You can keep using the new Spark 3 implementation by parsing the string into a timestamp first, then casting it into a date:

from pyspark.sql import functions as F

dfWithDate = df.withColumn("date", F.to_date(F.to_timestamp(F.col("InvoiceDate"), "M/d/yyyy H:mm")))

dfWithDate.show()
#+--------------+----------+
#|   InvoiceDate|      date|
#+--------------+----------+
#| 2/1/2010 8:26|2010-02-01|
#| 2/1/2010 8:26|2010-02-01|
#| 2/1/2010 8:26|2010-02-01|
#| 2/1/2010 8:26|2010-02-01|
#|12/1/2010 8:26|2010-12-01|
#+--------------+----------+
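
An equivalent way to express the same idea is to cast the parsed timestamp down to a date. A minimal sketch under the same assumptions (a DataFrame df with an InvoiceDate string column):

from pyspark.sql import functions as F

# parse with the new Spark 3 parser, then cast the timestamp to a date
dfWithDate = df.withColumn(
    "date",
    F.to_timestamp(F.col("InvoiceDate"), "M/d/yyyy H:mm").cast("date"),
)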
2
votes

Instead of using the legacy parser, you could also update the date format from MM/dd/yyyy to MM-dd-yyyy.

This is not a solution, as it returns NULL values.

2
votes

In case you want to keep using the Spark 3.0 parser (rather than the legacy time parser), you can just use a single d in "MM/d/yyyy":

dfWithDate = df.withColumn("date", to_date(col("InvoiceDate"), "MM/d/yyyy"))
1
vote

According to this, in Spark 3 you should use the pattern "M/d/y". It works for me.
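
A minimal sketch applying that pattern to the question's column (df and InvoiceDate as in the question):

from pyspark.sql.functions import col, to_date

# "M" and "d" accept one- or two-digit month and day values
dfWithDate = df.withColumn("date", to_date(col("InvoiceDate"), "M/d/y"))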