
I am trying to split my Date column, which is currently a string type, into three columns: Year, Month, and Day. I use (PySpark):

split_date=pyspark.sql.functions.split(df['Date'], '-')     
df= df.withColumn('Year', split_date.getItem(0))
df= df.withColumn('Month', split_date.getItem(1))
df= df.withColumn('Day', split_date.getItem(2))

I run into an issue because half my dates are separated by '-' and the other half by '/'. How can I split the Date by either '-' or '/' depending on the case? Additionally, when separated by '/', the format is mm/dd/yyyy, and when separated by '-', it is yyyy-mm-dd.

I want the Date column to be separated into Day, Month and Year.


3 Answers


You just need a little extra code to recognize which date format you have. For example, say your data looks like this:

data = [("2008-05-01",1),("2018-01-01",2),("03/14/2017",3),("01/01/2018",4)]
df = spark.createDataFrame(data,schema=['date','key'])

df.show()


+----------+---+
|      date|key|
+----------+---+
|2008-05-01|  1|
|2018-01-01|  2|
|03/14/2017|  3|
|01/01/2018|  4|
+----------+---+

Now define a UDF that detects the separator and splits accordingly:

from pyspark.sql.functions import *
from pyspark.sql.types import *

# UDF that recognises the separator and returns [year, month, day]
def splitUDF(row):
    if "/" in row:
        mm, dd, yyyy = row.split("/")
    else:  # assume yyyy-mm-dd
        yyyy, mm, dd = row.split("-")
    return [yyyy, mm, dd]


datSplitterUDF = udf(splitUDF, ArrayType(StringType()))
df\
.select(datSplitterUDF(df.date).alias("dt"))\
.withColumn('year',col('dt').getItem(0).cast('int'))\
.withColumn('month',col('dt').getItem(1).cast('int'))\
.withColumn('day',col('dt').getItem(2).cast('int'))\
.show()

Output:

+--------------+----+-----+---+
|            dt|year|month|day|
+--------------+----+-----+---+
|[2008, 05, 01]|2008|    5|  1|
|[2018, 01, 01]|2018|    1|  1|
|[2017, 03, 14]|2017|    3| 14|
|[2018, 01, 01]|2018|    1|  1|
+--------------+----+-----+---+

Try this: `split` accepts a regular expression, so a character class matches either separator.

split_date=pyspark.sql.functions.split(df['Date'], '[-/]')
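A quick plain-Python check of the same pattern (outside Spark) shows why this works; note that the pieces come back in a different order for each format, so you still need a conditional to assign Year/Month/Day correctly:

```python
import re

# The character class [-/] matches either separator,
# which is the same pattern passed to PySpark's split().
print(re.split(r"[-/]", "2008-05-01"))  # ['2008', '05', '01']
print(re.split(r"[-/]", "03/14/2017"))  # ['03', '14', '2017']
```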

Adding to @Pushkr's solution: you can also use `dateutil.parser` to parse either date format into a datetime. Here is a snippet to do that.

import pyspark.sql.functions as func
from pyspark.sql.types import *
from dateutil import parser

# dateutil infers the format, so both yyyy-mm-dd and mm/dd/yyyy parse
def parse_date(date):
    dt = parser.parse(date)
    return [dt.year, dt.month, dt.day]

udf_parse_date = func.udf(parse_date, returnType=ArrayType(IntegerType()))

data = [("2008-05-01",1), ("2018-01-01",2), ("03/14/2017",3), ("01/01/2018",4)]
df = spark.createDataFrame(data, schema=['date','key'])
df = df.select('date', 'key', udf_parse_date('date').alias('date_parse'))
df_parsed = df.select('key', 
                      func.col('date_parse').getItem(0).alias('year'), 
                      func.col('date_parse').getItem(1).alias('month'), 
                      func.col('date_parse').getItem(2).alias('day'))
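
If you would rather avoid the third-party `dateutil` dependency, the same disambiguation can be sketched with only the standard library's `datetime.strptime`, choosing the format string from the separator (a minimal sketch; the `parse_date` helper here is illustrative, not part of the answer above):

```python
from datetime import datetime

def parse_date(s):
    # mm/dd/yyyy when '/' is the separator, yyyy-mm-dd otherwise
    fmt = "%m/%d/%Y" if "/" in s else "%Y-%m-%d"
    dt = datetime.strptime(s, fmt)
    return [dt.year, dt.month, dt.day]

print(parse_date("2008-05-01"))  # [2008, 5, 1]
print(parse_date("03/14/2017"))  # [2017, 3, 14]
```

The same function could be wrapped in a Spark UDF exactly as shown above.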