Suppose you had the following DataFrame:
df = spark.createDataFrame(
    [('2018-02-01T13:13:12.023507',), ('2018-02-01T13:13:12.323507',)],
    ["date"]
)
df.show(truncate=False)
#+--------------------------+
#|date |
#+--------------------------+
#|2018-02-01T13:13:12.023507|
#|2018-02-01T13:13:12.323507|
#+--------------------------+
unix_timestamp only supports second precision. If you only care about ordering down to the second, you can do the following:
from pyspark.sql.functions import col, unix_timestamp
df.withColumn(
    'new_date',
    unix_timestamp(col('date'), "yyyy-MM-dd'T'HH:mm:ss").cast("timestamp")
).sort('new_date').show(truncate=False)
#+--------------------------+---------------------+
#|date |new_date |
#+--------------------------+---------------------+
#|2018-02-01T13:13:12.323507|2018-02-01 13:13:12.0|
#|2018-02-01T13:13:12.023507|2018-02-01 13:13:12.0|
#+--------------------------+---------------------+
But since these two example rows have the same date and time up to the second, the sorting here will be indeterminate.
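To see the tie concretely, you can count the distinct truncated values (a quick sanity check, not part of the sorting itself; distinct_seconds is just an illustrative alias):
from pyspark.sql.functions import countDistinct
df.select(
    countDistinct(unix_timestamp('date', "yyyy-MM-dd'T'HH:mm:ss")).alias('distinct_seconds')
).show(truncate=False)
#+----------------+
#|distinct_seconds|
#+----------------+
#|1               |
#+----------------+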
If the sub-second portion is important to you, you can handle it yourself. One way is to split the date column on the ., which gives you the microseconds, and divide them by 1000000.0 to turn them into a fraction of a second. Add that fraction to the unix_timestamp value to get a numeric column you can sort on:
from pyspark.sql.functions import split
df.withColumn(
    'order_column',
    unix_timestamp('date', "yyyy-MM-dd'T'HH:mm:ss") + split('date', r"\.")[1]/1000000.0
).sort("order_column").show(truncate=False)
#+--------------------------+-------------------+
#|date |order_column |
#+--------------------------+-------------------+
#|2018-02-01T13:13:12.023507|1.517508792023507E9|
#|2018-02-01T13:13:12.323507|1.517508792323507E9|
#+--------------------------+-------------------+
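If you don't want the helper column in the result, you can drop it after sorting; a minimal sketch (order_column exists only to define the sort order):
df.withColumn(
    'order_column',
    unix_timestamp('date', "yyyy-MM-dd'T'HH:mm:ss") + split('date', r"\.")[1]/1000000.0
).sort("order_column").drop("order_column").show(truncate=False)
#+--------------------------+
#|date                      |
#+--------------------------+
#|2018-02-01T13:13:12.023507|
#|2018-02-01T13:13:12.323507|
#+--------------------------+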