2
votes

I am working in Azure Databricks. I run a Python script in a notebook that gets the data from SQL. I tried to split the datetime column into separate date and time columns. Here is the Python code:

    pushdown_query = "(SELECT * FROM STAGE.OutagesAndInterruptions) int_alias"
    df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)

    df['INTERRUPTION_DATE']=df['INTERRUPTION_TIME'].dt.date

df['INTERRUPTION_TIME'] looks like:

+-------------------+
|  INTERRUPTION_TIME|
+-------------------+
|1997-05-12 09:57:00|
|1998-03-08 13:00:00|
|1998-02-26 13:00:00|
|1998-02-26 13:00:00|
|1998-03-03 10:04:00|
|1998-05-20 09:27:00|
|1998-11-21 08:51:00|
|1998-11-27 08:44:00|
|1998-10-19 01:19:00|
|1998-10-19 01:44:00|
|2000-03-13 07:00:00|
|2000-03-19 07:30:00|
|2000-08-04 12:55:00|
|2002-09-30 18:11:00|
|2002-09-30 18:11:00|
|2002-05-06 09:22:00|
|2002-01-16 13:15:00|
|2003-01-08 15:46:00|
|2003-02-04 10:25:00|
|2003-02-04 10:25:00|
+-------------------+

When I run the code, it throws this error message:

TypeError: 'DataFrame' object does not support item assignment
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<command-2244924718685919> in <module>
----> 1 df['INTERRUPTION_DATE']=df['INTERRUPTION_TIME'].dt.date

TypeError: 'DataFrame' object does not support item assignment

Can we create new columns on a DataFrame like this? How can I create new columns on a DataFrame in Azure Databricks?

1
Please share the entire error message, as well as a minimal reproducible example. - AMC
@AMC I have added the entire error message and some context. - user2293224
Just wanted to add that in my case the assignment worked for pandas DataFrames but not for PySpark DataFrames, so converting to pandas did the trick for me, in case it helps anyone else - monkey intern
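
(A minimal sketch of the pandas workaround described in the comment above, assuming the result set is small enough to collect to the driver with toPandas() and using the column names from the question:)

    # toPandas() collects all rows to the driver, so this only suits small data
    pdf = df.toPandas()

    # pandas DataFrames do support item assignment
    pdf['INTERRUPTION_DATE'] = pdf['INTERRUPTION_TIME'].dt.date
    pdf['TIME'] = pdf['INTERRUPTION_TIME'].dt.time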

1 Answer

4
votes

This should work. Spark DataFrames are immutable, so instead of item assignment you create a new DataFrame with withColumn:

from pyspark.sql.types import DateType

df2 = df.withColumn('INTERRUPTION_DATE', df['INTERRUPTION_TIME'].cast(DateType()))

Edit After Comment:

from pyspark.sql.functions import date_format

df.select(date_format('INTERRUPTION_TIME', 'M/d/yyyy').alias('INTERRUPTION_DATE'),
          date_format('INTERRUPTION_TIME', 'h:m:s a').alias('TIME'))
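
If you want to keep the original columns and just add the two new ones, here is a sketch of the same idea using withColumn (assuming the column names above; to_date gives a DateType column, date_format returns a formatted string):

from pyspark.sql.functions import date_format, to_date

# Each withColumn call returns a new DataFrame; the original df is unchanged
df2 = (df
       .withColumn('INTERRUPTION_DATE', to_date('INTERRUPTION_TIME'))
       .withColumn('TIME', date_format('INTERRUPTION_TIME', 'h:m:s a')))

df2.show()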