So I have this streaming DataFrame (gps_messages) in PySpark,

and I want the resulting DataFrame to have the same (all) columns, but only one row per device_unique_id: the row with the highest timestamp value. So basically something like this:
+----------------+-----------+--------+---------+---------+---------+
|device_unique_id|signal_type|latitude|longitude|elevation|timestamp|
+----------------+-----------+--------+---------+---------+---------+
|             TR1| loc_update|-35.5484|149.61684|666.47164|    12345| <-- *Note below
|             TR2| loc_update|-35.5484|149.61684|666.47164|    87251|
|             TR3| loc_update|-35.5484|149.61684|666.47164|    32458|
|             TR4| loc_update|-35.5484|149.61684|666.47164|    98274|
+----------------+-----------+--------+---------+---------+---------+
*Note: only one record is kept for TR1, the one from the original DataFrame with the maximum value of timestamp among all records having device_unique_id == 'TR1'.
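For context, on a static (batch) DataFrame I believe this "latest row per key" would just be a window function, roughly like the sketch below (my own untested sketch; as far as I know, non-time window functions like row_number are not supported on streaming DataFrames, which is why I went through SQL instead):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Batch-only sketch: rank rows per device by timestamp, keep the newest.
# (row_number over a partition is a non-time window function, which
# Structured Streaming does not support, so this only works on batch data.)
w = Window.partitionBy('device_unique_id').orderBy(F.col('timestamp').desc())
latest = gps_messages \
    .withColumn('rn', F.row_number().over(w)) \
    .where(F.col('rn') == 1) \
    .drop('rn')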
So far, I have written this code:
gps_messages.createOrReplaceTempView('gps_table')

# First, find the max timestamp per device
SQL_QUERY = 'SELECT device_unique_id, max(timestamp) AS timestamp ' \
            'FROM gps_table ' \
            'GROUP BY device_unique_id'

# I also tried doing it in a single query with a self-join:
# SQL_QUERY1 = 'SELECT * ' \
#              'FROM gps_table t2 ' \
#              'JOIN (SELECT device_unique_id AS unique_id, max(timestamp) AS time ' \
#              'FROM gps_table t1 ' \
#              'GROUP BY unique_id) t1 ' \
#              'ON t2.device_unique_id = t1.unique_id ' \
#              'AND t2.timestamp = t1.time'

filtered_gps_messages = spark.sql(SQL_QUERY)
filtered_gps_messages.createOrReplaceTempView('table_max_ts')

# Then join back to the original table to recover the full rows
SQL_QUERY = 'SELECT a.device_unique_id, a.signal_type, a.longitude, a.latitude, a.timestamp ' \
            'FROM table_max_ts b, gps_table a ' \
            'WHERE b.timestamp == a.timestamp AND b.device_unique_id == a.device_unique_id'
latest_data_df = spark.sql(SQL_QUERY)

query = latest_data_df \
    .writeStream \
    .outputMode('append') \
    .format('console') \
    .start()

query.awaitTermination()
And it throws this error:
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: 'Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;;
Project [device_unique_id#25, signal_type#26, latitude#27, longitude#28, elevation#29, timestamp#30, unique_id#43, time#44]
+- Join Inner, ((device_unique_id#25 = unique_id#43) && (timestamp#30 = time#44))
   :- SubqueryAlias `t2`
   :  +- SubqueryAlias `gps_table`
   :     +- Project [json#23.device_unique_id AS device_unique_id#25, json#23.signal_type AS signal_type#26, json#23.latitude AS latitude#27, json#23.longitude AS longitude#28, json#23.elevation AS elevation#29, json#23.timestamp AS timestamp#30]
   :        +- Project [jsontostructs(StructField(device_unique_id,StringType,true), StructField(signal_type,StringType,true), StructField(latitude,StringType,true), StructField(longitude,StringType,true), StructField(elevation,StringType,true), StructField(timestamp,StringType,true), value#21, Some(Asia/Kolkata)) AS json#23]
   :           +- Project [cast(value#8 as string) AS value#21]
   :              +- StreamingRelationV2 org.apache.spark.sql.kafka010.KafkaSourceProvider@49a5cdc2, kafka, Map(subscribe -> gpx_points_input, kafka.bootstrap.servers -> 172.17.9.26:9092), [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13], StreamingRelation DataSource(org.apache.spark.sql.SparkSession@611544,kafka,List(),None,List(),None,Map(subscribe -> gpx_points_input, kafka.bootstrap.servers -> 172.17.9.26:9092),None), kafka, [key#0, value#1, topic#2, partition#3, offset#4L, timestamp#5, timestampType#6]
   +- SubqueryAlias `t1`
      +- Aggregate [device_unique_id#25], [device_unique_id#25 AS unique_id#43, max(timestamp#30) AS time#44]
         +- SubqueryAlias `t1`
            +- SubqueryAlias `gps_table`
               +- Project [json#23.device_unique_id AS device_unique_id#25, json#23.signal_type AS signal_type#26, json#23.latitude AS latitude#27, json#23.longitude AS longitude#28, json#23.elevation AS elevation#29, json#23.timestamp AS timestamp#30]
                  +- Project [jsontostructs(StructField(device_unique_id,StringType,true), StructField(signal_type,StringType,true), StructField(latitude,StringType,true), StructField(longitude,StringType,true), StructField(elevation,StringType,true), StructField(timestamp,StringType,true), value#21, Some(Asia/Kolkata)) AS json#23]
                     +- Project [cast(value#8 as string) AS value#21]
                        +- StreamingRelationV2 org.apache.spark.sql.kafka010.KafkaSourceProvider@49a5cdc2, kafka, Map(subscribe -> gpx_points_input, kafka.bootstrap.servers -> 172.17.9.26:9092), [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13], StreamingRelation DataSource(org.apache.spark.sql.SparkSession@611544,kafka,List(),None,List(),None,Map(subscribe -> gpx_points_input, kafka.bootstrap.servers -> 172.17.9.26:9092),None), kafka, [key#0, value#1, topic#2, partition#3, offset#4L, timestamp#5, timestampType#6]'
Process finished with exit code 1
If I try with "complete" output mode instead, it says:
AnalysisException: Inner Join between two streaming dataframes/datasets is not supported in Complete mode, only in append mode.
What am I doing wrong here? Is there an alternative way or a workaround? Apologies for the type of question; I am new to Spark. Thanks.
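From the first error, my understanding is that Spark at least wants a watermark before it will accept append mode with an aggregation. Below is a rough sketch of what I think that looks like; the event_time column, the cast (assuming my string timestamp holds epoch seconds), the 10-minute lateness threshold, and the 5-minute window are all my own guesses, not a known-working solution:

from pyspark.sql import functions as F

# Sketch: derive a real event-time column and declare a watermark so Spark
# can bound the aggregation state. Append mode also seems to require the
# event-time column (via window()) in the grouping keys, so this gives the
# max timestamp per device *per 5-minute window*, not one global row.
with_ts = gps_messages.withColumn(
    'event_time', F.col('timestamp').cast('long').cast('timestamp'))

latest_per_device = with_ts \
    .withWatermark('event_time', '10 minutes') \
    .groupBy(F.window('event_time', '5 minutes'), 'device_unique_id') \
    .agg(F.max('event_time').alias('event_time'))

query = latest_per_device.writeStream \
    .outputMode('append') \
    .format('console') \
    .start()

But even if that runs, I still don't see how to join the result back to gps_table for the full rows without hitting the streaming-join limitations, so I'm not sure this is the right direction.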
Comment: "not supported in Complete mode, only in append mode" or "without watermark"? - OneCricketeer