
I'm trying to get a streaming aggregation/groupBy working in append output mode, to be able to use the resulting stream in a stream-to-stream join. I'm working on (Py)Spark 2.3.2, and I'm consuming from Kafka topics.

My pseudo-code is something like below, running in a Zeppelin notebook

# Functions used below (note: sum/min shadow the Python builtins here)
from pyspark.sql.functions import window, collect_list, struct, count, sum, min

# readStream is a property in PySpark, not a method (no parentheses)
orderStream = spark.readStream.format("kafka").option("startingOffsets", "earliest").....
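
The value on the topic is JSON (see the LAST_MOD sample further down), so before the aggregation I parse it into typed columns, roughly like this (the schema below is a stand-in for my real one, field names and types are abbreviated):

# Sketch only: parse the JSON value and cast LAST_MOD to a proper timestamp.
# The field list here is an assumption, not my real schema.
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

order_schema = StructType([
    StructField("ID", StringType()),
    StructField("PLACED", IntegerType()),
    StructField("LAST_MOD", StringType()),
])

orderStream = (orderStream
    .select(from_json(col("value").cast("string"), order_schema).alias("o"))
    .select("o.*")
    .withColumn("LAST_MOD", col("LAST_MOD").cast("timestamp"))
)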

orderGroupDF = (orderStream
    .withWatermark("LAST_MOD", "20 seconds")  # tolerate events up to 20s late
    .groupBy("ID", window("LAST_MOD", "10 seconds", "5 seconds"))  # 10s windows, sliding every 5s
    .agg(
        collect_list(struct("attra", "attrb2",...)).alias("orders"),
        count("ID").alias("number_of_orders"),
        sum("PLACED").alias("number_of_placed_orders"),
        min("LAST_MOD").alias("first_order_tsd")
    )
)

debug = (orderGroupDF.writeStream
  .outputMode("append")
  .format("memory").queryName("debug").start()
)
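
The memory sink registers the output as an in-memory table named after queryName, so I poll it from the notebook like this:

# Query the in-memory "debug" table that the memory sink maintains
spark.sql("SELECT * FROM debug").show(truncate=False)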

After that, I would have expected data to appear on the debug query so that I can select from it (after the late-arrival window of 20 seconds has expired). But no data ever appears on the debug query (I waited several minutes).

When I change the output mode to update, the query produces output immediately.

Any hint as to what I'm doing wrong?

EDIT: after some more experimentation, I can add the following (but I still don't understand it).

When starting the Spark application, there is quite a lot of old data (with event timestamps well before the current time) on the topic I consume from. After starting, the query seems to read all of these messages (MicroBatchExecution in the log reports "numRowsTotal = 6224", for example), but nothing is produced on the output, and the eventTime watermark reported by MicroBatchExecution stays at epoch (1970-01-01).

After producing a fresh message onto the input topic with an event timestamp very close to the current time, the query immediately outputs all the "queued" records at once and advances the eventTime watermark.
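
The same numbers can also be read off the query handle instead of grepping the log; a minimal sketch using the debug handle from above:

# lastProgress mirrors the MicroBatchExecution report; "eventTime" holds the
# min/max event time of the last batch and the current watermark.
import json
print(json.dumps(debug.lastProgress, indent=2))
print(debug.lastProgress["eventTime"].get("watermark"))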

What I can also see is that there seems to be a timezone issue. My Spark program runs in CET (currently UTC+2). The timestamps in the incoming Kafka messages are in UTC, e.g. "LAST_MOD": "2019-05-14 12:39:39.955595000". I have set spark_sess.conf.set("spark.sql.session.timeZone", "UTC"). Still, the micro-batch report after that "new" message has been produced onto the input topic says

"eventTime" : {
  "avg" : "2019-05-14T10:39:39.955Z",
  "max" : "2019-05-14T10:39:39.955Z",
  "min" : "2019-05-14T10:39:39.955Z",
  "watermark" : "2019-05-14T10:35:25.255Z"
},

So the eventTime somehow lines up with the time in the input message, but it is 2 hours off: the UTC offset has apparently been subtracted twice. Additionally, I fail to see how the watermark calculation works. Given that I set the threshold to 20 seconds, I would have expected the watermark to be 20 seconds behind the max eventTime. But apparently it is 4 minutes 14 seconds behind. I fail to see the logic behind this.
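
The only timezone-related thing I have configured is the session time zone; one more thing worth trying might be forcing the JVM default time zone to UTC on driver and executors as well. These are standard Spark/JVM options, but whether they remove the double offset here is just my guess:

# Sketch: set both the SQL session time zone and the JVM default time zone.
# Note: in an already-running notebook JVM (e.g. Zeppelin), the driver option
# only takes effect if set at launch time, not via an existing session.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .config("spark.sql.session.timeZone", "UTC")
    .config("spark.driver.extraJavaOptions", "-Duser.timezone=UTC")
    .config("spark.executor.extraJavaOptions", "-Duser.timezone=UTC")
    .getOrCreate())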

I'm very confused...

I suspect that the watermark is not strictly set to MAX(eventTime) - threshold. Rather, it seems that for watermarking it only considers events with an eventTime greater than the query start time. At least that's how it looks to me. – jammann

1 Answer


It seems that this was related to the Spark version 2.3.2 that I used, and perhaps more concretely to SPARK-24156. I have upgraded to Spark 2.4.3, and there I get the results of the groupBy immediately (well, of course only after the watermark lateThreshold has expired, but "in the expected timeframe").