According to https://spark.apache.org/docs/latest/streaming-programming-guide.html#window-operations, reduceByKeyAndWindow will
"Return a new single-element stream, created by aggregating elements in the stream over a sliding interval using func"
The example given is generating word counts over the last 30 seconds of data, every 10 seconds.
The part I am confused about is how exactly reduceByKeyAndWindow
works. Since a windowed stream is composed of multiple RDDs, wouldn't reduceByKeyAndWindow
just return a stream of RDDs instead of one RDD?
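To make my mental model concrete, here is a minimal pure-Python sketch (no Spark) of the sliding-window semantics as I understand them, assuming a 10-second batch interval, a 30-second window, and a 10-second slide. The batch data, the `windowed_word_counts` helper, and all names are my own illustration, not Spark API:

```python
from collections import Counter

# Simulated micro-batches, one list of words per 10-second batch interval.
batches = [
    ["a", "b", "a"],  # t = 0-10s
    ["b", "c"],       # t = 10-20s
    ["a"],            # t = 20-30s
    ["c", "c"],       # t = 30-40s
]

window_len = 3  # 30s window / 10s batch interval = 3 batches per window
slide = 1       # 10s slide / 10s batch interval = advance 1 batch at a time

def windowed_word_counts(batches, window_len, slide):
    """Emit one aggregated word count per slide: the reduce-by-key
    result over all batches that fall inside the current window."""
    results = []
    for end in range(window_len, len(batches) + 1, slide):
        window = batches[end - window_len:end]
        counts = Counter()
        for batch in window:
            counts.update(batch)  # merge this batch's words into the window total
        results.append(dict(counts))
    return results

print(windowed_word_counts(batches, window_len, slide))
# Each element is a single aggregated result for one window position,
# even though each window spans several batches.
```

If this sketch is right, then the output stream still has one element per slide interval, but each element is the single reduced result of the whole 30-second window, which is what I am trying to confirm.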