I don't see any mention of speculative execution in the Apache Beam documentation, but this post claims that Beam has something like it:
> The ParDo transformation is fault-tolerant, i.e. if it crashes, it is rerun. The transformation also has a concept of speculative execution (similar in its basics to speculative execution in Spark). The processing of a given subset of the dataset can be executed on 2 different workers at any time. The results from the quickest worker are kept, and those from the slower one are discarded. On this occasion it is important to emphasize that the ParDo implementation must be aware of parallel execution on the same subset of data.
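If I understand the claim, "aware of parallel execution" would mean the DoFn's side effects must be idempotent, since two attempts may process the same bundle and only one result is kept. Here is a hypothetical sketch of that idea in plain Python (this is not Beam's API; `commit`, `process_bundle`, and the in-memory `results` sink are made-up names for illustration):

```python
# Hypothetical sketch (not Beam's API): why side effects must be idempotent
# when two workers may speculatively process the same bundle.

results = {}  # simulated durable sink, keyed by a deterministic element id

def commit(element_id, value):
    # Idempotent write: committing the same element twice is a no-op,
    # so it does not matter which speculative attempt finishes first.
    results.setdefault(element_id, value)

def process_bundle(bundle):
    # Each attempt runs the same pure transformation on the same subset.
    for element_id, x in bundle:
        commit(element_id, x * x)

bundle = [(0, 1), (1, 2), (2, 3)]
process_bundle(bundle)  # fast worker's attempt
process_bundle(bundle)  # slow duplicate attempt; its writes are no-ops

print(results)  # → {0: 1, 1: 4, 2: 9}
```

Whether any Beam runner actually launches duplicate attempts like this is exactly what I'm asking about below.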
Is this true? Do Beam runners actually perform speculative execution of ParDo?