I'm studying distributed systems and came across this old question: stackoverflow link
I really can't understand the difference between the exactly-once, at-least-once, and at-most-once guarantees. I've come across these concepts in Kafka, Flink, Storm, and also Cassandra. For instance, some people say that Flink is better because it offers exactly-once guarantees, while Storm only offers at-least-once.
My understanding is that exactly-once mode is better for latency but at the same time worse for fault tolerance, is that right? How can I recover a stream if I have no duplicates? And then... if this is a real problem, why is the exactly-once guarantee considered better than the others?
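To make my confusion concrete, here is a toy sketch of how I currently picture the three guarantees, using a simulated lossy link. Everything here is made up for illustration (names like `lossySend` are hypothetical, not a real Kafka/Flink/Storm API). In particular, from what I've read it sounds like exactly-once might really just be at-least-once delivery plus deduplication on the receiver side, is that the right mental model?

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class DeliverySemantics {
    static final Random RNG = new Random(42);

    // Hypothetical lossy link: the message or its ack can each be lost.
    // Returns true only if the sender got an ack back.
    static boolean lossySend(String msg, List<String> delivered) {
        if (RNG.nextDouble() < 0.2) {
            return false;                 // message lost in flight
        }
        delivered.add(msg);               // message actually arrives...
        return RNG.nextDouble() >= 0.2;   // ...but the ack can still be lost
    }

    // At-most-once: fire and forget. A lost message is never retried,
    // so the receiver sees each message zero or one times.
    static void atMostOnce(String msg, List<String> delivered) {
        lossySend(msg, delivered);
    }

    // At-least-once: retry until an ack arrives. If the ack (not the
    // message) was lost, the retry delivers a duplicate.
    static void atLeastOnce(String msg, List<String> delivered) {
        while (!lossySend(msg, delivered)) {
            // no ack -> assume loss and resend
        }
    }

    public static void main(String[] args) {
        List<String> amoLog = new ArrayList<>();
        List<String> aloLog = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            atMostOnce("msg-" + i, amoLog);
            atLeastOnce("msg-" + i, aloLog);
        }
        // "Exactly-once" as I currently picture it: at-least-once
        // delivery on the wire, plus receiver-side dedup by message id.
        Set<String> processedOnce = new LinkedHashSet<>(aloLog);

        System.out.println("at-most-once deliveries:  " + amoLog.size() + " of 10 (losses possible)");
        System.out.println("at-least-once deliveries: " + aloLog.size() + " for 10 messages (duplicates possible)");
        System.out.println("exactly-once processing:  " + processedOnce.size() + " after dedup");
    }
}
```

If I understand correctly, the at-most-once log can end up shorter than 10, the at-least-once log can contain duplicates, and the deduplicated set should always end up with exactly one entry per message id. But maybe I'm missing how the real systems do this.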
Can someone give me better definitions?