
I want to ensure that certain kinds of messages can't be lost, hence I should use Confirms (aka Publisher Acknowledgements).

The broker loses persistent messages if it crashes before said messages are written to disk. Under certain conditions, this causes the broker to behave in surprising ways.

For instance, consider this scenario:

  • a client publishes a persistent message to a durable queue
  • a client consumes the message from the queue (noting that the message is persistent and the queue durable), but doesn't yet ack it,
  • the broker dies and is restarted, and
  • the client reconnects and starts consuming messages.

At this point, the client could reasonably assume that the message will be delivered again. This is not the case: the restart has caused the broker to lose the message. In order to guarantee persistence, a client should use confirms.
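For reference, here is a minimal sketch of a publisher with confirms enabled, using the RabbitMQ Java client (the host and queue name are placeholders):

import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;

public class ConfirmedPublisher {
  public static void main(String[] args) throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost"); // placeholder host
    try (Connection conn = factory.newConnection();
         Channel channel = conn.createChannel()) {
      channel.confirmSelect(); // enable publisher confirms on this channel
      channel.queueDeclare("comments-queue", true, false, false, null); // durable queue
      channel.basicPublish("", "comments-queue",
          MessageProperties.PERSISTENT_TEXT_PLAIN, // mark the message persistent
          "a new comment".getBytes(StandardCharsets.UTF_8));
      channel.waitForConfirms(); // block until the broker acks the publish
    }
  }
}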

But what if, when using confirms, the publisher goes down before receiving the ack and the message wasn't delivered to the queue for some reason (e.g. a network failure)?

Suppose we have a simple REST endpoint where we can POST new COMMENTS and, when a new COMMENT is created, we want to publish a message to a queue. (Note: it doesn't matter if I send a message for a new COMMENT that in the end isn't created, due to a rollback for example.)

class CommentEndpoint {

  Channel channel;              // channel with confirms enabled via channel.confirmSelect()
  CommentRepository repository;

  void post(String comment) throws Exception {
    // publish a persistent message to the durable "comments-queue"
    channel.basicPublish("", "comments-queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN, comment.getBytes());
    Comment aNewComment = new Comment(comment);
    repository.save(aNewComment);
    // what happens if the server where this publisher is running terminates here ?
    channel.waitForConfirms();
  }

}

When the server restarts, the channel is gone and the message may never be delivered. One solution that comes to my mind is, after a restart, to query the recent comments in the repository (something like the comments created in the last 3 minutes before the crash?), send one message for each of them, and await confirmations.
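A minimal sketch of that recovery idea (same RabbitMQ client imports as above, plus java.time; CommentRepository, findCreatedAfter and Comment.getText() are hypothetical names, not a real API):

void republishRecentComments(Channel channel, CommentRepository repository) throws Exception {
  channel.confirmSelect(); // fresh channel after the restart
  Instant cutoff = Instant.now().minus(Duration.ofMinutes(3)); // the "last 3 min" window
  for (Comment c : repository.findCreatedAfter(cutoff)) {
    channel.basicPublish("", "comments-queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN,
        c.getText().getBytes(StandardCharsets.UTF_8));
  }
  channel.waitForConfirms(); // wait for the broker to ack every re-sent comment
}

Note that consumers then have to tolerate duplicates, since a comment whose original publish did reach the queue before the crash will be delivered twice.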


1 Answer


What you are worried about is really no longer a RabbitMQ-only issue; it is a distributed transaction issue. This discussion gives one reasonable lightweight solution. And there are stricter solutions, for instance two-phase commit, three-phase commit, etc., to keep data consistent when it is really necessary.
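For completeness, one common lightweight pattern in this space (not necessarily the one the linked discussion describes) is a transactional outbox: save the comment and an outbox row in the same local database transaction, and have a separate relay publish the pending rows with confirms. Everything below except the RabbitMQ calls (OutboxMessage, OutboxRepository, transaction(), markSent, ...) is an illustrative name, not an existing API:

void post(String commentText) {
  transaction(() -> { // one local DB transaction
    Comment saved = repository.save(new Comment(commentText));
    outbox.save(new OutboxMessage(saved.getId(), commentText)); // pending row
  });
}

void relayPendingMessages(Channel channel) throws Exception {
  channel.confirmSelect();
  for (OutboxMessage m : outbox.findPending()) {
    channel.basicPublish("", "comments-queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN,
        m.getPayload().getBytes(StandardCharsets.UTF_8));
    if (channel.waitForConfirms()) {
      outbox.markSent(m.getId()); // only after the broker confirmed the publish
    }
  }
}

If the process dies anywhere in relayPendingMessages, the row stays pending and gets re-sent on the next run, so the consumer again has to deduplicate; that is the usual at-least-once trade-off.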