Versions:
spring-cloud-starter-stream-rabbit --> 2.1.0.RELEASE
RabbitMQ --> 3.7.7
Erlang --> 21.1
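For context, this is roughly the Maven dependency in use (coordinates assumed from the version above):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>
```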
(1) I have created sample mq-publisher-demo & mq-subscriber-demo repositories on GitHub for reference.
When the Memory Alarm was activated:
Publisher: was able to publish messages.
Subscriber: it seems the subscriber was receiving messages in batches, with some delay.
When the Disk Alarm was activated:
Publisher: was able to publish messages.
Subscriber: it seems the subscriber was not receiving messages while the Disk Alarm was active, but once the alarm was deactivated, all the messages were received by the subscriber.
Are the messages getting buffered somewhere?
Is this the expected behavior? (I was expecting RabbitMQ to stop accepting messages from the publisher, and the subscriber to never receive any subsequent messages, once either alarm was activated.)
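One way to observe what the broker is actually doing is to register a BlockedListener on the publishing connection using the plain RabbitMQ Java client; the broker sends connection.blocked / connection.unblocked notifications when an alarm trips. A minimal sketch (host is an assumption; it needs a running broker):

```java
import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BlockedConnectionDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host

        Connection connection = factory.newConnection();
        connection.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                // Fired when a memory or disk alarm blocks this connection
                System.out.println("Connection blocked: " + reason);
            }

            @Override
            public void handleUnblocked() {
                // Fired when the alarm clears
                System.out.println("Connection unblocked");
            }
        });
    }
}
```

This should make it visible whether the publisher's connection is being blocked at all, or whether messages are simply being buffered on the broker side.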
(2) The Spring Cloud Stream documentation says the following. Does it explain the behavior above (avoiding deadlock while letting the publisher keep publishing)?
Starting with version 2.0, the RabbitMessageChannelBinder sets the RabbitTemplate.usePublisherConnection property to true so that the non-transactional producers avoid deadlocks on consumers, which can happen if cached connections are blocked because of a memory alarm on the broker.
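For what it's worth, the same setting can be applied to a plain RabbitTemplate outside the binder; a minimal sketch (host and wiring are assumptions):

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PublisherConnectionConfig {
    public RabbitTemplate publisherTemplate() {
        // Assumed host; in a real app this comes from configuration
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        // Use a separate connection for publishing, so a consumer connection
        // blocked by a memory alarm cannot deadlock this producer
        template.setUsePublisherConnection(true);
        return template;
    }
}
```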
(3) Is there something similar for the Disk Alarm, also to avoid deadlocks?
(4) If the producer's messages will not be accepted by RabbitMQ, is it possible for spring-cloud-stream to throw a specific exception to the publisher (saying that an alarm is active and the publish failed)?
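I have not found a built-in "alarm active" exception, but one workaround sketch: Spring AMQP publishes ConnectionBlockedEvent / ConnectionUnblockedEvent as application events when the connection factory has an event publisher wired (which I believe is the default in a Boot app), so the application could track the state itself and fail fast before sending. The BrokerAlarmGuard class and its exception message are hypothetical:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.amqp.rabbit.connection.ConnectionBlockedEvent;
import org.springframework.amqp.rabbit.connection.ConnectionUnblockedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class BrokerAlarmGuard {

    private final AtomicBoolean blocked = new AtomicBoolean(false);

    @EventListener
    public void onBlocked(ConnectionBlockedEvent event) {
        blocked.set(true); // broker raised a memory or disk alarm
    }

    @EventListener
    public void onUnblocked(ConnectionUnblockedEvent event) {
        blocked.set(false); // alarm cleared
    }

    // Hypothetical fail-fast check, called before publishing
    public void assertPublishable() {
        if (blocked.get()) {
            throw new IllegalStateException("Broker alarm active; publish would block");
        }
    }
}
```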
I'm fairly new to these alarms in spring-cloud-stream; please help me understand them clearly. Thank you.