This is my current setup:
queue1 and queue2 are merged by two integration flows into one channel (amqpInputChannel()):
@Bean
public IntegrationFlow q1f() {
    return IntegrationFlows
            .from(queue1InboundAdapter())
            ...
            .channel(amqpInputChannel())
            .get();
}

@Bean
public IntegrationFlow q2f() {
    return IntegrationFlows
            .from(queue2InboundAdapter())
            ...
            .channel(amqpInputChannel())
            .get();
}
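For this scheme to work, both inbound adapters run with MANUAL acknowledge mode, so Spring never acks the deliveries on its own. A minimal sketch of what queue1InboundAdapter() might look like (the queue name and the connectionFactory() bean are placeholders, not the exact production code):

@Bean
public AmqpInboundChannelAdapter queue1InboundAdapter() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(connectionFactory());
    container.setQueueNames("queue1");
    // MANUAL mode: the original delivery stays unacked until we call basicAck ourselves,
    // which happens only after the aggregated message is confirmed by RabbitMQ.
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return new AmqpInboundChannelAdapter(container);
}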
Then everything is aggregated, and the original messages are acked only after the aggregated message is confirmed by RabbitMQ:
@Bean
public IntegrationFlow aggregatingFlow() {
    return IntegrationFlows
            .from(amqpInputChannel())
            .aggregate(...
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true)
                    .groupTimeout(TimeUnit.SECONDS.toMillis(10))
                    .releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(200, TimeUnit.SECONDS.toMillis(10)))
            )
            .handle(amqpOutboundEndpoint())
            .get();
}
@Bean
public AmqpOutboundEndpoint amqpOutboundEndpoint() {
    AmqpOutboundEndpoint outboundEndpoint = new AmqpOutboundEndpoint(ackTemplate());
    outboundEndpoint.setConfirmAckChannel(manualAckChannel());
    outboundEndpoint.setConfirmCorrelationExpressionString("#root");
    outboundEndpoint.setExchangeName(RABBIT_PREFIX + "ix.archiveupdate");
    outboundEndpoint.setRoutingKeyExpression(routingKeyExpression()); // forward using partition id as routing key
    return outboundEndpoint;
}
ackTemplate() is built on a connection factory that has springFactory.setPublisherConfirms(true) enabled.
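For reference, that part looks roughly like this (host and bean names are illustrative; this is a sketch of the described setup, not the exact code):

@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory springFactory = new CachingConnectionFactory("localhost");
    // Publisher confirms: RabbitMQ sends an ack (or nack) for every message published
    // through connections created by this factory.
    springFactory.setPublisherConfirms(true);
    return springFactory;
}

@Bean
public RabbitTemplate ackTemplate() {
    return new RabbitTemplate(connectionFactory());
}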
The problem I see is that roughly once every 10 days some messages get stuck in the unacknowledged state in RabbitMQ.
My guess is that the publish of the aggregated message waits for the publisher confirm from RabbitMQ, never receives it, and times out? In that case I never ack the original message in queue1. Is this possible?
So, just one more time, the complete workflow:
[two queues -> direct channel -> aggregator (keeps channel and delivery tag values) -> publish to rabbit -> rabbit returns ACK via publisher confirms -> Spring acks all original messages using the channel + delivery tag values it kept in memory for the aggregated message]
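The last step of that chain (acking the originals once the confirm arrives) isn't shown above. A minimal sketch of what the flow consuming manualAckChannel() might look like, assuming ManualAckPair exposes a basicAck() method (that class is sketched after the aggregator code further down); with confirmCorrelationExpressionString("#root") the confirm's payload is the aggregated message that was published:

@Bean
public IntegrationFlow confirmAckFlow() {
    return IntegrationFlows
            .from(manualAckChannel())
            .handle(message -> {
                // The confirm carries the published (aggregated) message as its payload.
                Message<?> published = (Message<?>) message.getPayload();
                @SuppressWarnings("unchecked")
                List<ManualAckPair> pairs = (List<ManualAckPair>) published.getHeaders()
                        .get(AbstractManualAckAggregatingMessageGroupProcessor.MANUAL_ACK_PAIRS);
                if (pairs != null) {
                    // Ack every original delivery that went into this aggregate.
                    pairs.forEach(ManualAckPair::basicAck);
                }
            })
            .get();
}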
I also have my own aggregator implementation (since I need to manually ack messages from both queue1 and queue2):
public abstract class AbstractManualAckAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    public static final String MANUAL_ACK_PAIRS = PREFIX + "manualAckPairs";

    private final AckingState ackingState;

    public AbstractManualAckAggregatingMessageGroupProcessor(AckingState ackingState) {
        this.ackingState = ackingState;
    }

    @Override
    protected Map<String, Object> aggregateHeaders(MessageGroup group) {
        Map<String, Object> aggregatedHeaders = super.aggregateHeaders(group);
        List<ManualAckPair> manualAckPairs = new ArrayList<>();
        group.getMessages().forEach(m -> {
            // Remember the AMQP channel and delivery tag of every original message,
            // so they can be acked once the aggregated message is confirmed.
            Channel channel = (Channel) m.getHeaders().get(AmqpHeaders.CHANNEL);
            Long deliveryTag = (Long) m.getHeaders().get(AmqpHeaders.DELIVERY_TAG);
            manualAckPairs.add(new ManualAckPair(channel, deliveryTag, ackingState));
        });
        aggregatedHeaders.put(MANUAL_ACK_PAIRS, manualAckPairs);
        return aggregatedHeaders;
    }
}
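ManualAckPair just pairs the consumer channel with its delivery tag. A minimal sketch of what it could look like (the basicAck() method name and the exact role of AckingState are assumptions based on the description above; Channel is com.rabbitmq.client.Channel):

public class ManualAckPair {

    private final Channel channel;
    private final Long deliveryTag;
    private final AckingState ackingState;

    public ManualAckPair(Channel channel, Long deliveryTag, AckingState ackingState) {
        this.channel = channel;
        this.deliveryTag = deliveryTag;
        this.ackingState = ackingState;
    }

    public void basicAck() {
        try {
            // Ack the original delivery on the very channel it arrived on.
            this.channel.basicAck(this.deliveryTag, false);
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}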
UPDATE
This is how the RabbitMQ admin looks (2 messages unacked for a long time; they will not be acked until a restart, when they are redelivered):

[screenshot: RabbitMQ management UI showing the 2 messages stuck in the unacked state]
Use a separate ConnectionFactory for the ackTemplate, so AMQP channels are not blocked while you're sending. - Artem Bilan
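Following that suggestion, ackTemplate() would get its own ConnectionFactory instead of reusing the one the listener containers consume on. A sketch (bean names and host are illustrative):

@Bean
public ConnectionFactory publisherConnectionFactory() {
    CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
    cf.setPublisherConfirms(true);
    return cf;
}

@Bean
public RabbitTemplate ackTemplate() {
    // Publishes on a dedicated connection, so the send never competes with the
    // consumer channels that still hold the unacked deliveries from queue1 and queue2.
    return new RabbitTemplate(publisherConnectionFactory());
}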