
We have ActiveMQ 5.15.2 in the following configuration:

  • PostgreSQL for persistence
  • two nodes, one in standby
  • JDBC master slave with shared database
  • static cluster discovery

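For context, a shared-database JDBC master/slave setup of this kind is typically wired up along these lines in activemq.xml (a sketch only; the bean id, URL, and credentials are placeholders, not our actual values):

```xml
<!-- Sketch: shared-database JDBC master/slave persistence.
     Bean id, host, and credentials are illustrative placeholders. -->
<bean id="postgres-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="org.postgresql.Driver"/>
  <property name="url" value="jdbc:postgresql://dbhost:5432/activemq"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>

<persistenceAdapter>
  <!-- Both brokers point at the same database; the broker that acquires
       the DB lock becomes master, the other waits in standby. -->
  <jdbcPersistenceAdapter dataSource="#postgres-ds"/>
</persistenceAdapter>
```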
Everything seems to be fine and failover works as expected, but sometimes during failover (or a restart of the whole cluster) we observe the following exception:

 WARN  [ActiveMQ NIO Worker 6] org.apache.activemq.transaction.LocalTransaction  - Store COMMIT FAILED:java.io.IOException: Batch entry 2 INSERT INTO ACTIVEMQ_MSGS(ID, MSGID_PROD, MSGID_SEQ, CONTAINER, EXPIRATION, PRIORITY, MSG, XID) VALUES (...) was aborted:  Unique-Constraint activemq_msgs_pkey Detail: key(id)=(7095330) already exists

ActiveMQ propagates this exception directly to the client.

I thought that ActiveMQ would be able to recognise a duplicated message, but something goes wrong here.

The client tries to deliver a message with an already existing ID. Shouldn't ActiveMQ compare this message to the one already in storage (where possible, depending on the DB) and, if both messages are the same, simply ignore the second one?

Or does ActiveMQ perhaps assume that duplicated messages are allowed to be persisted, meaning our DB schema (with its primary key constraint on id) is incorrect?

CREATE TABLE activemq_msgs
(
   id          bigint          NOT NULL,
   container   varchar(250),
   msgid_prod  varchar(250),
   msgid_seq   bigint,
   expiration  bigint,
   msg         bytea,
   priority    bigint,
   xid         varchar(250)
);


ALTER TABLE activemq_msgs
   ADD CONSTRAINT activemq_msgs_pkey
   PRIMARY KEY (id);

Should we drop activemq_msgs_pkey?


1 Answer


Our JDBC configuration was incorrect: autocommit was set to false, and as a result messages were written to the database with a delay, so after a failover the new master could re-insert rows that were still pending commit.
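
In case it helps others, with a pooled datasource the relevant knob is the pool's default autocommit flag. A sketch using commons-dbcp (the defaultAutoCommit property is a real DBCP setting; the bean id, URL, and credentials are placeholders):

```xml
<bean id="postgres-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="org.postgresql.Driver"/>
  <property name="url" value="jdbc:postgresql://dbhost:5432/activemq"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
  <!-- This was the culprit in our case: it had been set to false,
       so INSERTs into ACTIVEMQ_MSGS only became visible after a
       delayed commit. -->
  <property name="defaultAutoCommit" value="true"/>
</bean>
```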