3
votes

I want to create a RabbitMQ cluster that is very resilient to failures.

So far I've managed to create a cluster with three nodes, each of which runs inside a Docker container. To enable the nodes to join the cluster, the containers need to know each other via links.

Now, the whole architecture runs in the cloud (on AWS, to be precise). So far, my containers can only be linked with one another when they run on the same AWS instance. I want to create the cluster in a way that the nodes can live on different hosts.
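For reference, this is roughly what the current same-host setup looks like (a minimal sketch with only two of the three nodes; the container names, the image tag and the Erlang cookie value are placeholders, the cookie just has to be identical on all nodes):

    # First node
    docker run -d --hostname rabbit1 --name rabbit1 \
        -e RABBITMQ_ERLANG_COOKIE='secretcookie' \
        rabbitmq:3-management

    # Second node, linked to the first so it can resolve the hostname rabbit1
    docker run -d --hostname rabbit2 --name rabbit2 \
        --link rabbit1:rabbit1 \
        -e RABBITMQ_ERLANG_COOKIE='secretcookie' \
        rabbitmq:3-management

    # Join the second node to the cluster
    docker exec rabbit2 rabbitmqctl stop_app
    docker exec rabbit2 rabbitmqctl join_cluster rabbit@rabbit1
    docker exec rabbit2 rabbitmqctl start_app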

So far I've tried:

  1. Using federation / shovel instead. This does not serve my purpose, because I need CP from the CAP theorem rather than AP: all of my nodes have to be replicas of one another and be able to act as a single broker towards clients.

  2. Creating a Docker swarm. I am able to set up the swarm and connect two instances through it. But if I try to run multiple RabbitMQ containers on this swarm, they are either placed on the same node or I cannot link them with one another. So I end up in the same situation as before: all of my containers running on the same host (see the sketch below).
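Roughly, the swarm attempt looks like the following (the manager address and port are placeholders). As far as I can tell, the link itself is what forces the scheduler to co-locate the containers:

    # Point the Docker client at the swarm manager (address/port are placeholders)
    export DOCKER_HOST=tcp://swarm-manager:4000

    docker run -d --hostname rabbit1 --name rabbit1 rabbitmq:3-management

    # The link makes rabbit1 resolvable from rabbit2, but it also pins both
    # containers to the same swarm node; without it, rabbit2 cannot reach
    # rabbit1 at all.
    docker run -d --hostname rabbit2 --name rabbit2 \
        --link rabbit1:rabbit1 \
        rabbitmq:3-management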

Is there any solution or other approach with which I could create a RabbitMQ cluster out of Docker containers spread across different AWS instances / hosts?


1 Answer

0
votes

You mentioned "To enable the nodes to join the cluster, the containers need to know each other via links", but I'm not sure why you require the links. The main thing is for each of the containers to be able to locate the others, which is basically a service discovery problem. I've done something similar (with RabbitMQ) using Consul on a bunch of different VMs (not even using Swarm), but if you're already running a Swarm cluster then you might be able to hit its API to discover the other instances of the container.
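Very roughly, the Consul-based approach looked something like the following, run on each VM (or inside each RabbitMQ container). The service name, the local Consul agent address and the jq parsing are illustrative, and it assumes the Consul node names match the RabbitMQ hostnames and that all nodes share the same Erlang cookie:

    # Register this RabbitMQ node with the local Consul agent
    curl -X PUT -d '{"Name": "rabbitmq", "Port": 5672}' \
        http://localhost:8500/v1/agent/service/register

    # On a joining node: ask Consul where the rabbitmq instances are
    # (a real script would filter out its own node name here)
    PEER=$(curl -s http://localhost:8500/v1/catalog/service/rabbitmq \
        | jq -r '.[0].Node')

    # Join the cluster via the peer that was found
    rabbitmqctl stop_app
    rabbitmqctl join_cluster "rabbit@${PEER}"
    rabbitmqctl start_app

If I remember correctly, there is also a community "autocluster" plugin for RabbitMQ that automates exactly this kind of Consul lookup, so you don't have to script it yourself.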