
We've found some examples online of attempts to use Hazelcast with Docker Swarm, but so far we haven't been able to get the cache replicated; each Swarm node has its own instance. We're using Spring Cache to abstract the configuration, but we haven't yet come up with a solution. Before we invest much more time here, I wanted to see whether this is even possible, or if anyone has successfully implemented it.

Requirements are a REST endpoint running in Docker Swarm with a distributed cache.

Have you had a look at this? It is a community-driven discovery implementation for Docker Swarm: github.com/bitsofinfo/hazelcast-docker-swarm-discovery-spi – Mesut

That was one of the examples that we first tried to implement. – Dale Highfill

2 Answers


Running a Hazelcast cluster in Docker Swarm is possible; you just need to configure the correct network interfaces on the members.

See this blog post describing configuration in non-orchestrated Docker environments: https://hazelcast.com/blog/configuring-hazelcast-in-non-orchestrated-docker-environments/

If you don't want to use a third-party discovery plugin (or write your own), use the TCP/IP join mechanism, where you list the members' IP addresses explicitly.
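For example, a minimal sketch of a TCP/IP join using the Hazelcast Java API; the member IPs below are placeholders for your own container addresses:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;

public class TcpIpJoinExample {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        // Multicast discovery generally does not work across Swarm overlay networks
        join.getMulticastConfig().setEnabled(false);
        // List the member addresses explicitly (placeholder IPs)
        join.getTcpIpConfig()
            .setEnabled(true)
            .addMember("10.0.1.11")
            .addMember("10.0.1.12");
        Hazelcast.newHazelcastInstance(config);
    }
}
```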

The key thing in the configuration is the following: each member has to know the public address under which it is visible to the other members. You can configure the public address with the system property hazelcast.local.publicAddress. The value may also contain the port number, e.g.

-Dhazelcast.local.publicAddress=192.168.1.12:11701
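If you prefer to keep this out of JVM flags, the same address can also be set programmatically on the member's network config; a minimal sketch, assuming the Hazelcast Java API:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;

public class PublicAddressExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Equivalent to -Dhazelcast.local.publicAddress=192.168.1.12:11701
        config.getNetworkConfig().setPublicAddress("192.168.1.12:11701");
        Hazelcast.newHazelcastInstance(config);
    }
}
```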

You can achieve this by changing the Swarm DNS configuration for your service to round robin (for example, by creating the service with --endpoint-mode dnsrr), so that DNS resolution returns the IP addresses of all the service replicas instead of a single virtual IP. You can then resolve the service name and list all the returned IPs in your Hazelcast cluster configuration.
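As a rough sketch of what that could look like on the Hazelcast side (hazelcast-service is a placeholder for your Swarm service name, and this assumes the service runs with --endpoint-mode dnsrr):

```java
import java.net.InetAddress;

import com.hazelcast.config.Config;
import com.hazelcast.config.TcpIpConfig;
import com.hazelcast.core.Hazelcast;

public class DnsRrJoinExample {
    public static void main(String[] args) throws Exception {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        TcpIpConfig tcpIp = config.getNetworkConfig().getJoin()
                .getTcpIpConfig().setEnabled(true);
        // With dnsrr, the service name resolves to every replica's IP
        for (InetAddress address : InetAddress.getAllByName("hazelcast-service")) {
            tcpIp.addMember(address.getHostAddress());
        }
        Hazelcast.newHazelcastInstance(config);
    }
}
```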

A full solution is described here: https://antoine-thecoon.medium.com/deploy-hazelcast-cluster-in-a-replicated-swarm-service-97558db5f98