I am new to Akka, Akka remoting and Akka Cluster. I have built a system with the following config:
Application.conf
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  extensions = ["akka.contrib.pattern.ClusterReceptionistExtension"]

  remote {
    netty.tcp {
      port = 0
      hostname = "127.0.0.1"
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://[email protected]:8551",
      "akka.tcp://[email protected]:8552"]
    auto-down-unreachable-after = 10s
  }

  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.local.dir = "target/snapshots"
  }
}
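Note that with remote.netty.tcp.port = 0 each JVM binds to a random port, so for a node to actually occupy one of the seed-node addresses the port has to be overridden at startup. A minimal Scala sketch of that, assuming this file is loaded as the default application.conf (the object name SeedNodeMain is only for illustration):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object SeedNodeMain extends App {
  // Override the random port so this JVM binds to the first seed-node
  // address from application.conf; a second JVM would use 8552.
  val config = ConfigFactory
    .parseString("akka.remote.netty.tcp.port = 8551")
    .withFallback(ConfigFactory.load())

  // The system name must match the one used in the seed-node addresses.
  val system = ActorSystem("ClusterSystem", config)
}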
worker.conf
akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"

  remote {
    netty.tcp {
      port = 0
      hostname = "127.0.0.1"
    }
  }
}

contact-points = [
  "akka.tcp://[email protected]:8551",
  "akka.tcp://[email protected]:8552"]
My understanding was that the Akka system would start locally and use the seed nodes to form the cluster. Does this mean the seed nodes should already be running, i.e. should a process already be listening on those ip:port addresses?
The reason I ask: if I don't have such a process already running, I get an "association failed" error and the address gets gated.
*******UPDATE*******
The above issue was because of using floating IPs. My nodes are running on OpenStack VMs and they do have a static IP; using the static IP solved the issue.
Another interesting find: when the nodes are started, the hostname in remote.netty.tcp should be the machine's own inet address, and as Ryan mentioned, at least one of the seed nodes needs to be up for the cluster to form, so making the local machine one of the seed nodes is preferable. Also prefer the host's actual inet IP over 127.0.0.1 if your seed nodes are distributed across machines.
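For anyone hitting the same thing, a sketch of setting the node's own inet address at startup instead of hardcoding 127.0.0.1; the HOST_IP environment variable is just an example override:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import java.net.InetAddress

object NodeMain extends App {
  // Use the machine's own inet address (or a HOST_IP override) rather than
  // 127.0.0.1, so that remote nodes can actually reach this node.
  val host = sys.env.getOrElse("HOST_IP", InetAddress.getLocalHost.getHostAddress)

  val config = ConfigFactory
    .parseString(s"""akka.remote.netty.tcp.hostname = "$host"""")
    .withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)
}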