2
votes

We set up an ArangoDB cluster with 3 agents, 5 coordinators, and 5 DB servers across 5 machines.

Env: CentOS 6

In our experience, if one of the servers ran out of memory, the whole cluster would fail entirely. To avoid that, and since we did not find a way to limit the memory use, we regularly watch every node with top | grep arangod and restart any arangod that consumes too much memory. This usually works fine.
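
For reference, this is roughly the kind of check we run (a minimal sketch; the 8 GB limit and the restart command are placeholders, not our exact setup):

    #!/bin/sh
    # Restart any arangod process whose resident memory exceeds a limit.
    # LIMIT_KB is a hypothetical threshold; adjust it to your machines.
    LIMIT_KB=8000000   # 8 GB in kB

    ps -C arangod -o pid=,rss= | while read pid rss; do
        if [ "$rss" -gt "$LIMIT_KB" ]; then
            echo "arangod pid $pid uses ${rss} kB RSS, restarting it"
            # The restart mechanism is deployment-specific; on CentOS 6
            # this would typically be an init script, e.g.:
            # service arangodb3 restart
        fi
    done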

But when we tried to restart one of the nodes, we received the following logs:

    2018/03/27 15:47:31 Failed to get master URL, retrying in 5sec (All 3 servers responded with temporary failure)
    2018/03/27 15:47:31 ## Start of dbserver log
        2018-03-27T07:46:31Z [37755] WARNING {memory} It is recommended to set NUMA to interleaved.
        2018-03-27T07:46:31Z [37755] WARNING {memory} put 'numactl --interleave=all' in front of your command
        2018-03-27T07:46:31Z [37755] INFO using storage engine rocksdb
        2018-03-27T07:46:31Z [37755] INFO {cluster} Starting up with role PRIMARY
        2018-03-27T07:46:41Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 21 (9.84s). Network checks advised.
        2018-03-27T07:46:42Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 22 (10.82s). Network checks advised.
        2018-03-27T07:46:43Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 23 (11.89s). Network checks advised.
        2018-03-27T07:46:44Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 24 (13.03s). Network checks advised.
        2018-03-27T07:46:46Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 25 (14.25s). Network checks advised.
        2018-03-27T07:46:47Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 26 (15.57s). Network checks advised.
        2018-03-27T07:46:48Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 27 (16.99s). Network checks advised.
        2018-03-27T07:46:50Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 28 (18.51s). Network checks advised.
        2018-03-27T07:46:51Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 29 (20.15s). Network checks advised.
        2018-03-27T07:46:53Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 30 (21.9s). Network checks advised.
        2018-03-27T07:46:55Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 31 (23.8s). Network checks advised.
        2018-03-27T07:46:57Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 32 (25.83s). Network checks advised.
        2018-03-27T07:46:59Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 33 (28.01s). Network checks advised.
        2018-03-27T07:47:02Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 34 (30.36s). Network checks advised.
        2018-03-27T07:47:04Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 35 (32.89s). Network checks advised.
        2018-03-27T07:47:04Z [37755] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 36 (32.89s). Network checks advised.
2018/03/27 15:47:31 ## End of dbserver log
2018/03/27 15:47:32 ## Start of coordinator log
        2018-03-27T07:46:32Z [37769] WARNING {memory} It is recommended to set NUMA to interleaved.
        2018-03-27T07:46:32Z [37769] WARNING {memory} put 'numactl --interleave=all' in front of your command
        2018-03-27T07:46:32Z [37769] INFO using storage engine rocksdb
        2018-03-27T07:46:32Z [37769] INFO {cluster} Starting up with role COORDINATOR
        2018-03-27T07:46:42Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 21 (9.84s). Network checks advised.
        2018-03-27T07:46:43Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 22 (10.82s). Network checks advised.
        2018-03-27T07:46:44Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 23 (11.89s). Network checks advised.
        2018-03-27T07:46:45Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 24 (13.03s). Network checks advised.
        2018-03-27T07:46:47Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 25 (14.25s). Network checks advised.
        2018-03-27T07:46:48Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 26 (15.57s). Network checks advised.
        2018-03-27T07:46:49Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 27 (16.99s). Network checks advised.
        2018-03-27T07:46:51Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 28 (18.51s). Network checks advised.
        2018-03-27T07:46:52Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 29 (20.14s). Network checks advised.
        2018-03-27T07:46:54Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 30 (21.9s). Network checks advised.
        2018-03-27T07:46:56Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 31 (23.8s). Network checks advised.
        2018-03-27T07:46:58Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 32 (25.83s). Network checks advised.
        2018-03-27T07:47:00Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.30:8531. Unsuccessful consecutive tries: 33 (28.01s). Network checks advised.
        2018-03-27T07:47:03Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 34 (30.36s). Network checks advised.
        2018-03-27T07:47:05Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.29:8531. Unsuccessful consecutive tries: 35 (32.89s). Network checks advised.
        2018-03-27T07:47:05Z [37769] INFO {agencycomm} Flaky agency communication to http+tcp://65.18.27.28:8531. Unsuccessful consecutive tries: 36 (32.89s). Network checks advised.
2018/03/27 15:47:32 ## End of coordinator log
2018/03/27 15:47:46 Failed to get master URL, retrying in 5sec (All 3 servers responded with temporary failure)

All the servers can ping each other, so it does not look like a general network problem (though ping alone does not verify the agency port; a port-level check is sketched below).
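
A minimal sketch of such a port-level check, using the agent endpoints from the logs above (8531 is the agency port in this setup):

    # Verify that each agent actually answers on its HTTP port,
    # not just ICMP ping (endpoints taken from the logs above):
    for ep in 65.18.27.28 65.18.27.29 65.18.27.30; do
        curl -s -o /dev/null -w "$ep: %{http_code}\n" "http://$ep:8531/_api/agency/config"
    done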

Just as I was writing this question and collecting the log info, the cluster restarted successfully, which is kind of weird. And now 2 of the nodes print the following log:

updated cluster config does not contain myself. rejecting

It now takes a really long time to show collections, and the cluster is not working normally. Does anybody know why?

  • This might be the best place for this: github.com/arangodb/arangodb/issues (Andrew Grothe)
  • @Andrew, thank you for your advice; I mentioned this question in this issue. (Amandine-G)
  • It would be a good idea to post the answer to your question based on the discussion on GitHub. You can answer your own question and accept that answer, so people know the issue is resolved. (Andrew Grothe)
  • Does the problem still persist? Do you need help with the issue above? (Kaveh Vahedipour)
  • @KavehVahedipour, yes, I still don't know the actual cause of this problem or how to avoid it. Can you help me? (Amandine-G)

1 Answer

0
votes

[quoting the GitHub discussion]

Please note that the option --cluster.agency-size 5 has to be used only when starting the Cluster for the first time. This is because the starter writes the Cluster configuration on the very first startup, and it cannot be changed afterwards.

So in your case, if you need to add more agents on additional nodes, you have to use --cluster.start-agent true on each new node. If you want to be sure that your 5-node Cluster stays up and running when bringing down two (random) nodes, then an agency size of 5 is what you need.
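
For illustration, bringing up such a 5-node cluster with a 5-agent Agency could look roughly like this with the starter (the data directory is a placeholder, and the join address reuses the first machine's IP from the logs above):

    # On the first machine only; the agency size is fixed on this
    # very first startup and cannot be changed later:
    arangodb --starter.data-dir=/var/lib/arangodb --cluster.agency-size=5

    # On each of the other four machines, join the first one:
    arangodb --starter.data-dir=/var/lib/arangodb --starter.join=65.18.27.28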

The Cluster cannot work if the Agency is not up and running. The Agency uses the RAFT consensus protocol. If your Agency consists of 3 Agents and two of them are down, the Agency is down (and so is your Cluster). If your Agency consists of 5 Agents, it survives two Agents being down (and so does your Cluster).
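
In other words, a RAFT agency of n agents needs a majority of floor(n/2) + 1 agents to make progress: with n = 3 the quorum is 2, so only 1 agent may fail; with n = 5 the quorum is 3, so up to 2 agents may fail.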

If you want to survive 3 machines going down, other setups are possible.

You can also consider using separate machines for the Agency, e.g.:

  • 3 dedicated machines for the Agency
  • plus 3 additional machines for DBServers + Coordinators (6 machines in total), with replication factor = 3

The above setup will survive 1 Agent down and 2 DBServers down (so 3 machines down in total).
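
With the starter, such a split deployment could look roughly like this (the --cluster.start-* flags are the ones from the discussion above; the data directory is a placeholder and 65.18.27.28 stands in for the first Agency machine):

    # On the first dedicated Agency machine (agency size 3 is the starter's default):
    arangodb --starter.data-dir=/var/lib/arangodb \
      --cluster.start-agent=true --cluster.start-dbserver=false \
      --cluster.start-coordinator=false

    # On the other 2 Agency machines, same flags plus joining the first one:
    arangodb --starter.data-dir=/var/lib/arangodb \
      --cluster.start-agent=true --cluster.start-dbserver=false \
      --cluster.start-coordinator=false --starter.join=65.18.27.28

    # On each of the 3 DBServer+Coordinator machines:
    arangodb --starter.data-dir=/var/lib/arangodb \
      --cluster.start-agent=false --cluster.start-dbserver=true \
      --cluster.start-coordinator=true --starter.join=65.18.27.28

Note that the replication factor itself is not a starter flag; it is a property of your collections (or the database defaults), set when they are created.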