5
votes

I am deploying a ZooKeeper cluster with 3 nodes. I use it to keep my Mesos masters highly available. I download the zookeeper-3.4.6.tar.gz tarball, uncompress it to /opt, rename it to /opt/zookeeper, enter the directory, edit conf/zoo.cfg (pasted below), create a myid file in dataDir (which is set to /var/lib/zookeeper in zoo.cfg), and start ZooKeeper with ./bin/zkServer.sh start, and it goes well. I start all 3 nodes one by one and they all seem fine. I can connect to the servers with ./bin/zkCli.sh with no problem.
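
Roughly the commands I ran on each node (the myid value is 1, 2, or 3 to match the server.N entries in zoo.cfg):

tar -xzf zookeeper-3.4.6.tar.gz -C /opt
mv /opt/zookeeper-3.4.6 /opt/zookeeper
cd /opt/zookeeper
mkdir -p /var/lib/zookeeper
echo 1 > /var/lib/zookeeper/myid    # 2 on node-02, 3 on node-03
./bin/zkServer.sh start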

But when I start Mesos (3 masters and 3 slaves; each node runs a master and a slave), the masters soon crash, one by one, and on the Slaves tab of the web page http://mesos_master:5050 no slaves are displayed. When I run only one ZooKeeper node, everything works fine. So I think it's a problem with the ZooKeeper cluster.

I have 3 PV hosts on my Ubuntu server, all running Ubuntu 14.04 LTS: node-01, node-02, node-03. /etc/hosts on all three nodes looks like this:

172.16.2.70     node-01
172.16.2.81     node-02
172.16.2.80     node-03

I installed ZooKeeper and Mesos on all three nodes. The ZooKeeper configuration file looks like this (on all three nodes):

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=node-01:2888:3888
server.2=node-02:2888:3888
server.3=node-03:2888:3888

They can be started normally and run well. Then I start the mesos-master service with ./bin/mesos-master.sh --zk=zk://172.16.2.70:2181,172.16.2.81:2181,172.16.2.80:2181/mesos --work_dir=/var/lib/mesos --quorum=2, and after a few seconds it gives me errors like this:

F0817 15:09:19.995256  2250 master.cpp:1253] Recovery failed: Failed to recover registrar: Failed to perform fetch within 1mins
*** Check failure stack trace: ***
    @     0x7fa2b8be71a2  google::LogMessage::Fail()
    @     0x7fa2b8be70ee  google::LogMessage::SendToLog()
    @     0x7fa2b8be6af0  google::LogMessage::Flush()
    @     0x7fa2b8be9a04  google::LogMessageFatal::~LogMessageFatal()
    @     0x7fa2b81a899a  mesos::internal::master::fail()
    @     0x7fa2b8262f8f  _ZNSt5_BindIFPFvRKSsS1_EPKcSt12_PlaceholderILi1EEEE6__callIvJS1_EJLm0ELm1EEEET_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE
    @     0x7fa2b823fba7  _ZNSt5_BindIFPFvRKSsS1_EPKcSt12_PlaceholderILi1EEEEclIJS1_EvEET0_DpOT_
    @     0x7fa2b820f9f3  _ZZNK7process6FutureI7NothingE8onFailedISt5_BindIFPFvRKSsS6_EPKcSt12_PlaceholderILi1EEEEvEERKS2_OT_NS2_6PreferEENUlS6_E_clES6_
    @     0x7fa2b826305c  _ZNSt17_Function_handlerIFvRKSsEZNK7process6FutureI7NothingE8onFailedISt5_BindIFPFvS1_S1_EPKcSt12_PlaceholderILi1EEEEvEERKS6_OT_NS6_6PreferEEUlS1_E_E9_M_invokeERKSt9_Any_dataS1_
    @           0x4a44e7  std::function<>::operator()()
    @           0x49f3a7  _ZN7process8internal3runISt8functionIFvRKSsEEJS4_EEEvRKSt6vectorIT_SaIS8_EEDpOT0_
    @           0x499480  process::Future<>::fail()
    @     0x7fa2b806b4b4  process::Promise<>::fail()
    @     0x7fa2b826011b  process::internal::thenf<>()
    @     0x7fa2b82a0757  _ZNSt5_BindIFPFvRKSt8functionIFN7process6FutureI7NothingEERKN5mesos8internal8RegistryEEERKSt10shared_ptrINS1_7PromiseIS3_EEERKNS2_IS7_EEESB_SH_St12_PlaceholderILi1EEEE6__callIvISM_EILm0ELm1ELm2EEEET_OSt5tupleIIDpT0_EESt12_Index_tupleIIXspT1_EEE
    @     0x7fa2b82962d9  std::_Bind<>::operator()<>()
    @     0x7fa2b827ee89  std::_Function_handler<>::_M_invoke()
I0817 15:09:20.098639  2248 http.cpp:283] HTTP GET for /master/state.json from 172.16.2.84:54542 with User-Agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36'
    @     0x7fa2b8296507  std::function<>::operator()()
    @     0x7fa2b827efaf  _ZZNK7process6FutureIN5mesos8internal8RegistryEE5onAnyIRSt8functionIFvRKS4_EEvEES8_OT_NS4_6PreferEENUlS8_E_clES8_
    @     0x7fa2b82a07fe  _ZNSt17_Function_handlerIFvRKN7process6FutureIN5mesos8internal8RegistryEEEEZNKS5_5onAnyIRSt8functionIS8_EvEES7_OT_NS5_6PreferEEUlS7_E_E9_M_invokeERKSt9_Any_dataS7_
    @     0x7fa2b8296507  std::function<>::operator()()
    @     0x7fa2b82e4419  process::internal::run<>()
    @     0x7fa2b82da22a  process::Future<>::fail()
    @     0x7fa2b83136b5  std::_Mem_fn<>::operator()<>()
    @     0x7fa2b830efdf  _ZNSt5_BindIFSt7_Mem_fnIMN7process6FutureIN5mesos8internal8RegistryEEEFbRKSsEES6_St12_PlaceholderILi1EEEE6__callIbIS8_EILm0ELm1EEEET_OSt5tupleIIDpT0_EESt12_Index_tupleIIXspT1_EEE
    @     0x7fa2b8307d7f  _ZNSt5_BindIFSt7_Mem_fnIMN7process6FutureIN5mesos8internal8RegistryEEEFbRKSsEES6_St12_PlaceholderILi1EEEEclIJS8_EbEET0_DpOT_
    @     0x7fa2b82fe431  _ZZNK7process6FutureIN5mesos8internal8RegistryEE8onFailedISt5_BindIFSt7_Mem_fnIMS4_FbRKSsEES4_St12_PlaceholderILi1EEEEbEERKS4_OT_NS4_6PreferEENUlS9_E_clES9_
    @     0x7fa2b830f065  _ZNSt17_Function_handlerIFvRKSsEZNK7process6FutureIN5mesos8internal8RegistryEE8onFailedISt5_BindIFSt7_Mem_fnIMS8_FbS1_EES8_St12_PlaceholderILi1EEEEbEERKS8_OT_NS8_6PreferEEUlS1_E_E9_M_invokeERKSt9_Any_dataS1_
    @           0x4a44e7  std::function<>::operator()()
    @           0x49f3a7  _ZN7process8internal3runISt8functionIFvRKSsEEJS4_EEEvRKSt6vectorIT_SaIS8_EEDpOT0_
    @     0x7fa2b82da202  process::Future<>::fail()
    @     0x7fa2b82d2d82  process::Promise<>::fail()
Aborted

Sometimes the warning is like this, and then it crashes with the same output as above:

0817 15:09:49.745750  2104 recover.cpp:111] Unable to finish the recover protocol in 10secs, retrying

I want to know whether ZooKeeper is deployed and running correctly in my case, and how I can locate where the problem is. Any answers and suggestions are welcome. Thanks.

Do you start a master on all nodes? And can you successfully connect with zkCli.sh to zk://172.16.2.70:2181,172.16.2.81:2181,172.16.2.80:2181 from every node? – haosdent
Hi @haosdent, I have started all three masters on the three nodes. Using zkCli.sh I can connect to any of the ZooKeeper servers. Later I tried starting only one ZooKeeper node; in that case, no matter whether I start multiple masters or just one, they all work well. So I suspect it's the ZooKeeper configuration that makes mesos-master crash. – Steven Xue
I found a similar error here: github.com/mesos/chronos/issues/511#issuecomment-129594290 He has a similar error in the comment below. – haosdent
We have a JIRA issue to track a Mesos crash during network partitions: issues.apache.org/jira/browse/MESOS-3280 It seems similar to your problem. – haosdent
Hi, could you please attach complete master logs (all three masters) to the Mesos JIRA issue that @haosdent mentioned above? We have a couple of people actively triaging that issue and another instance would be greatly appreciated. – Connor Doyle

4 Answers

1
votes

Actually, in my case it was because I hadn't opened firewall port 5050 to allow the three servers to communicate with each other. After updating the firewall rules, it started working as expected.
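
For example, with ufw on Ubuntu 14.04 (the 172.16.2.0/24 source range is just an assumption based on the node IPs in the question; adjust to your network and firewall tooling), something like this opens the relevant ports on each node:

sudo ufw allow proto tcp from 172.16.2.0/24 to any port 5050    # Mesos master
sudo ufw allow proto tcp from 172.16.2.0/24 to any port 2181    # ZooKeeper client port
sudo ufw allow proto tcp from 172.16.2.0/24 to any port 2888    # ZooKeeper peer port
sudo ufw allow proto tcp from 172.16.2.0/24 to any port 3888    # ZooKeeper leader election port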

1
votes

I ran into the same issue. I tried different ways and different options, and finally the --ip option worked for me. Initially I had used the --hostname option:

mesos-master --ip=192.168.0.13 --quorum=2 --zk=zk://m1:2181,m2:2181,m3:2181/mesos --work_dir=/opt/mm1 --log_dir=/opt/mm1/logs

0
votes

You need to check that all mesos/zookeeper master nodes can communicate correctly. For that, you need:

  • Zookeeper ports open: TCP 2181, 2888, 3888
  • Mesos port open: TCP 5050
  • ping available (ICMP types 0 and 8, i.e. echo reply and echo request)

If you use FQDN instead of IP in your config, check that the DNS resolution is working correctly as well.
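
A quick way to check this from each node is a sketch like the following (host names taken from the question's /etc/hosts and the ZooKeeper install path from the question; adjust as needed):

# verify TCP reachability of the ZooKeeper and Mesos ports from this node
for host in node-01 node-02 node-03; do
  for port in 2181 2888 3888 5050; do
    nc -z -w 2 "$host" "$port" && echo "$host:$port reachable" || echo "$host:$port NOT reachable"
  done
done

# verify basic ICMP connectivity and name resolution
ping -c 1 node-02

# verify this ZooKeeper server joined the ensemble (should print Mode: leader or Mode: follower)
/opt/zookeeper/bin/zkServer.sh status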

-1
votes

Give each Mesos master its own work_dir; do not use a shared work_dir for all the masters, because with ZooKeeper-based HA each master keeps its own replicated registry log there.
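
For example (the directory names below are only illustrative), point each master at its own directory:

./bin/mesos-master.sh --zk=zk://node-01:2181,node-02:2181,node-03:2181/mesos --quorum=2 --work_dir=/var/lib/mesos/master-01   # on node-01
./bin/mesos-master.sh --zk=zk://node-01:2181,node-02:2181,node-03:2181/mesos --quorum=2 --work_dir=/var/lib/mesos/master-02   # on node-02
# and --work_dir=/var/lib/mesos/master-03 on node-03

This only matters if the masters would otherwise see the same directory, e.g. on shared storage; the directories must not point at the same location.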