
I am running a Pacemaker cluster with Corosync on two nodes. I had to restart node2, and after the reboot, when I run

service corosync start

corosync starts but then immediately shuts itself down.

After the log entry "Completed service synchronization, ready to provide service." there is an entry "Node was shut down by a signal" and the shutdown starts.

This is the complete log output:

notice  [MAIN  ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
info    [MAIN  ] Corosync built-in features: debug testagents augeas systemd pie relro bindnow
warning [MAIN  ] member section is deprecated.
warning [MAIN  ] Please migrate config file to nodelist.
notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
notice  [TOTEM ] Initializing transport (UDP/IP Unicast).
notice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
notice  [TOTEM ] The network interface [192.168.1.102] is now up.
notice  [SERV  ] Service engine loaded: corosync configuration map access [0]
info    [QB    ] server name: cmap
notice  [SERV  ] Service engine loaded: corosync configuration service [1]
info    [QB    ] server name: cfg
notice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
info    [QB    ] server name: cpg
notice  [SERV  ] Service engine loaded: corosync profile loading service [4]
notice  [QUORUM] Using quorum provider corosync_votequorum
notice  [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
notice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
info    [QB    ] server name: votequorum
notice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
info    [QB    ] server name: quorum
notice  [TOTEM ] The network interface [x.x.x.3] is now up.
notice  [TOTEM ] adding new UDPU member {x.x.x.3}
notice  [TOTEM ] adding new UDPU member {x.x.x.2}
warning [TOTEM ] Incrementing problem counter for seqid 1 iface x.x.x.3 to [1 of 10]
notice  [TOTEM ] A new membership (192.168.1.102:7420) was formed. Members joined: -1062731418
notice  [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
notice  [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
notice  [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
notice  [QUORUM] Members[1]: -1062731418
notice  [MAIN  ] Completed service synchronization, ready to provide service.
notice  [MAIN  ] Node was shut down by a signal
notice  [SERV  ] Unloading all Corosync service engines.
info    [QB    ] withdrawing server sockets
notice  [SERV  ] Service engine unloaded: corosync vote quorum service v1.0
info    [QB    ] withdrawing server sockets
notice  [SERV  ] Service engine unloaded: corosync configuration map access
info    [QB    ] withdrawing server sockets
notice  [SERV  ] Service engine unloaded: corosync configuration service
info    [QB    ] withdrawing server sockets
notice  [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
info    [QB    ] withdrawing server sockets
notice  [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
notice  [SERV  ] Service engine unloaded: corosync profile loading service
notice  [MAIN  ] Corosync Cluster Engine exiting normally
I also set up a completely clean openSUSE 13.2 virtual machine, and corosync shuts down in this clean environment as well.

1 Answer


This seems to be an issue with openSUSE 13.2.

Since this version, you can find the line

StopWhenUnneeded=yes

in the file

/usr/lib/systemd/system/corosync.service

which is the unit controlled by "service corosync start/stop/etc". If the service is not enabled, it is automatically stopped again right after a manual start. The solution is to enable the service. I had not enabled it until now because I always started the service manually, but since the upgrade to 13.2 it is necessary.
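
For example (a rough sketch using standard systemd commands, not something specific to corosync), enabling and starting the service would look like this:

systemctl enable corosync.service
systemctl start corosync.service

Alternatively, if you prefer to keep starting corosync manually, a drop-in override that turns off StopWhenUnneeded should in principle also work (assumption: your systemd version supports drop-in directories, which openSUSE 13.2's should):

# create a drop-in directory for the corosync unit
mkdir -p /etc/systemd/system/corosync.service.d

# override the distribution default so a manually started corosync
# is not stopped again when no other unit needs it
cat > /etc/systemd/system/corosync.service.d/override.conf <<'EOF'
[Unit]
StopWhenUnneeded=no
EOF

# make systemd pick up the new drop-in
systemctl daemon-reload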