
I am trying to write a trivial Pacemaker Master/Slave system. I created a resource agent; its metadata follows:

elm_meta_data() {
  cat <<EOF
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="elm-agent">
  <version>0.1</version>
  <longdesc lang="en">
    Resource agent for ELM high availability clusters.
  </longdesc>
  <shortdesc>
    Resource agent for ELM
  </shortdesc>
  <parameters>
    <parameter name="datadir" unique="0" required="1">
      <longdesc lang="en">
      Data directory
      </longdesc>
      <shortdesc lang="en">Data directory</shortdesc>
      <content type="string"/>
    </parameter>
  </parameters>
  <actions>
    <action name="start"        timeout="35" />
    <action name="stop"         timeout="35" />
    <action name="monitor"      timeout="35"
                                interval="10" depth="0" />
    <action name="monitor"      timeout="35"
                                interval="10" depth="0" role="Master" />
    <action name="monitor"      timeout="35"
                                interval="11" depth="0" role="Slave" />
    <action name="reload"       timeout="70" />
    <action name="meta-data"    timeout="5" />
    <action name="promote"      timeout="20" />
    <action name="demote"       timeout="20" />
    <action name="validate-all" timeout="20" />
    <action name="notify"       timeout="20" />
  </actions>
</resource-agent>
EOF
}

My monitor, promote, and demote functions are:

elm_monitor() {
  local elm_running
  local worker_running
  local is_master

  elm_running=0
  worker_running=0
  is_master=0

  if [ -e "${OCF_RESKEY_datadir}/master.conf" ]; then
    is_master=1
  fi

  if [ "$(docker ps -q -f name=elm_web)" ]; then
    elm_running=1
  fi
  if [ "$(docker ps -q -f name=elm_worker)" ]; then
    worker_running=1
  fi
  if [ $elm_running -ne $worker_running ]; then
    if [ $is_master -eq 1 ]; then
      exit $OCF_FAILED_MASTER
    fi
    exit $OCF_ERR_GENERIC
  fi
  if [ $elm_running -eq 0 ]; then
    return $OCF_NOT_RUNNING
  fi
  ...
  if [ $is_master -eq 1 ]; then
    exit $OCF_FAILED_MASTER
  fi
  exit $OCF_ERR_GENERIC
}
elm_promote() {
  touch "${OCF_RESKEY_datadir}/master.conf"
  return $OCF_SUCCESS
}

elm_demote() {
  rm -f "${OCF_RESKEY_datadir}/master.conf"
  return $OCF_SUCCESS
}
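
For completeness, the bottom of the agent dispatches actions in the standard OCF way; a sketch, with elm_start, elm_stop and elm_validate_all standing in for handlers not shown above:

# Standard OCF action dispatch (sketch). Assumes ocf-shellfuncs has been
# sourced earlier so the OCF_* return codes are defined.
case "$1" in
  meta-data)    elm_meta_data
                exit $OCF_SUCCESS;;
  start)        elm_start;;
  stop)         elm_stop;;
  monitor)      elm_monitor;;
  promote)      elm_promote;;
  demote)       elm_demote;;
  validate-all) elm_validate_all;;
  notify)       exit $OCF_SUCCESS;;
  *)            exit $OCF_ERR_UNIMPLEMENTED;;
esac
exit $?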

If I configure the cluster with the following CIB commands, I get three slaves and no master:

sudo pcs cluster cib cluster1.xml

sudo pcs -f cluster1.xml resource create elmd ocf:a10:elm    \
    datadir="/etc/a10/elm"                                  \
    op start timeout=90s                                     \
    op stop timeout=90s                                      \
    op promote timeout=60s                                   \
    op demote timeout=60s                                    \
    op monitor interval=15s timeout=35s role="Master"        \
    op monitor interval=16s timeout=35s role="Slave"         \
    op notify timeout=60s

sudo pcs -f cluster1.xml resource master elm-ha elmd notify=true
sudo pcs -f cluster1.xml resource create ClusterIP ocf:heartbeat:IPaddr2 ip=$vip cidr_netmask=$net_mask op monitor interval=10s

sudo pcs -f cluster1.xml constraint colocation add ClusterIP with master elm-ha INFINITY
sudo pcs -f cluster1.xml constraint order promote elm-ha then start ClusterIP symmetrical=false kind=Mandatory
sudo pcs -f cluster1.xml constraint order demote elm-ha then stop ClusterIP symmetrical=false kind=Mandatory
sudo pcs cluster cib-push cluster1.xml

ubuntu@elm1:~$ sudo pcs status
...
 elm_proxmox_fence100   (stonith:fence_pve):    Started elm1
 elm_proxmox_fence101   (stonith:fence_pve):    Started elm2
 elm_proxmox_fence103   (stonith:fence_pve):    Started elm3
 Master/Slave Set: elm-ha [elmd]
     Slaves: [ elm1 elm2 elm3 ]
 ClusterIP  (ocf::heartbeat:IPaddr2):   Stopped
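
For what it's worth, the scores Pacemaker used when it decided not to promote anything can be inspected with crm_simulate (-s shows allocation and promotion scores, -L reads the live CIB):

# Dump the scores Pacemaker computed for the current (live) cluster state.
sudo crm_simulate -sL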

Whereas if I add the following command to the CIB, I get a master/slave setup:

sudo pcs -f cluster1.xml constraint location elm-ha rule role=master \#uname eq $(hostname)

   Master/Slave Set: elm-ha [elmd]
       Masters: [ elm1 ]
       Slaves: [ elm2 elm3 ]
   ClusterIP    (ocf::heartbeat:IPaddr2):   Started elm1

But with this last version, the master seems to stick to elm1. When I test a failure by stopping the corosync service on the master, I end up with two slaves and the master role stopped. I am guessing that the rule is forcing Pacemaker to keep the master on elm1 (the constraint it generates is sketched after the status output below).

     Master/Slave Set: elm-ha [elmd]
         Slaves: [ elm2 elm3 ]
         Stopped: [ elm1 ]
     ClusterIP  (ocf::heartbeat:IPaddr2):   Stopped
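
If my reading is right, $(hostname) was expanded when I ran pcs, so the constraint that ended up in the CIB hard-codes elm1, roughly like this (IDs and exact attribute spelling are approximate):

<rsc_location id="location-elm-ha" rsc="elm-ha">
  <rule id="location-elm-ha-rule" role="Master" score="INFINITY">
    <expression id="location-elm-ha-rule-expr"
                attribute="#uname" operation="eq" value="elm1"/>
  </rule>
</rsc_location>

Only the instance on the node matching #uname ever gets a promotion preference, which would explain why nothing else is promoted once elm1 goes away.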

How do I configure this so that when I push my CIB commands, Pacemaker picks a master on its own and fails over if that master goes down? Do I need something different in my agent?


1 Answer


I finally found the answer in the documentation: I was failing to set the master preference (the promotion score) from my monitor() function, which is done with crm_master:

crm_master -l reboot -v 100
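
A rough sketch of where that call slots into elm_monitor above (the score of 100 is arbitrary; -l reboot stores it as a transient, reboot-lifetime attribute, and crm_master -D withdraws it again):

# In elm_monitor, once the health checks have passed: advertise this node
# as a promotion candidate, otherwise drop the preference.
if [ $elm_running -eq 1 ] && [ $worker_running -eq 1 ]; then
  crm_master -l reboot -v 100
else
  crm_master -l reboot -D
fi

if [ $is_master -eq 1 ]; then
  return $OCF_RUNNING_MASTER
fi
return $OCF_SUCCESS

With a score set on every healthy node, Pacemaker picks a master by itself and can promote one of the remaining slaves when the current master fails, so the location rule is no longer needed.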