
I have a large manifest that sets up an OpenStack controller node with HAProxy, Galera, and RabbitMQ co-located on the same host. I am running into problems because the HAProxy service always seems to be the last thing to start. This is a problem because I am supposed to connect to the Galera DB cluster through HAProxy's VIP, so the various OpenStack services are unable to set up their relational database tables. This would not be a problem if the HAProxy service started first. I have tried all kinds of things to force my Puppet manifest to apply my HAProxy profile before anything else. Here are a couple of things I have tried:

    #Service <|name=='haproxy' or name=='keepalived'|> -> Exec <| |>
    #Exec<| title == 'nova-db-sync' |> ~> Service<| title == $nova_title |>
    #Service<| title == 'haproxy' |> ~> Exec<| title == 'nova-db-sync' |>
    #Service<| title == 'haproxy' |> ~> Exec<| title == 'Exec[apic-ml2-db-manage --config-file /etc/neutron/neutron.conf upgrade head]' |>

    Service['haproxy'] -> Exec['keystone-manage db_sync']
    #Class['wraphaproxy'] -> Class['wrapgalera'] -> Class['wraprabbitmq'] -> Class['keystone']

    #   setup HAproxy
    notify { "This host ${ipaddress} is in ${haproxy_ipaddress}": }
    if ( $ipaddress in $haproxy_ipaddress ) {
        notify { "This host is an haproxy host ${ipaddress} is in ${haproxy_ipaddress}": }
        require wraphaproxy
#       class { 'wraphaproxy':
#           before => [
#              Class['wrapgalera'],
#              Class['wraprabbitmq'],
#              Class['keystone'],
#              Class['glance'],
#              Class['cinder'],
#              Class['neutron'],
#              Class['nova'],
#              Class['ceilometer'],
#              Class['horizon'],
#              Class['heat'],
#           ]
#       }
    }

The class wraphaproxy is the class that configures and starts the HAProxy service. It seems that no matter what I do, the OpenStack Puppet modules attempt their db syncs before the HAProxy service is ready.
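For reference, the class-level ordering I was attempting can be written more compactly with chaining arrows (a sketch; wraphaproxy, wrapgalera, and wraprabbitmq are my own wrapper classes, and only a few of the OpenStack classes are shown):

```puppet
# Sketch: order the infrastructure wrapper classes before the
# OpenStack service classes using chaining arrows.
Class['wraphaproxy'] -> Class['wrapgalera'] -> Class['wraprabbitmq']
Class['wraprabbitmq'] -> Class['keystone', 'glance', 'cinder', 'nova']
```

Even with this in place, the ordering applies to the classes, not necessarily to the moment the haproxy daemon is actually accepting connections on the VIP.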

The question as worded implies that Puppet is applying the resources in the correct order, but that the service is not actually up in real time when you need it. Is this correct? - Matt Schuchard
Hi Matt, thanks for the reply. I would like to ensure that HAProxy/Galera/RabbitMQ were up and running before the OpenStack services (Keystone, Cinder, Glance, Nova, Neutron, Heat and Ceilometer) attempt their db syncs. - Red Cricket
As a very poor workaround, I remove the OpenStack services' config files, like /etc/keystone/keystone.conf, after HAProxy has started and re-run puppet agent -t. This regenerates the configs and kicks off a db sync for each service. - Red Cricket

1 Answer


OK. It turns out that I need to use Haproxy::Service['haproxy'] (the haproxy module's defined type) instead of just Service['haproxy']. So I have this in my code:

    Haproxy::Service['haproxy'] -> Exec['keystone-manage db_sync']
    Haproxy::Service['haproxy'] -> Exec['glance-manage db_sync']
    Haproxy::Service['haproxy'] -> Exec['nova-db-sync']
    Haproxy::Service['haproxy'] -> Exec['neutron-db-sync']
    Haproxy::Service['haproxy'] -> Exec['heat-dbsync']
    Haproxy::Service['haproxy'] -> Exec['ceilometer-dbsync']
    Haproxy::Service['haproxy'] -> Exec['cinder-manage db_sync']

If someone out there knows of a better way, perhaps using anchors or resource collectors, please reply.
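For example, the explicit chains above could probably be collapsed with a resource collector, assuming the db-sync Execs all carried a common tag (the openstack-db-sync tag here is hypothetical; the stock OpenStack modules would need to apply it, or you would have to add it yourself):

```puppet
# Hypothetical sketch: collect the haproxy defined type and order it
# before every Exec tagged 'openstack-db-sync', replacing the eight
# explicit chaining arrows with a single relationship.
Haproxy::Service <| title == 'haproxy' |> -> Exec <| tag == 'openstack-db-sync' |>
```

Collectors like this match resources anywhere in the catalog, so one line covers all the db-sync Execs, but an overly broad collector (e.g. matching every Exec) can create dependency cycles.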