Cause: When selecting promotable clone instances for promotion on guest nodes, Pacemaker considered whether the guest node itself could run resources, but not whether the guest resource that creates that node was itself runnable.
Consequence: An unrunnable guest could be chosen for promotion, unnecessarily leaving some instances unpromoted until the next natural transition.
Fix: Pacemaker now considers whether a guest node's guest resource is runnable when selecting nodes for promotion.
Result: All instances that can be promoted will be.
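The verification below exercises this scenario. As a minimal sketch (resource and node names are taken from the transcript that follows, and exact output will vary by deployment), the idea is:

# 1. Ban the node hosting the promoted (Master) instance of a promotable bundle,
#    which makes its guest resource unrunnable there:
pcs resource ban ovn-dbs-bundle controller-0
# 2. Confirm that, with the fix, a surviving instance is promoted in the same transition:
pcs status | grep ovn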
Verified:
[stack@undercloud-0 ~]$ ansible controller -b -mshell -a'rpm -qa|grep pacemaker-2'
[WARNING]: Found both group and host with same name: undercloud
[WARNING]: Consider using the yum, dnf or zypper module rather than running 'rpm'. If you need to use command because yum, dnf or zypper is insufficient you can add 'warn: false' to this
command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
controller-0 | CHANGED | rc=0 >>
pacemaker-2.0.4-6.el8_3.2.x86_64
controller-1 | CHANGED | rc=0 >>
pacemaker-2.0.4-6.el8_3.2.x86_64
controller-2 | CHANGED | rc=0 >>
pacemaker-2.0.4-6.el8_3.2.x86_64
[root@controller-0 ~]# pcs status |grep ovn
* GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 ovn-dbs-bundle-0@controller-0 ovn-dbs-bundle-1@controller-1 ovn-dbs-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
* Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
* ovn-dbs-bundle-0 (ocf::ovn:ovndb-servers): Master controller-0
* ovn-dbs-bundle-1 (ocf::ovn:ovndb-servers): Slave controller-1
* ovn-dbs-bundle-2 (ocf::ovn:ovndb-servers): Slave controller-2
[root@controller-0 ~]# pcs resource ban ovn-dbs-bundle controller-0
Warning: Creating location constraint 'cli-ban-ovn-dbs-bundle-on-controller-0' with a score of -INFINITY for resource ovn-dbs-bundle on controller-0.
This will prevent ovn-dbs-bundle from running on controller-0 until the constraint is removed
This will be the case even if controller-0 is the last node in the cluster
[root@controller-0 ~]# crm_mon
[root@controller-0 ~]# pcs status |grep ovn
* Last change: Sun Mar 21 12:11:37 2021 by ovn-dbs-bundle-1 via crm_attribute on controller-1
* GuestOnline: [ galera-bundle-0@controller-0 galera-bundle-1@controller-1 galera-bundle-2@controller-2 ovn-dbs-bundle-1@controller-1 ovn-dbs-bundle-2@controller-2 rabbitmq-bundle-0@controller-0 rabbitmq-bundle-1@controller-1 rabbitmq-bundle-2@controller-2 redis-bundle-0@controller-0 redis-bundle-1@controller-1 redis-bundle-2@controller-2 ]
* Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]:
* ovn-dbs-bundle-0 (ocf::ovn:ovndb-servers): Stopped
* ovn-dbs-bundle-1 (ocf::ovn:ovndb-servers): Master controller-1
* ovn-dbs-bundle-2 (ocf::ovn:ovndb-servers): Slave controller-2
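(Not part of the original verification, but after a test like this the temporary ban would typically be cleared so the bundle may run on controller-0 again, e.g.:

pcs resource clear ovn-dbs-bundle controller-0

which removes the cli-ban constraint created above.)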
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2021:1088