Bug 1654602 - Master/Slave bundle resource does not failover Master state across replicas [rhel-7.6.z]
Summary: Master/Slave bundle resource does not failover Master state across replicas [rhel-7.6.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 7.6
Assignee: Ken Gaillot
QA Contact: pkomarov
URL:
Whiteboard:
Depends On: 1652752
Blocks:
 
Reported: 2018-11-29 08:20 UTC by RAD team bot copy to z-stream
Modified: 2018-12-19 06:16 UTC
CC List: 10 users

Fixed In Version: pacemaker-1.1.19-8.el7_6.2
Doc Type: Bug Fix
Doc Text:
Previously, a clone notification scheduled for a Pacemaker Remote node or bundle that was disconnected sometimes blocked Pacemaker from all further cluster actions. With this update, notifications are scheduled correctly, and a notification on a disconnected remote connection does not prevent the cluster from taking further actions. As a result, the cluster continues to manage resources correctly.
Clone Of: 1652752
Environment:
Last Closed: 2018-12-19 06:16:09 UTC
Target Upstream Version:




Links
Red Hat Product Errata RHBA-2018:3847 (last updated 2018-12-19 06:16:10 UTC)

Description RAD team bot copy to z-stream 2018-11-29 08:20:00 UTC
This bug has been copied from bug #1652752 and has been proposed to be backported to 7.6 z-stream (EUS).
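
A quick way to confirm whether a given node or bundle container image already carries this z-stream fix is to compare the installed packages against the Fixed In Version above (pacemaker-1.1.19-8.el7_6.2); a minimal sketch, assuming shell access to the node or container:

# Fix is expected in pacemaker-1.1.19-8.el7_6.2 and later
rpm -q pacemaker pacemaker-libs pacemaker-remote pacemaker-cli pacemaker-cluster-libs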

Comment 4 Ken Gaillot 2018-11-29 15:14:15 UTC
Fixed in upstream 1.1 branch by commit 32fac002
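
For anyone building from source, one way to confirm that a tree already contains this fix is to look for the commit on the upstream 1.1 branch; a minimal sketch, assuming a local clone of the ClusterLabs/pacemaker repository with that branch fetched as origin/1.1:

# Look for the fix commit among commits reachable from the upstream 1.1 branch
git log --oneline origin/1.1 | grep 32fac002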

Comment 5 Ken Gaillot 2018-11-29 16:09:08 UTC
QA: Reproducer in description of parent Bug 1652752

Comment 8 pkomarov 2018-12-12 14:25:34 UTC
Verified.

#containers build check pacemaker:
(undercloud) [stack@undercloud-0 ~]$ ansible controller -b -mshell -a'docker exec `docker ps -f name=redis-bundle -q` sh -c "hostname -f;rpm -qa|grep pacemaker"'


controller-3 | SUCCESS | rc=0 >>
controller-3.localdomain
puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
pacemaker-1.1.19-8.el7_6.2.x86_64
pacemaker-libs-1.1.19-8.el7_6.2.x86_64
pacemaker-remote-1.1.19-8.el7_6.2.x86_64
pacemaker-cli-1.1.19-8.el7_6.2.x86_64
pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

controller-2 | SUCCESS | rc=0 >>
controller-2.localdomain
puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
pacemaker-1.1.19-8.el7_6.2.x86_64
pacemaker-libs-1.1.19-8.el7_6.2.x86_64
pacemaker-remote-1.1.19-8.el7_6.2.x86_64
pacemaker-cli-1.1.19-8.el7_6.2.x86_64
pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

controller-1 | SUCCESS | rc=0 >>
controller-1.localdomain
puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
pacemaker-1.1.19-8.el7_6.2.x86_64
pacemaker-libs-1.1.19-8.el7_6.2.x86_64
pacemaker-remote-1.1.19-8.el7_6.2.x86_64
pacemaker-cli-1.1.19-8.el7_6.2.x86_64
pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

#overcloud build check pacemaker:
(undercloud) [stack@undercloud-0 ~]$ ansible controller -b -mshell -a'rpm -qa|grep pacemaker'


controller-3 | SUCCESS | rc=0 >>
pacemaker-libs-1.1.19-8.el7_6.2.x86_64
pacemaker-1.1.19-8.el7_6.2.x86_64
pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
pacemaker-cli-1.1.19-8.el7_6.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch
pacemaker-remote-1.1.19-8.el7_6.2.x86_64

controller-1 | SUCCESS | rc=0 >>
pacemaker-1.1.19-8.el7_6.2.x86_64
pacemaker-cli-1.1.19-8.el7_6.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
pacemaker-libs-1.1.19-8.el7_6.2.x86_64
pacemaker-remote-1.1.19-8.el7_6.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch
pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

controller-2 | SUCCESS | rc=0 >>
pacemaker-libs-1.1.19-8.el7_6.2.x86_64
pacemaker-1.1.19-8.el7_6.2.x86_64
pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64
puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
pacemaker-cli-1.1.19-8.el7_6.2.x86_64
ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch
pacemaker-remote-1.1.19-8.el7_6.2.x86_64


#check master->slave failover:

[root@controller-1 ~]# pcs status|grep redis

 Docker container set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Slave controller-3
   redis-bundle-1	(ocf::heartbeat:redis):	Master controller-1
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-2


[root@controller-1 ~]# pcs resource ban redis-bundle controller-1

   redis-bundle-0	(ocf::heartbeat:redis): Slave controller-3
   redis-bundle-1	(ocf::heartbeat:redis): Demoting controller-1
   redis-bundle-2	(ocf::heartbeat:redis): Slave controller-2


   redis-bundle-0	(ocf::heartbeat:redis): Slave controller-3
   redis-bundle-1	(ocf::heartbeat:redis): Stopped controller-1
   redis-bundle-2	(ocf::heartbeat:redis): Slave controller-2

   redis-bundle-0	(ocf::heartbeat:redis): Master controller-3
   redis-bundle-1	(ocf::heartbeat:redis): Stopped
   redis-bundle-2	(ocf::heartbeat:redis): Slave controller-2
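
The three pcs status snapshots above show redis-bundle-1 being demoted and then stopped on controller-1, after which redis-bundle-0 on controller-3 is promoted to Master, i.e. the Master role now fails over across replicas as expected. Not shown above, but after this kind of check the ban constraint would normally be removed so the banned replica can start again; a minimal sketch:

[root@controller-1 ~]# pcs resource clear redis-bundle controller-1
[root@controller-1 ~]# pcs status|grep redis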

Comment 10 errata-xmlrpc 2018-12-19 06:16:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3847

