Bug 1654602
| Summary: | Master/Slave bundle resource does not failover Master state across replicas [rhel-7.6.z] | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | RAD team bot copy to z-stream <autobot-eus-copy> |
| Component: | pacemaker | Assignee: | Ken Gaillot <kgaillot> |
| Status: | CLOSED ERRATA | QA Contact: | pkomarov |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 7.6 | CC: | abeekhof, aherr, chjones, cluster-maint, dciabrin, kgaillot, mkrcmari, msuchane, pkomarov, salmy |
| Target Milestone: | rc | Keywords: | Regression, ZStream |
| Target Release: | 7.6 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pacemaker-1.1.19-8.el7_6.2 | Doc Type: | Bug Fix |
| Doc Text: | Previously, a clone notification scheduled for a Pacemaker Remote node or bundle that was disconnected sometimes blocked Pacemaker from all further cluster actions. With this update, notifications are scheduled correctly, and a notification on a disconnected remote connection does not prevent the cluster from taking further actions. As a result, the cluster continues to manage resources correctly. | Story Points: | --- |
| Clone Of: | 1652752 | Environment: | |
| Last Closed: | 2018-12-19 06:16:09 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1652752 | | |
| Bug Blocks: | | | |
Description
RAD team bot copy to z-stream
2018-11-29 08:20:00 UTC
Fixed in upstream 1.1 branch by commit 32fac002.

QA: Reproducer in description of parent Bug 1652752.

Verified.

\# containers build check pacemaker:

    (undercloud) [stack@undercloud-0 ~]$ ansible controller -b -mshell -a'docker exec `docker ps -f name=redis-bundle -q` sh -c "hostname -f;rpm -qa|grep pacemaker"'
    controller-3 | SUCCESS | rc=0 >>
    controller-3.localdomain
    puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
    pacemaker-1.1.19-8.el7_6.2.x86_64
    pacemaker-libs-1.1.19-8.el7_6.2.x86_64
    pacemaker-remote-1.1.19-8.el7_6.2.x86_64
    pacemaker-cli-1.1.19-8.el7_6.2.x86_64
    pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

    controller-2 | SUCCESS | rc=0 >>
    controller-2.localdomain
    puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
    pacemaker-1.1.19-8.el7_6.2.x86_64
    pacemaker-libs-1.1.19-8.el7_6.2.x86_64
    pacemaker-remote-1.1.19-8.el7_6.2.x86_64
    pacemaker-cli-1.1.19-8.el7_6.2.x86_64
    pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

    controller-1 | SUCCESS | rc=0 >>
    controller-1.localdomain
    puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
    pacemaker-1.1.19-8.el7_6.2.x86_64
    pacemaker-libs-1.1.19-8.el7_6.2.x86_64
    pacemaker-remote-1.1.19-8.el7_6.2.x86_64
    pacemaker-cli-1.1.19-8.el7_6.2.x86_64
    pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

\# overcloud build check pacemaker:

    (undercloud) [stack@undercloud-0 ~]$ ansible controller -b -mshell -a'rpm -qa|grep pacemaker'
    controller-3 | SUCCESS | rc=0 >>
    pacemaker-libs-1.1.19-8.el7_6.2.x86_64
    pacemaker-1.1.19-8.el7_6.2.x86_64
    pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64
    puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
    pacemaker-cli-1.1.19-8.el7_6.2.x86_64
    ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch
    pacemaker-remote-1.1.19-8.el7_6.2.x86_64

    controller-1 | SUCCESS | rc=0 >>
    pacemaker-1.1.19-8.el7_6.2.x86_64
    pacemaker-cli-1.1.19-8.el7_6.2.x86_64
    puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
    pacemaker-libs-1.1.19-8.el7_6.2.x86_64
    pacemaker-remote-1.1.19-8.el7_6.2.x86_64
    ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch
    pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64

    controller-2 | SUCCESS | rc=0 >>
    pacemaker-libs-1.1.19-8.el7_6.2.x86_64
    pacemaker-1.1.19-8.el7_6.2.x86_64
    pacemaker-cluster-libs-1.1.19-8.el7_6.2.x86_64
    puppet-pacemaker-0.7.2-0.20180423212253.el7ost.noarch
    pacemaker-cli-1.1.19-8.el7_6.2.x86_64
    ansible-pacemaker-1.0.4-0.20180220234310.0e4d7c0.el7ost.noarch
    pacemaker-remote-1.1.19-8.el7_6.2.x86_64

\# check master->slave failover:

    [root@controller-1 ~]# pcs status|grep redis
    Docker container set: redis-bundle [192.168.24.1:8787/rhosp13/openstack-redis:pcmklatest]
      redis-bundle-0 (ocf::heartbeat:redis): Slave controller-3
      redis-bundle-1 (ocf::heartbeat:redis): Master controller-1
      redis-bundle-2 (ocf::heartbeat:redis): Slave controller-2

    [root@controller-1 ~]# pcs resource ban redis-bundle controller-1

      redis-bundle-0 (ocf::heartbeat:redis): Slave controller-3
      redis-bundle-1 (ocf::heartbeat:redis): Demoting controller-1
      redis-bundle-2 (ocf::heartbeat:redis): Slave controller-2

      redis-bundle-0 (ocf::heartbeat:redis): Slave controller-3
      redis-bundle-1 (ocf::heartbeat:redis): Stopped controller-1
      redis-bundle-2 (ocf::heartbeat:redis): Slave controller-2

      redis-bundle-0 (ocf::heartbeat:redis): Master controller-3
      redis-bundle-1 (ocf::heartbeat:redis): Stopped
      redis-bundle-2 (ocf::heartbeat:redis): Slave controller-2

After banning the Master's node, the Master role fails over from controller-1 to controller-3 as expected.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3847
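The failover check in the transcript above can also be scripted. The following is a minimal sketch (the helper `master_replicas` is hypothetical, not part of pcs or Pacemaker) that parses `pcs status` output in the line format shown in the transcript and confirms exactly one replica holds the Master role after the ban:

```python
import re

def master_replicas(pcs_status_output, bundle="redis-bundle"):
    """Return the node names currently holding the Master role for `bundle`.

    Assumes `pcs status` lines of the form seen in the transcript:
    "redis-bundle-1 (ocf::heartbeat:redis): Master controller-1"
    """
    masters = []
    for line in pcs_status_output.splitlines():
        m = re.match(
            rf"\s*{re.escape(bundle)}-\d+\s+\(ocf::heartbeat:redis\):\s+Master\s+(\S+)",
            line,
        )
        if m:
            masters.append(m.group(1))
    return masters

# Sample status after `pcs resource ban redis-bundle controller-1`,
# copied from the verification transcript.
status_after_ban = """\
redis-bundle-0 (ocf::heartbeat:redis): Master controller-3
redis-bundle-1 (ocf::heartbeat:redis): Stopped
redis-bundle-2 (ocf::heartbeat:redis): Slave controller-2
"""

# Exactly one replica should be Master, and it must not be the banned node.
assert master_replicas(status_after_ban) == ["controller-3"]
```

In a real check one would feed `subprocess.run(["pcs", "status"], ...)` output into the helper instead of a canned string.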