Bug 1522822 - Resources are not moved to different node once the node they were started on is reset ungracefully
Summary: Resources are not moved to different node once the node they were started on is reset ungracefully
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 7.5
Assignee: Ken Gaillot
QA Contact: Ofer Blaut
URL:
Whiteboard:
Depends On:
Blocks: 1527810
 
Reported: 2017-12-06 14:11 UTC by Marian Krcmarik
Modified: 2018-04-10 15:35 UTC
CC List: 7 users

Fixed In Version: pacemaker-1.1.18-7
Doc Type: No Doc Update
Doc Text:
Previously, it was sometimes impossible to recover resources from container bundles elsewhere after the host node of the container failed. With this update, Pacemaker now handles ordering of nested Pacemaker Remote connections correctly. As a result, resource recovery proceeds normally for bundled resources.
Clone Of:
: 1527810 (view as bug list)
Environment:
Last Closed: 2018-04-10 15:34:42 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Product Errata RHEA-2018:0860 (last updated 2018-04-10 15:35:20 UTC)

Description Marian Krcmarik 2017-12-06 14:11:38 UTC
Description of problem:
Resources are not moved to a different node once the node they were started on is reset ungracefully. The cluster consists of 3 full Pacemaker nodes and 6 Pacemaker Remote nodes, and all of the nodes host bundle resources with containers. The cluster is an OpenStack OSP12 deployment.
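
For context, a cluster with this shape can be assembled roughly as follows. This is only a hypothetical sketch using pcs (in OSP12 the setup is driven by puppet/ansible, per the package list below); the resource names and container image are taken from the status output further down, while the remaining arguments are placeholders:

# Hypothetical sketch, not the reporter's actual deployment commands.
# A Pacemaker Remote node is an ocf:pacemaker:remote resource:
$ sudo pcs resource create database-0 ocf:pacemaker:remote server=<database-0-ip>
# A bundle runs replicas of a resource inside containers, here on the remote nodes:
$ sudo pcs resource bundle create galera-bundle container docker \
      image=192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest replicas=3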

Version-Release number of selected component (if applicable):
puppet-pacemaker-0.6.0-2.el7ost.noarch
pacemaker-cli-1.1.16-12.el7_4.5.x86_64
ansible-pacemaker-1.0.3-2.el7ost.noarch
pacemaker-1.1.16-12.el7_4.5.x86_64
pacemaker-remote-1.1.16-12.el7_4.5.x86_64
pacemaker-libs-1.1.16-12.el7_4.5.x86_64
pacemaker-cluster-libs-1.1.16-12.el7_4.5.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a cluster (OSP12) with 3 full Pacemaker nodes and 6 Pacemaker Remote nodes that host container-based bundles.
2. Configure and enable fencing.
3. Reset one of the full Pacemaker nodes that has resources started on it (see the sketch below).
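
A minimal sketch of steps 2 and 3, assuming IPMI-based fencing as used in this environment (fence_ipmilan appears in the status output); the device name, BMC address, and credentials below are placeholders:

# Step 2 (sketch): one fence device per node, then enable fencing cluster-wide.
$ sudo pcs stonith create stonith-fence_ipmilan-controller-0 fence_ipmilan \
      ipaddr=<bmc-ip> login=<user> passwd=<password> pcmk_host_list=controller-0
$ sudo pcs property set stonith-enabled=true

# Step 3 (sketch): reset the node ungracefully, bypassing any clean shutdown.
$ echo b | sudo tee /proc/sysrq-trigger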

Actual results:
Resources that lost their node, including the remote connection resources, remain in Stopped status and are not moved to a different node.

Additional info:
$ sudo crm_simulate -SL

Current cluster status:
RemoteNode database-0: UNCLEAN (offline)
RemoteNode database-2: UNCLEAN (offline)
Online: [ controller-0 controller-1 controller-2 ]
RemoteOnline: [ database-1 messaging-0 messaging-1 messaging-2 ]
Containers: [ galera-bundle-1:galera-bundle-docker-1 rabbitmq-bundle-0:rabbitmq-bundle-docker-0 rabbitmq-bundle-1:rabbitmq-bundle-docker-1 rabbitmq-bundle-2:rabbitmq-bundle-docker-2 redis-bundle-0:redis-bundle-docker-0 redis-bundle-2:redis-bundle-docker-2 ]

 database-0	(ocf::pacemaker:remote):	Stopped
 database-1	(ocf::pacemaker:remote):	Started controller-2
 database-2	(ocf::pacemaker:remote):	Stopped
 messaging-0	(ocf::pacemaker:remote):	Started controller-2
 messaging-1	(ocf::pacemaker:remote):	Started controller-2
 messaging-2	(ocf::pacemaker:remote):	Started controller-2
 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started messaging-0
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started messaging-1
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started messaging-2
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	FAILED Master database-0 (UNCLEAN)
   galera-bundle-1	(ocf::heartbeat:galera):	Master database-1
   galera-bundle-2	(ocf::heartbeat:galera):	FAILED Master database-2 (UNCLEAN)
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Slave controller-0
   redis-bundle-1	(ocf::heartbeat:redis):	Stopped
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-2
 ip-192.168.24.11	(ocf::heartbeat:IPaddr2):	Stopped
 ip-10.0.0.104	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.1.19	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-172.17.1.11	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.3.13	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.4.19	(ocf::heartbeat:IPaddr2):	Started controller-2
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Started controller-0
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Stopped
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Started controller-2
 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Stopped
 stonith-fence_ipmilan-525400244e09	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400cdec10	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400c709f7	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400a7f9e0	(stonith:fence_ipmilan):	Started controller-0
 stonith-fence_ipmilan-525400a25787	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-5254005ea387	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400542c06	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400aac413	(stonith:fence_ipmilan):	Started controller-2
 stonith-fence_ipmilan-525400498d34	(stonith:fence_ipmilan):	Stopped

Transition Summary:
 * Fence (reboot) galera-bundle-2 (resource: galera-bundle-docker-2) 'guest is unclean'
 * Fence (reboot) galera-bundle-0 (resource: galera-bundle-docker-0) 'guest is unclean'
 * Start      database-0                             (                   controller-0 )  
 * Start      database-2                             (                   controller-1 )  
 * Recover    galera-bundle-docker-0                 (                     database-0 )  
 * Start      galera-bundle-0                        (                   controller-0 )  
 * Recover    galera:0                               (         Master galera-bundle-0 )  
 * Recover    galera-bundle-docker-2                 (                     database-2 )  
 * Start      galera-bundle-2                        (                   controller-1 )  
 * Recover    galera:2                               (         Master galera-bundle-2 )  
 * Promote    redis:0                                ( Slave -> Master redis-bundle-0 )  
 * Start      redis-bundle-docker-1                  (                   controller-1 )  
 * Start      redis-bundle-1                         (                   controller-1 )  
 * Start      redis:1                                (                 redis-bundle-1 )  
 * Start      ip-192.168.24.11                       (                   controller-0 )  
 * Start      ip-10.0.0.104                          (                   controller-1 )  
 * Start      ip-172.17.1.11                         (                   controller-0 )  
 * Start      ip-172.17.3.13                         (                   controller-1 )  
 * Start      haproxy-bundle-docker-1                (                   controller-1 )  
 * Start      openstack-cinder-volume                (                   controller-0 )  
 * Start      stonith-fence_ipmilan-525400244e09     (                   controller-1 )  
 * Start      stonith-fence_ipmilan-525400cdec10     (                   controller-0 )  
 * Start      stonith-fence_ipmilan-525400c709f7     (                   controller-1 )  
 * Start      stonith-fence_ipmilan-525400a25787     (                   controller-1 )  
 * Start      stonith-fence_ipmilan-5254005ea387     (                   controller-0 )  
 * Start      stonith-fence_ipmilan-525400542c06     (                   controller-1 )  
 * Start      stonith-fence_ipmilan-525400498d34     (                   controller-1 )  

Executing cluster transition:
 * Pseudo action:   redis-bundle-master_pre_notify_start_0
 * Pseudo action:   redis-bundle_start_0
 * Pseudo action:   galera-bundle_demote_0
 * Pseudo action:   galera-bundle-master_demote_0
 * Resource action: redis           notify on redis-bundle-0
 * Resource action: redis           notify on redis-bundle-2
 * Pseudo action:   redis-bundle-master_confirmed-pre_notify_start_0
 * Pseudo action:   redis-bundle-master_start_0
Transition failed: terminated
An invalid transition was produced

Revised cluster status:
RemoteNode database-0: UNCLEAN (offline)
RemoteNode database-2: UNCLEAN (offline)
Online: [ controller-0 controller-1 controller-2 ]
RemoteOnline: [ database-1 messaging-0 messaging-1 messaging-2 ]
Containers: [ galera-bundle-1:galera-bundle-docker-1 rabbitmq-bundle-0:rabbitmq-bundle-docker-0 rabbitmq-bundle-1:rabbitmq-bundle-docker-1 rabbitmq-bundle-2:rabbitmq-bundle-docker-2 redis-bundle-0:redis-bundle-docker-0 redis-bundle-2:redis-bundle-docker-2 ]

 database-0	(ocf::pacemaker:remote):	Stopped
 database-1	(ocf::pacemaker:remote):	Started controller-2
 database-2	(ocf::pacemaker:remote):	Stopped
 messaging-0	(ocf::pacemaker:remote):	Started controller-2
 messaging-1	(ocf::pacemaker:remote):	Started controller-2
 messaging-2	(ocf::pacemaker:remote):	Started controller-2
 Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp12/openstack-rabbitmq-docker:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started messaging-0
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started messaging-1
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started messaging-2
 Docker container set: galera-bundle [192.168.24.1:8787/rhosp12/openstack-mariadb-docker:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	FAILED Master database-0 (UNCLEAN)
   galera-bundle-1	(ocf::heartbeat:galera):	Master database-1
   galera-bundle-2	(ocf::heartbeat:galera):	FAILED Master database-2 (UNCLEAN)
 Docker container set: redis-bundle [192.168.24.1:8787/rhosp12/openstack-redis-docker:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Slave controller-0
   redis-bundle-1	(ocf::heartbeat:redis):	Stopped
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-2
 ip-192.168.24.11	(ocf::heartbeat:IPaddr2):	Stopped
 ip-10.0.0.104	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.1.19	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-172.17.1.11	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.3.13	(ocf::heartbeat:IPaddr2):	Stopped
 ip-172.17.4.19	(ocf::heartbeat:IPaddr2):	Started controller-2
 Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp12/openstack-haproxy-docker:pcmklatest]
   haproxy-bundle-docker-0	(ocf::heartbeat:docker):	Started controller-0
   haproxy-bundle-docker-1	(ocf::heartbeat:docker):	Stopped
   haproxy-bundle-docker-2	(ocf::heartbeat:docker):	Started controller-2
 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Stopped
 stonith-fence_ipmilan-525400244e09	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400cdec10	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400c709f7	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400a7f9e0	(stonith:fence_ipmilan):	Started controller-0
 stonith-fence_ipmilan-525400a25787	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-5254005ea387	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400542c06	(stonith:fence_ipmilan):	Stopped
 stonith-fence_ipmilan-525400aac413	(stonith:fence_ipmilan):	Started controller-2
 stonith-fence_ipmilan-525400498d34	(stonith:fence_ipmilan):	Stopped
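
For reference, the same transition analysis can also be reproduced offline against a saved copy of the CIB rather than the live cluster; a sketch, not the reporter's exact procedure:

$ sudo cibadmin --query > cib.xml    # snapshot the current cluster configuration and status
$ crm_simulate -S -x cib.xml         # re-run the simulation against the saved CIB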

Comment 4 Andrew Beekhof 2017-12-08 03:00:29 UTC
We need all of these commits except the "Test:" one at the end.

+ b82a555: Test: PE: Ensure stop operations occur after stopped remote connections have been brought up  (HEAD -> master, public/master, origin/master, origin/HEAD)
+ 7758ecb: Fix: PE: Ensure stop operations occur after stopped remote connections have been brought up 
+ 1c6c22a: Fix: PE: Remote connection resources are safe to require only quorum 
+ 03041d7: Fix: PE: Only allowed nodes need to be considered when ordering resource startup after _all_ recovery 
+ d96e871: Fix: PE: Ordering bundle child stops/demotes after container fencing causes graph loops 
+ 33f50f6: Fix: PE: Passing boolean instead of a pointer

Comment 11 errata-xmlrpc 2018-04-10 15:34:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0860
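
To confirm a system has picked up the fixed build referenced by this advisory (version string taken from the "Fixed In Version" field above):

$ rpm -q pacemaker pacemaker-remote
# Expect pacemaker-1.1.18-7 or later on both the cluster and remote nodes.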

