Bug 1153701

Summary: pcsd: removing all nodes at once leaves the cluster in broken state
Product: Red Hat Enterprise Linux 7
Reporter: Radek Steiger <rsteiger>
Component: pcs
Assignee: Chris Feist <cfeist>
Status: CLOSED ERRATA
QA Contact: Radek Steiger <rsteiger>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 7.1
CC: cluster-maint, mjuricek, tojeline
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: pcs-0.9.134-1.el7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-03-05 09:20:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1184154
Bug Blocks:

Description Radek Steiger 2014-10-16 15:10:42 UTC
Description of problem:

Selecting and removing all nodes should ideally de-configure (destroy) the whole cluster. Instead, only the first node is removed, and the rest of the cluster ends up in a broken state.

It looks like the first node is destroyed before the removal of any other node is even attempted, so the subsequent removals fail because the corosync configuration no longer exists. It may be relevant that the GUI was opened from the first node.
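
A minimal sketch of the ordering that appears to cause the failure. The pcs commands below only approximate what pcsd runs internally, and the prompts reuse node names from this report; this is an illustration, not captured output:

[root@virt-041 ~]# pcs cluster destroy
[root@virt-041 ~]# pcs cluster node remove virt-042
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory

Once the first node has destroyed its own configuration, every subsequent removal fails the same way, leaving virt-042 through virt-044 running.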

Status after unsuccessful removal:

[root@virt-042 pcsd]# pcs status nodes both
Corosync Nodes:
 Online: virt-042 virt-043 virt-044 
 Offline: 
Pacemaker Nodes:
 Online: virt-042 virt-043 virt-044 
 Standby: 
 Offline: virt-041 


[root@virt-042 pcsd]# pcs status
Cluster name: r7cluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Thu Oct 16 16:53:00 2014
Last change: Thu Oct 16 16:52:29 2014
Stack: corosync
Current DC: virt-043 (3) - partition with quorum
Version: 1.1.12-a14efad
4 Nodes configured
0 Resources configured

Online: [ virt-042 virt-043 virt-044 ]
OFFLINE: [ virt-041 ]

Full list of resources:

PCSD Status:
  virt-042: Online
  virt-043: Online
  virt-044: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled




Version-Release number of selected component (if applicable):
pcs-0.9.130-1.el7.x86_64


How reproducible:
Always


Steps to Reproduce:
1. In the GUI, check all nodes in the node list.
2. Click Remove and confirm.


Actual results:
Only the first node is stopped and removed from the global corosync configuration, but not from pacemaker. The rest of the cluster is left running.


Expected results:
All nodes are stopped and de-configured properly.
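
For comparison, a CLI equivalent of the expected behavior would be a full cluster teardown along these lines (a sketch, assuming the --all variants available in pcs 0.9; not output captured from the reproducer):

[root@virt-041 ~]# pcs cluster stop --all
[root@virt-041 ~]# pcs cluster destroy --all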

Comment 3 Chris Feist 2014-10-17 21:59:30 UTC
Fixed upstream

https://github.com/feist/pcs/commit/34cac3b2554f0cbbe663ec9f2c3a3f8e4dad4a0e


Before the upstream fix, checking all nodes and clicking Remove removed only the first node.

After the upstream fix, checking all nodes removes all of the nodes.
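
Presumably the fix amounts to ordering the operations so that the remote nodes are removed while the corosync configuration still exists, and the local node is de-configured last. A hedged sketch of that ordering (node names reused from this report; illustrative, not captured output):

[root@virt-041 ~]# pcs cluster node remove virt-044
[root@virt-041 ~]# pcs cluster node remove virt-043
[root@virt-041 ~]# pcs cluster node remove virt-042
[root@virt-041 ~]# pcs cluster destroy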

Comment 4 Tomas Jelinek 2014-10-21 14:33:29 UTC
Before Fix:
[root@rh70-node1 ~]# rpm -q pcs
pcs-0.9.130-1.el7.x86_64

In the GUI, check all nodes, click Remove, and confirm.

[root@rh70-node1:~]# pcs status nodes both
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory
[root@rh70-node2:~]# pcs status nodes both
Corosync Nodes:
 Online: rh70-node2 
 Offline: 
Pacemaker Nodes:
 Online: rh70-node2 
 Standby: 
 Offline: rh70-node1


After Fix:
[root@rh70-node1:~]# rpm -q pcs
pcs-0.9.134-1.el7.x86_64

In the GUI, check all nodes, click Remove, and confirm.

[root@rh70-node1:~]# pcs status nodes both
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory
[root@rh70-node2:~]# pcs status nodes both
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory

Comment 11 errata-xmlrpc 2015-03-05 09:20:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0415.html