Bug 1394882 - Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
Summary: Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: common-ha
Version: 3.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kaleb KEITHLEY
QA Contact:
URL:
Whiteboard:
Depends On: 1392895 1394881
Blocks: 1394883
 
Reported: 2016-11-14 16:00 UTC by Kaleb KEITHLEY
Modified: 2017-03-08 10:18 UTC
CC: 10 users

Fixed In Version: glusterfs-3.9.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1394881
Clones: 1394883
Environment:
Last Closed: 2017-01-24 11:38:49 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaleb KEITHLEY 2016-11-14 16:00:55 UTC
+++ This bug was initially created as a clone of Bug #1394881 +++

+++ This bug was initially created as a clone of Bug #1392895 +++

Description of problem:
Failed to enable nfs-ganesha after disabling nfs-ganesha cluster

Version-Release number of selected component (if applicable):
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-3.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create the nfs-ganesha cluster.
2. Disable nfs-ganesha.
3. Try to enable nfs-ganesha again (see the command sketch below).
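
A minimal command sketch of these steps, assuming the usual HA prerequisites (the shared storage volume and /etc/ganesha/ganesha-ha.conf) are already in place on all nodes; the commands themselves match the log snippet below:

    # Step 1: create the HA cluster
    gluster nfs-ganesha enable
    # Step 2: tear it down
    gluster nfs-ganesha disable
    # Step 3: enable again -- this is the step that fails
    gluster nfs-ganesha enable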


Actual results:
Failed to enable nfs-ganesha after disabling nfs-ganesha cluster

Expected results:
nfs-ganesha should be enabled.

Additional info:

Command line log snippet:

[root@dhcp46-111 ~]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success 
[root@dhcp46-111 ~]# 
[root@dhcp46-111 ~]# 
[root@dhcp46-111 ~]# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: Failed to set up HA config for NFS-Ganesha. Please check the log file for details


/var/log/messages snippet:

Nov  8 17:49:30 dhcp46-111 logger: setting up cluster G1478590172.02 with the following dhcp46-111.lab.eng.blr.redhat.com dhcp46-115.lab.eng.blr.redhat.com dhcp46-139.lab.eng.blr.redhat.com dhcp46-124.lab.eng.blr.redhat.com
Nov  8 17:49:35 dhcp46-111 logger: pcs cluster setup --name G1478590172.02 dhcp46-111.lab.eng.blr.redhat.com dhcp46-115.lab.eng.blr.redhat.com dhcp46-139.lab.eng.blr.redhat.com dhcp46-124.lab.eng.blr.redhat.com failed

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-08 07:31:09 EST ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Arthy Loganathan on 2016-11-08 07:32 EST ---



--- Additional comment from Soumya Koduri on 2016-11-08 07:47:45 EST ---

Could you please provide access to your machine? Last night when I looked at it, it complained about the other nodes already being part of a cluster. Once we remove the /etc/corosync/corosync.conf file manually, the setup succeeds.

There might have been some changes in the latest RHEL 7.3 pacemaker/corosync packages, as this was not the case with the previous version. Could you please confirm that?
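
For reference, a rough sketch of that manual workaround, using the node names from the /var/log/messages snippet in the description (the ssh loop is illustrative only):

    for node in dhcp46-111 dhcp46-115 dhcp46-139 dhcp46-124; do
        ssh root@${node}.lab.eng.blr.redhat.com 'rm -f /etc/corosync/corosync.conf'
    done
    gluster nfs-ganesha enable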

--- Additional comment from Arthy Loganathan on 2016-11-09 00:46:08 EST ---

Please access the machine with the details below:
IP: 10.70.46.111
Credentials: root/redhat

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-14 05:07:24 EST ---

This bug is automatically being provided 'pm_ack+' for the release flag 'rhgs-3.2.0', the current release of Red Hat Gluster Storage 3 under active development, having been appropriately marked for the release, and having been provided ACK from Development and QE.

If the 'blocker' flag had been proposed/set on this BZ, it has now been unset, since the 'blocker' flag is not valid for the current phase of RHGS 3.2.0 development.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-14 06:53:12 EST ---

Since this bug has been approved for the RHGS 3.2.0 release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.2.0+', and through the Internal Whiteboard entry of '3.2.0', the Target Release is being automatically set to 'RHGS 3.2.0'.

Comment 1 Worker Ant 2016-11-14 16:22:56 UTC
REVIEW: http://review.gluster.org/15844 (common-ha: remove /etc/corosync/corosync.conf in teardown/cleanup) posted (#1) for review on release-3.9 by Kaleb KEITHLEY (kkeithle)

Comment 2 Worker Ant 2016-11-15 11:35:19 UTC
REVIEW: http://review.gluster.org/15844 (common-ha: remove /etc/corosync/corosync.conf in teardown/cleanup) posted (#2) for review on release-3.9 by Kaleb KEITHLEY (kkeithle)

Comment 3 Worker Ant 2016-11-15 12:41:53 UTC
REVIEW: http://review.gluster.org/15844 (common-ha: remove /etc/corosync/corosync.conf in teardown/cleanup) posted (#3) for review on release-3.9 by Kaleb KEITHLEY (kkeithle)

Comment 4 Worker Ant 2016-11-15 19:30:57 UTC
COMMIT: http://review.gluster.org/15844 committed in release-3.9 by Kaleb KEITHLEY (kkeithle) 
------
commit b13c2af3e970990537e66d00a107a61b8c3fa643
Author: Kaleb S. KEITHLEY <kkeithle>
Date:   Mon Nov 14 11:21:49 2016 -0500

    common-ha: remove /etc/corosync/corosync.conf in teardown/cleanup
    
    In newer versions of corosync we observe that after tearing down an
    existing HA cluster, when trying to set up a new cluster, `pcs cluster
    start --all` will fail if corosync believes the nodes are already in
    the cluster based on the presence of, and the contents of
    /etc/corosync/corosync.conf
    
    So we summarily delete it. (An alternative/work-around is to use `pcs
    cluster start --force --all`)
    
    Change-Id: I225f4e35e3b605e860ec4f9537c40ed94ac68625
    BUG: 1394882
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/15844
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
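
The fix lands in the common-ha teardown/cleanup path, per the commit subject above. A minimal sketch of the kind of cleanup step the patch adds (function and variable names here are illustrative, not the actual ganesha-ha.sh code):

    # Illustrative sketch only -- not the actual ganesha-ha.sh code.
    # On teardown/cleanup, remove the stale corosync config on every node so
    # a later `pcs cluster setup` does not refuse because the node appears to
    # already belong to a cluster.
    cleanup_corosync_conf()
    {
        local servers="$1"    # space-separated list of cluster nodes
        local node
        for node in ${servers}; do
            ssh root@${node} 'rm -f /etc/corosync/corosync.conf'
        done
    }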

Comment 5 Kaushal 2017-03-08 10:18:51 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.1, please open a new bug report.

glusterfs-3.9.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029725.html
[2] https://www.gluster.org/pipermail/gluster-users/

