Bug 1405955

Summary: NFS-Ganesha: Volume reset for any option resets the ganesha enable option and brings down the ganesha services
Product: [Community] GlusterFS
Reporter: Jiffin <jthottan>
Component: glusterd
Assignee: Jiffin <jthottan>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.7.18
CC: amukherj, bugs, jthottan, kkeithle, rhinduja, rhs-bugs, sbhaloth, skoduri, storage-qa-internal
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.19
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1405951
Environment:
Last Closed: 2017-01-18 13:39:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1397450, 1397795, 1402366, 1405951
Bug Blocks:

Description Jiffin 2016-12-19 09:53:00 UTC
+++ This bug was initially created as a clone of Bug #1405951 +++

+++ This bug was initially created as a clone of Bug #1402366 +++

+++ This bug was initially created as a clone of Bug #1397795 +++

+++ This bug was initially created as a clone of Bug #1397450 +++

Description of problem:
********************************
When any volume option is reset, an error is seen in the CLI output, the ganesha options are also reset, and the ganesha services are brought down.

I executed gluster vol reset <volname> readdir-ahead:

gluster vol reset ganesha readdir-ahead 
volume reset: success: Dynamic export addition/deletion failed. Please see log file for details



 pcs status
Cluster name: ganesha-ha-360
Stack: corosync
Current DC: dhcp47-147.lab.eng.blr.redhat.com (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Nov 22 18:15:59 2016		Last change: Tue Nov 22 18:15:42 2016 by root via crm_attribute on dhcp47-137.lab.eng.blr.redhat.com

4 nodes and 24 resources configured

Online: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Stopped: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]
 Resource Group: dhcp47-147.lab.eng.blr.redhat.com-group
     dhcp47-147.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-147.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-147.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped
 Resource Group: dhcp47-137.lab.eng.blr.redhat.com-group
     dhcp47-137.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-137.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-137.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped
 Resource Group: dhcp47-104.lab.eng.blr.redhat.com-group
     dhcp47-104.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-104.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-104.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped
 Resource Group: dhcp47-105.lab.eng.blr.redhat.com-group
     dhcp47-105.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-105.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-105.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped



Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 7cc41702-7189-41d7-8931-051ad49ba1d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: dhcp47-104.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Brick2: dhcp47-105.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Brick3: dhcp47-147.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Options Reconfigured:
ganesha.enable: off
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.enable-shared-storage: enable
nfs-ganesha: enable




Version-Release number of selected component (if applicable):
nfs-ganesha-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a ganesha cluster setup
2. Create a volume and start it
3. Execute gluster vol reset <volname> <any vol option>

Actual results:
It resets the ganesha.enable option as well, brings ganesha and other services to the stopped state, and pcs status shows the resources stopped on all nodes.

Expected results:
The volume reset for a specific option should only reset that option; it should not change ganesha-related options and should not bring down the cluster.

--- Additional comment from Worker Ant on 2016-12-19 04:49:59 EST ---

REVIEW: http://review.gluster.org/16197 (glusterd/ganesha : handle volume reset properly for ganesha options) posted (#1) for review on release-3.8 by jiffin tony Thottan (jthottan)

Comment 1 Worker Ant 2016-12-19 10:00:49 UTC
REVIEW: http://review.gluster.org/16199 (glusterd/ganesha : handle volume reset properly for ganesha options) posted (#1) for review on release-3.7 by jiffin tony Thottan (jthottan)

Comment 2 Worker Ant 2016-12-23 12:12:07 UTC
COMMIT: http://review.gluster.org/16199 committed in release-3.7 by Kaleb KEITHLEY (kkeithle) 
------
commit 463f06e450798991080d98c6b0fbf195d7e70c93
Author: Jiffin Tony Thottan <jthottan>
Date:   Wed Nov 23 16:04:26 2016 +0530

    glusterd/ganesha : handle volume reset properly for ganesha options
    
    The "gluster volume reset" should first unexport the volume and then delete
    export configuration file. Also reset option is not applicable for ganesha.enable
    if volume value is "all".
    This patch also changes the name of create_export_config into manange_export_config
    
    Upstream reference :
    >Change-Id: Ie81a49e7d3e39a88bca9fbae5002bfda5cab34af
    >BUG: 1397795
    >Signed-off-by: Jiffin Tony Thottan <jthottan>
    >Reviewed-on: http://review.gluster.org/15914
    >Smoke: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: soumya k <skoduri>
    >Reviewed-by: Kaleb KEITHLEY <kkeithle>
    >Signed-off-by: Jiffin Tony Thottan <jthottan>
    
    Change-Id: Ie81a49e7d3e39a88bca9fbae5002bfda5cab34af
    BUG: 1405955
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/16054
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/16199

Comment 3 Kaushal 2017-01-18 13:39:24 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.19, please open a new bug report.

glusterfs-3.7.19 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/gluster-users/2017-January/029623.html
[2] https://www.gluster.org/pipermail/gluster-users/