Bug 1405951 - NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and brings down the ganesha services
Summary: NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Jiffin
QA Contact:
URL:
Whiteboard:
Depends On: 1397450 1397795 1402366
Blocks: 1405955
 
Reported: 2016-12-19 09:36 UTC by Jiffin
Modified: 2017-01-16 12:27 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.8.8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1402366
Clones: 1405955
Environment:
Last Closed: 2017-01-16 12:27:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Jiffin 2016-12-19 09:36:52 UTC
+++ This bug was initially created as a clone of Bug #1402366 +++

+++ This bug was initially created as a clone of Bug #1397795 +++

+++ This bug was initially created as a clone of Bug #1397450 +++

Description of problem:
********************************
When any volume option is reset, an error is seen in the CLI output, the ganesha options are also reset, and the ganesha services are brought down.

I executed gluster vol reset <volname> readdir-ahead:

gluster vol reset ganesha readdir-ahead 
volume reset: success: Dynamic export addition/deletion failed. Please see log file for details
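
For context, the regression can be confirmed right after the reset with the checks below. This is only an illustrative sketch; the volume name "ganesha" is taken from the command above.

gluster volume get ganesha ganesha.enable   # expected to stay "on", but the option is cleared on affected builds
pcs status                                  # the nfs-grace clone and the per-node IPaddr/portblock groups show Stopped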



 pcs status
Cluster name: ganesha-ha-360
Stack: corosync
Current DC: dhcp47-147.lab.eng.blr.redhat.com (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Nov 22 18:15:59 2016		Last change: Tue Nov 22 18:15:42 2016 by root via crm_attribute on dhcp47-137.lab.eng.blr.redhat.com

4 nodes and 24 resources configured

Online: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Stopped: [ dhcp47-104.lab.eng.blr.redhat.com dhcp47-105.lab.eng.blr.redhat.com dhcp47-137.lab.eng.blr.redhat.com dhcp47-147.lab.eng.blr.redhat.com ]
 Resource Group: dhcp47-147.lab.eng.blr.redhat.com-group
     dhcp47-147.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-147.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-147.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped
 Resource Group: dhcp47-137.lab.eng.blr.redhat.com-group
     dhcp47-137.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-137.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-137.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped
 Resource Group: dhcp47-104.lab.eng.blr.redhat.com-group
     dhcp47-104.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-104.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-104.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped
 Resource Group: dhcp47-105.lab.eng.blr.redhat.com-group
     dhcp47-105.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Stopped
     dhcp47-105.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped
     dhcp47-105.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Stopped



Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 7cc41702-7189-41d7-8931-051ad49ba1d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: dhcp47-104.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Brick2: dhcp47-105.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Brick3: dhcp47-147.lab.eng.blr.redhat.com:/var/lib/glusterd/ss_brick
Options Reconfigured:
ganesha.enable: off
features.cache-invalidation: off
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.enable-shared-storage: enable
nfs-ganesha: enable




Version-Release number of selected component (if applicable):
nfs-ganesha-2.4.1-1.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a ganesha cluster setup.
2. Create a volume and start it.
3. Execute gluster vol reset <volname> <any vol option> (a command-level sketch follows below).
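
A rough end-to-end reproduction, assuming a hypothetical volume named testvol, placeholder node names, and an HA cluster that is already configured (ganesha-ha.conf in place):

gluster volume set all cluster.enable-shared-storage enable   # shared storage used by the ganesha setup
gluster nfs-ganesha enable                                    # bring up the NFS-Ganesha HA cluster
gluster volume create testvol replica 3 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume start testvol
gluster volume set testvol ganesha.enable on                  # export the volume via NFS-Ganesha
gluster volume reset testvol performance.readdir-ahead        # resetting an unrelated option triggers the bug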

Actual results:
It resets the ganesha.enable option as well, brings ganesha and the related services to a stopped state, and pcs status shows the resources stopped on all nodes.

Expected results:
Resetting a specific volume option should only reset that option; it should not change the ganesha-related options and should not bring down the cluster.
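
With the fix in place one would expect roughly the following, again an illustrative sketch using the volume from the report:

gluster volume reset ganesha performance.readdir-ahead   # only readdir-ahead goes back to its default
gluster volume get ganesha ganesha.enable                # still reports "on"
pcs status                                               # HA resources remain Started; no failover or stop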

Comment 1 Worker Ant 2016-12-19 09:49:59 UTC
REVIEW: http://review.gluster.org/16197 (glusterd/ganesha : handle volume reset properly for ganesha options) posted (#1) for review on release-3.8 by jiffin tony Thottan (jthottan)

Comment 2 Worker Ant 2016-12-23 12:11:56 UTC
COMMIT: http://review.gluster.org/16197 committed in release-3.8 by Kaleb KEITHLEY (kkeithle) 
------
commit d513f41ef8089a9df2fe1240dd9f1952b9a41767
Author: Jiffin Tony Thottan <jthottan>
Date:   Wed Nov 23 16:04:26 2016 +0530

    glusterd/ganesha : handle volume reset properly for ganesha options
    
    The "gluster volume reset" should first unexport the volume and then delete
    export configuration file. Also reset option is not applicable for ganesha.enable
    if volume value is "all".
    This patch also changes the name of create_export_config into manange_export_config
    
    Upstream reference :
    >Change-Id: Ie81a49e7d3e39a88bca9fbae5002bfda5cab34af
    >BUG: 1397795
    >Signed-off-by: Jiffin Tony Thottan <jthottan>
    >Reviewed-on: http://review.gluster.org/15914
    >Smoke: Gluster Build System <jenkins.org>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: soumya k <skoduri>
    >Reviewed-by: Kaleb KEITHLEY <kkeithle>
    >Signed-off-by: Jiffin Tony Thottan <jthottan>
    
    Change-Id: Ie81a49e7d3e39a88bca9fbae5002bfda5cab34af
    BUG: 1405951
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/16054
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Signed-off-by: Jiffin Tony Thottan <jthottan>
    Reviewed-on: http://review.gluster.org/16197
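
For readers who are not familiar with the export workflow, the ordering the patch enforces corresponds roughly to the manual steps below. This is a hedged illustration, not the actual glusterd code path; the export id and the config file path are examples and can differ per setup and release.

# 1. Ask the running ganesha daemon to unexport the volume first
dbus-send --print-reply --system --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:2
# 2. Only then remove the per-volume export configuration file
rm /etc/ganesha/exports/export.testvol.conf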

Comment 3 Niels de Vos 2017-01-16 12:27:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.8, please open a new bug report.

glusterfs-3.8.8 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-January/000064.html
[2] https://www.gluster.org/pipermail/gluster-users/

