Bug 1255365 - Enabling or disabling USS restarts NFS server process on all nodes
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-nfs
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Assigned To: Niels de Vos
QA Contact: storage-qa-internal@redhat.com
Reported: 2015-08-20 07:34 EDT by nchilaka
Modified: 2016-02-18 05:51 EST

Doc Type: Bug Fix
Last Closed: 2016-01-29 06:53:51 EST
Type: Bug


Attachments: None
Description nchilaka 2015-08-20 07:34:09 EDT
Description of problem:
======================
1) If a user enables or disables USS on a volume, the NFS server processes are restarted on all the brick nodes.
This is not expected behaviour for a highly available filesystem.

2) I have also seen that after USS is enabled and a volume is mounted using NFS with .snaps being accessed, if USS is turned off and enabled again, the user needs to come out of the mount path and get back into it to access .snaps.



Version-Release number of selected component (if applicable):
============================================================
[root@localhost ~]# gluster --version
glusterfs 3.7.1 built on Jul 19 2015 02:16:06
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@localhost ~]# cat /etc/redhat-*
Red Hat Enterprise Linux Server release 7.1 (Maipo)
Red Hat Gluster Storage Server 3.1
[root@localhost ~]# rpm -qa|grep gluster
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
vdsm-gluster-4.16.20-1.2.el7rhgs.noarch
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
gluster-nagios-common-0.2.0-2.el7rhgs.noarch
gluster-nagios-addons-0.2.4-4.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64
[root@localhost ~]# 


Steps to Reproduce:
===================
1. Create a volume and a snapshot.
2. Using gluster volume status, note down the NFS server PIDs.
3. Turn on USS.
4. Issue gluster volume status again; the NFS PIDs will have changed (the NFS server is killed and restarted).
5. From an NFS mount, access the .snaps folder and view the files in the snapshot created above.
6. Turn USS off and then on again.
7. .snaps is not accessible anymore.
8. The user has to come out of the mount path and re-enter it to access .snaps again (see the command sketch below).
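
A rough command sequence for the steps above (a minimal sketch; the volume name testvol, snapshot name snap1, brick paths, host names node1/node2 and mount point /mnt/nfs are all assumptions, not taken from the report):

# 1. Create and start a volume, then take a snapshot
gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b1
gluster volume start testvol
gluster snapshot create snap1 testvol

# 2. Note the NFS Server PIDs on all nodes
gluster volume status testvol nfs

# 3./4. Enable USS and check again - the NFS Server PIDs change,
#       i.e. gluster-nfs was killed and restarted
gluster volume set testvol features.uss enable
gluster volume status testvol nfs

# 5. Mount over NFSv3 and browse the snapshot through .snaps
mount -t nfs -o vers=3 node1:/testvol /mnt/nfs
ls /mnt/nfs/.snaps/snap1

# 6.-8. Toggle USS off and on again; .snaps stops being accessible
#       until the client leaves and re-enters the mount path
gluster volume set testvol features.uss disable
gluster volume set testvol features.uss enable
ls /mnt/nfs/.snaps
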
Comment 2 Niels de Vos 2015-08-25 11:39:54 EDT
Do you have a tcpdump and/or output from rpcdebug taken on the NFS-client when this happens?

I'd also like to know if this only happens with NFS, or also with FUSE mountpoints.
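
For reference, one way the requested data could be gathered on the NFS client (a sketch; the interface name eth0 and the capture file name are assumptions, and 2049 is the default NFS port):

# Capture the NFS traffic while USS is toggled on the server
tcpdump -i eth0 -s 0 -w uss-toggle.pcap port 2049 &

# Enable kernel NFS/RPC client debugging (messages go to the system log)
rpcdebug -m nfs -s all
rpcdebug -m rpc -s all

# ... reproduce the .snaps access failure from the mount point ...

# Switch debugging off again and stop the capture
rpcdebug -m nfs -c all
rpcdebug -m rpc -c all
kill %1
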
Comment 3 Soumya Koduri 2016-01-29 06:53:51 EST
Maybe you should test it out by disabling nfs-client caching (using the mount option noac). It is by design that gluster-nfs restarts whenever there is a change in the volfile (which includes enabling USS). Considering that gluster-nfs will be deprecated soon, re-designing it doesn't make much sense unless the expected behaviour cannot be achieved with the alternate solution (which is to use nfs-ganesha). Closing it.
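
For reference, an NFSv3 mount with attribute caching disabled as suggested above could look like this (a sketch; the server name, volume name and mount point are placeholders):

# noac disables NFS client attribute caching, so the client re-fetches
# attributes instead of serving possibly stale cached ones
mount -t nfs -o vers=3,noac node1:/testvol /mnt/nfs
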
Comment 4 nchilaka 2016-02-18 05:51:52 EST
I think it should be easily reproducible. I don't have a statedump for this.
