Bug 807213 - 'gluster volume status <vol_name> nfs client' command reports wrong message
Status: CLOSED DEFERRED
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Kaushal
Depends On:
Blocks:
Reported: 2012-03-27 06:02 EDT by Shwetha Panduranga
Modified: 2015-12-01 11:45 EST
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-04-25 02:08:38 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shwetha Panduranga 2012-03-27 06:02:46 EDT
Description of problem:
The "gluster volume status <volume_name> nfs client" command shows "Clients connected" as "0" even when there are NFS clients connected to the volume.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Often

Steps to Reproduce:
1. Create a volume and start it.
2. Mount the volume from one or more clients over NFS.
3. Execute "gluster volume status <volume_name> nfs client".

Actual results:

[03/27/12 - 19:59:25 root@APP-SERVER1 dstore1]# gluster volume status dstore nfs client
Client connections for volume dstore
----------------------------------------------
NFS Server : localhost
Clients connected : 0
----------------------------------------------
NFS Server : 192.168.2.36
Clients connected : 0
----------------------------------------------

[03/27/12 - 20:49:05 root@APP-SERVER1 ~]# ps -ef | grep nfs
root     31405     1  0 19:33 ?        00:00:14 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /etc/glusterd/nfs/run/nfs.pid -l /usr/local/var/log/glusterfs/nfs.log -S /tmp/995ec7752740c1876eba45d21e4c78ff.socket

[03/27/12 - 20:49:54 root@APP-SERVER2 ~]#  ps -ef | grep nfs
root      7386     1  0 19:33 ?        00:00:15 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /etc/glusterd/nfs/run/nfs.pid -l /usr/local/var/log/glusterfs/nfs.log -S /tmp/0888e95b7fd2736ab48aeb83267f201c.socket

[03/27/12 - 19:38:44 root@APP-CLIENT1 ~]# mount
192.168.2.35:/dstore on /mnt/nfsc1 type nfs (rw,vers=3,addr=192.168.2.35)
192.168.2.36:/dstore on /mnt/nfsc2 type nfs (rw,vers=3,addr=192.168.2.36)
Comment 1 Kaushal 2012-04-03 06:30:13 EDT
I am assuming that the NFS server was restarted, or that something (such as a graph change) caused it to restart after the NFS mounts were created.
In that case this output is expected, because of the way the NFS server is designed. The NFS server is stateless, which means that it generally does not maintain any information about the clients connecting to it. It receives requests and responds to them as they arrive.
The Gluster NFS server does maintain a list of mounts, from which the "volume status" command extracts the client list. Whenever the NFS server gets a new mount request, this list is updated with the new client. The list remains correct as long as the NFS server process is alive; if the process is killed and restarted, the list is reset to empty. NFS is designed so that once clients have performed a mount, they can continue functioning without remounting when the server process restarts. Since no new mount is performed, the mount list of the NFS server stays empty, and the "volume status" output lists no clients.
Fixing this behavior would require a change to the NFS server itself.

If my assumption is not the case, I'll need more details.
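The mount-list lifecycle described above can be sketched in plain shell. This is a simulation of the behavior, not actual Gluster code; the file-backed list and the function names (`record_mount`, `clients_connected`, `server_restart`) are hypothetical stand-ins:

```shell
#!/bin/sh
# Simulated Gluster NFS mount list: one connected client per line.
# Illustration of the lifecycle described above, not real gluster code.
MOUNTLIST=$(mktemp)

record_mount() {        # called when the server receives a new mount request
    echo "$1" >> "$MOUNTLIST"
}

clients_connected() {   # what "volume status ... nfs client" would report
    wc -l < "$MOUNTLIST"
}

server_restart() {      # the in-memory list does not survive a restart
    : > "$MOUNTLIST"
}

record_mount 192.168.2.1
record_mount 192.168.2.2
echo "before restart: $(clients_connected) clients"   # 2 clients

server_restart
# The clients keep using their existing mounts (NFS is stateless),
# so no new mount request arrives and the list stays empty.
echo "after restart:  $(clients_connected) clients"   # 0 clients
```

The key point is the last step: because the clients never remount, nothing ever repopulates the list after a restart.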
Comment 2 Shwetha Panduranga 2012-04-25 01:28:02 EDT
As explained by Kaushal, the NFS client connections are not shown after the NFS server is restarted. This behavior is verified:

[04/25/12 - 10:48:08 root@APP-SERVER1 ~]# gluster volume status dstore nfs client
Client connections for volume dstore
----------------------------------------------
NFS Server : localhost
Clients connected : 1
Hostname                                               BytesRead    BytesWritten
--------                                               ---------    ------------
192.168.2.1:711                                                0               0
----------------------------------------------
NFS Server : 192.168.2.36
Clients connected : 0
----------------------------------------------
NFS Server : 192.168.2.37
Clients connected : 0
----------------------------------------------

[04/25/12 - 10:48:10 root@APP-SERVER1 ~]# gluster volume set dstore stat-prefetch off 
Set volume successful

[04/25/12 - 10:48:46 root@APP-SERVER1 ~]# gluster volume status dstore nfs client
Client connections for volume dstore
----------------------------------------------
NFS Server : localhost
Clients connected : 0
----------------------------------------------
NFS Server : 192.168.2.36
Clients connected : 0
----------------------------------------------
NFS Server : 192.168.2.37
Clients connected : 0
----------------------------------------------
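The restart triggered by the "volume set" above can be confirmed by comparing the PID in the NFS server's pidfile before and after the option change. Below is a minimal self-contained sketch of that check, using a dummy background process in place of the real glusterfs NFS server and a temp pidfile instead of /etc/glusterd/nfs/run/nfs.pid:

```shell
#!/bin/sh
# Dummy stand-in for the glusterfs NFS server process and its pidfile.
PIDFILE=$(mktemp)

start_server() {
    sleep 60 &            # pretend this is the glusterfs nfs process
    echo $! > "$PIDFILE"
}

restart_server() {        # what glusterd effectively does on a graph change
    kill "$(cat "$PIDFILE")" 2>/dev/null
    start_server
}

start_server
before=$(cat "$PIDFILE")

# In the real setup, "gluster volume set dstore stat-prefetch off"
# would trigger the graph change here.
restart_server

after=$(cat "$PIDFILE")
if [ "$before" != "$after" ]; then
    echo "NFS server restarted: PID $before -> $after (mount list now empty)"
fi

kill "$(cat "$PIDFILE")" 2>/dev/null
```

A changed PID confirms the process was respawned, which is exactly the point at which the mount list is lost.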
Comment 3 Kaushal 2012-04-25 02:08:38 EDT
Closing this as deferred for now.
This needs to be documented as a known issue.

The documentation can be along the following lines:
The client status for the Gluster NFS server is not always consistent, i.e., 0 clients may be displayed even when there are clients connected to the server. The client information is obtained from a mount list maintained by the NFS server, which records the clients that have mounted shares exported by the server. This list is not preserved across server restarts, e.g. when the server is restarted because of a graph change caused by starting/stopping a volume or changing a volume option. Due to the stateless design of the NFS protocol, once a client mounts a share it can continue its operations without performing a new mount after the server restarts. As a result, the Gluster NFS server's mount list no longer contains the clients that connected earlier, leading to inconsistent client status for NFS servers.

(PS: Not sure if the documentation needs to be so long)
