Bug 1254505 - gstatus: gstatus's connection field doesn't show the number of clients connected to the volume
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gstatus
Version: 3.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: urgent
Assigned To: Sachidananda Urs
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
Reported: 2015-08-18 05:42 EDT by Anil Shah
Modified: 2016-10-28 09:12 EDT
CC List: 2 users

See Also:
Fixed In Version: gstatus-0.65-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-28 09:12:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anil Shah 2015-08-18 05:42:32 EDT
Description of problem:

After mounting the volume on clients, the gstatus command does not show the number of clients connected to the volume.

Version-Release number of selected component (if applicable):

[root@localhost ~]# gstatus --version
gstatus 0.64

[root@localhost ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume.
2. Mount the volume on multiple clients as a FUSE or NFS mount.
3. Check the gstatus output, e.g. gstatus -a (a reproduction sketch follows).
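
A minimal reproduction sketch; the volume name, host names, brick paths, and mount points below are placeholders rather than the setup actually used in this report:

# Create a 6x2 distributed-replicate volume (12 bricks, replica pairs in order)
gluster volume create testvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2 server4:/bricks/b2 \
    server1:/bricks/b3 server2:/bricks/b3 server3:/bricks/b3 server4:/bricks/b3
gluster volume start testvol

# On the clients: one FUSE mount and one NFS (gluster NFSv3) mount
mount -t glusterfs server1:/testvol /mnt/testvol-fuse
mount -t nfs -o vers=3 server1:/testvol /mnt/testvol-nfs

# On a server node: the Connections field in the summary is the one in question
gstatus -a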

Actual results:

gstatus doesn't show the number of clients connected to the volume.

[root@knightandday ~]# gstatus -a
 
     Product: RHGS vserver3.1    Capacity: 119.00 GiB(raw bricks)
      Status: UNHEALTHY(13)                198.00 MiB(raw used)
   Glusterfs: 3.7.1                         50.00 GiB(usable from volumes)
  OverCommit: Yes               Snapshots:   1

   Nodes       :  2/  4		  Volumes:   0 Up
   Self Heal   :  2/  4		             0 Up(Degraded)
   Bricks      :  6/ 12		             1 Up(Partial)
   Connections :  0/   0                     0 Down
Expected results:

The gstatus Connections field should show the number of clients connected to the volume.

Additional info:
Comment 2 Sachidananda Urs 2015-08-28 07:54:40 EDT
The `Connections:' field in the output does not list the number of clients per
se, but the number of connections to the volume, be it NFS, self-heal daemon,
or client connections.

This can easily be verified by looking at the `gluster volume status all
clients' output, which is what the tool depends on to generate and
pretty-print its report.
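
As a sketch of that verification (the volume name is a placeholder), the raw connection list that gstatus parses can be inspected directly with:

gluster volume status all clients        # connections to every volume, listed per brick
gluster volume status testvol clients    # the same, restricted to a single volume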

With the current changes, the output will be displayed as:

Connections: num_of_unique_conn_per_node* / total_num_of_curr_conn_in_vol**

*  Number of unique connections per node, where two processes connected to the
   volume are reported only once (even though they use different ports)
** Total number of current connections to the volume at the moment (the sum of
   connections from all nodes)

I'm not quite happy with the fix. I have proposed to the author of the tool
that this field be removed from the output. This fix is interim until I get
a reply from him.

Example output:


[root@rhs-1 gstatus]# ./gstatus.py  -a

     Product: RHGS Server v3.1.1  Capacity: 398.00 GiB(raw bricks)
      Status: HEALTHY                        3.00 GiB(raw used)
   Glusterfs: 3.7.1                        199.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4           Volumes:   1 Up
   Self Heal   :  4/  4                      0 Up(Degraded)
   Bricks      :  4/  4                      0 Up(Partial)
   Connections :  6/  40                     0 Down
Comment 3 Sachidananda Urs 2015-09-08 09:12:18 EDT
The following patch fixes this bug:

https://github.com/sachidanandaurs/gstatus/commit/4965c420b708e2b8f5e0458fa51d5f8e5ba363ac
Comment 4 Anil Shah 2015-09-10 06:27:50 EDT
Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64.


[root@darkknight ~]# gstatus -a
 
     Product: Community          Capacity: 209.00 GiB(raw bricks)
      Status: HEALTHY                      330.00 MiB(raw used)
   Glusterfs: 3.7.1                        166.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4		  Volumes:   2 Up
   Self Heal   :  4/  4		             0 Up(Degraded)
   Bricks      : 10/ 10		             0 Up(Partial)
   Connections :  4/  64                     0 Down

Volume Information
	dist-vol         UP - 4/4 bricks up - Distribute
	                 Capacity: (0% used) 132.00 MiB/80.00 GiB (used/total)
	                 Snapshots: 0
	                 Self Heal: N/A
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 4 hosts, 16 tcp connections

	ecvol            UP - 6/6 bricks up - Disperse
	                 Capacity: (0% used) 132.00 MiB/86.00 GiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  6/ 6
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 4 hosts, 48 tcp connections
