Bug 1254505 - gstatus: gstatus's connection field doesn't show the number of clients connected to the volume
Summary: gstatus: gstatus's connection field doesn't show the number of clients connected to the volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gstatus
Version: rhgs-3.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Sachidananda Urs
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-08-18 09:42 UTC by Anil Shah
Modified: 2016-10-28 13:12 UTC
2 users

Fixed In Version: gstatus-0.65-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-28 13:12:02 UTC
Embargoed:


Description Anil Shah 2015-08-18 09:42:32 UTC
Description of problem:

After mounting a volume on clients, the gstatus command doesn't show the number of clients connected to the volume.

Version-Release number of selected component (if applicable):

[root@localhost ~]# gstatus --version
gstatus 0.64

[root@localhost ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume.
2. Mount the volume on multiple clients over FUSE or NFS.
3. Check the gstatus output, e.g. gstatus -a (a sketch of these steps follows).
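
A minimal reproduction sketch of the steps above; the hostnames, brick paths, and volume name are placeholders, not taken from the original report:

# On a server node: create and start a 6x2 distributed-replicate volume
# (12 bricks, replica 2). This brace expansion places replica pairs on the
# same host, so `force' is needed; a real layout would interleave servers.
gluster volume create testvol replica 2 \
    server{1..4}:/bricks/brick{1..3}/testvol force
gluster volume start testvol

# On each client: mount over FUSE (or gluster NFS).
mount -t glusterfs server1:/testvol /mnt/testvol
# mount -t nfs -o vers=3 server1:/testvol /mnt/testvol

# Back on a server node: check the Connections field.
gstatus -a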

Actual results:

gstatus doesn't show the number of clients connected to the volume.

[root@knightandday ~]# gstatus -a
 
     Product: RHGS vserver3.1    Capacity: 119.00 GiB(raw bricks)
      Status: UNHEALTHY(13)                198.00 MiB(raw used)
   Glusterfs: 3.7.1                         50.00 GiB(usable from volumes)
  OverCommit: Yes               Snapshots:   1

   Nodes       :  2/  4		  Volumes:   0 Up
   Self Heal   :  2/  4		             0 Up(Degraded)
   Bricks      :  6/ 12		             1 Up(Partial)
   Connections :  0/   0                     0 Down

Expected results:

The gstatus Connections field should show the number of clients connected to the volume.

Additional info:

Comment 2 Sachidananda Urs 2015-08-28 11:54:40 UTC
The `Connections:' field in the output does not list the number of clients
per se, but the number of connections to the volume, be it NFS, the
self-heal daemon, client connections, etc.

This can be easily verified by looking at the
`gluster volume status all clients' output, which is what the tool depends
on to generate and pretty-print the output.

With the current changes, the output will be displayed as:

Connections: num_of_unique_conn_per_node* / total_num_of_curr_conn_in_vol**

*  Number of unique connections per node, where two processes connected to
   the volume are reported only once (even though they use different ports).
** Total number of current connections to the volume at the moment (sum of
   connections from all nodes).
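
As a rough illustration, both numbers can be approximated from the same
command the tool parses. The exact output format of
`gluster volume status <vol> clients' varies across glusterfs versions, so
the patterns below are assumptions, and dist-vol is just an example volume
name:

# Total current connections: sum the per-brick "Clients connected" counts.
gluster volume status dist-vol clients \
    | awk '/Clients connected/ {sum += $NF} END {print sum}'

# Unique connections: strip the :port suffix from the client list so two
# processes on the same host count once (assumes clients listed as IP:port).
gluster volume status dist-vol clients \
    | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}:[0-9]+' \
    | cut -d: -f1 | sort -u | wc -l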

I'm not quite happy with the fix. I have proposed to the author of the tool
that this field be removed from the output. This fix is interim until I get
a reply from him.

Example output:


[root@rhs-1 gstatus]# ./gstatus.py  -a

     Product: RHGS Server v3.1.1  Capacity: 398.00 GiB(raw bricks)
      Status: HEALTHY                        3.00 GiB(raw used)
   Glusterfs: 3.7.1                        199.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4           Volumes:   1 Up
   Self Heal   :  4/  4                      0 Up(Degraded)
   Bricks      :  4/  4                      0 Up(Partial)
   Connections :  6/  40                     0 Down

Comment 3 Sachidananda Urs 2015-09-08 13:12:18 UTC
The following patch fixes this bug:

https://github.com/sachidanandaurs/gstatus/commit/4965c420b708e2b8f5e0458fa51d5f8e5ba363ac

Comment 4 Anil Shah 2015-09-10 10:27:50 UTC
Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64


[root@darkknight ~]# gstatus -a
 
     Product: Community          Capacity: 209.00 GiB(raw bricks)
      Status: HEALTHY                      330.00 MiB(raw used)
   Glusterfs: 3.7.1                        166.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4		  Volumes:   2 Up
   Self Heal   :  4/  4		             0 Up(Degraded)
   Bricks      : 10/ 10		             0 Up(Partial)
   Connections :  4/  64                     0 Down

Volume Information
	dist-vol         UP - 4/4 bricks up - Distribute
	                 Capacity: (0% used) 132.00 MiB/80.00 GiB (used/total)
	                 Snapshots: 0
	                 Self Heal: N/A
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 4 hosts, 16 tcp connections

	ecvol            UP - 6/6 bricks up - Disperse
	                 Capacity: (0% used) 132.00 MiB/86.00 GiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  6/ 6
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 4 hosts, 48 tcp connections

