Description of problem:
After mounting the volume on clients, the gstatus command does not show the number of clients connected to the volumes.

Version-Release number of selected component (if applicable):
[root@localhost ~]# gstatus --version
gstatus 0.64
[root@localhost ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume.
2. Mount the volume on multiple clients as a FUSE or NFS mount.
3. Check the gstatus output, e.g. gstatus -a.

Actual results:
gstatus doesn't show the number of clients the volume is connected to.

[root@knightandday ~]# gstatus -a

     Product: RHGS Server v3.1   Capacity: 119.00 GiB(raw bricks)
      Status: UNHEALTHY(13)                198.00 MiB(raw used)
   Glusterfs: 3.7.1                         50.00 GiB(usable from volumes)
  OverCommit: Yes               Snapshots: 1

   Nodes       :  2/  4           Volumes:  0 Up
   Self Heal   :  2/  4                     0 Up(Degraded)
   Bricks      :  6/ 12                     1 Up(Partial)
   Connections :  0/  0                     0 Down

Expected results:
The gstatus Connections field should show the number of clients the volume is connected to.

Additional info:
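For reference, a reproduction along these lines should work. The hostnames server1..server4, the brick paths, and the volume name testvol below are placeholders, not from the original report; any 6x2 layout reproduces the issue.

  # Assumed hostnames, brick paths, and volume name.
  gluster volume create testvol replica 2 \
      server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1 \
      server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2 server4:/bricks/b2 \
      server1:/bricks/b3 server2:/bricks/b3 server3:/bricks/b3 server4:/bricks/b3
  gluster volume start testvol

  # On one or more clients (use mount -t nfs instead for an NFS mount):
  mount -t glusterfs server1:/testvol /mnt/testvol

  # Back on a server node:
  gstatus -a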
The `Connections:' field mentioned in the output does not list the number of clients per se, but the number of connections to the volume: NFS, the self-heal daemon, client connections, and so on. This can easily be verified by looking at the `gluster volume status all clients' output, which is what the tool depends on to generate and pretty-print its output.

With the current changes, the output will be displayed as:

  Connections: num_of_unique_conn_per_node* / total_num_of_curr_conn_in_vol**

 *  Number of unique connections per node, where two processes connected to the volume are reported only once (even though they use different ports).
 ** Total number of current connections to the volume at the moment (sum of connections from all nodes).

I'm not quite happy with the fix. I have proposed to the author of the tool that this field be removed from the output. This fix is interim until I get a reply from him.

Example output:

[root@rhs-1 gstatus]# ./gstatus.py -a

     Product: RHGS Server v3.1.1 Capacity: 398.00 GiB(raw bricks)
      Status: HEALTHY                        3.00 GiB(raw used)
   Glusterfs: 3.7.1                        199.00 GiB(usable from volumes)
  OverCommit: No                Snapshots: 0

   Nodes       :  4/  4           Volumes:  1 Up
   Self Heal   :  4/  4                     0 Up(Degraded)
   Bricks      :  4/  4                     0 Up(Partial)
   Connections :  6/ 40                     0 Down
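The raw data the tool aggregates can be inspected directly with the same CLI command it parses (volume name testvol assumed; the awk sum relies on the per-brick "Clients connected : N" lines that this command prints):

  # Raw per-brick client list that gstatus parses:
  gluster volume status testvol clients

  # Sum the per-brick counts; this should match the denominator
  # (total_num_of_curr_conn_in_vol) reported by gstatus:
  gluster volume status testvol clients | \
      awk '/Clients connected/ {total += $NF} END {print total}'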
The following patch fixes this bug: https://github.com/sachidanandaurs/gstatus/commit/4965c420b708e2b8f5e0458fa51d5f8e5ba363ac
Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64.

[root@darkknight ~]# gstatus -a

     Product: Community          Capacity: 209.00 GiB(raw bricks)
      Status: HEALTHY                      330.00 MiB(raw used)
   Glusterfs: 3.7.1                        166.00 GiB(usable from volumes)
  OverCommit: No                Snapshots: 0

   Nodes       :  4/  4           Volumes:  2 Up
   Self Heal   :  4/  4                     0 Up(Degraded)
   Bricks      : 10/ 10                     0 Up(Partial)
   Connections :  4/ 64                     0 Down

Volume Information
  dist-vol       UP - 4/4 bricks up - Distribute
                 Capacity: (0% used) 132.00 MiB/80.00 GiB (used/total)
                 Snapshots: 0
                 Self Heal: N/A
                 Tasks Active: None
                 Protocols: glusterfs:on  NFS:on  SMB:on
                 Gluster Connectivty: 4 hosts, 16 tcp connections

  ecvol          UP - 6/6 bricks up - Disperse
                 Capacity: (0% used) 132.00 MiB/86.00 GiB (used/total)
                 Snapshots: 0
                 Self Heal: 6/ 6
                 Tasks Active: None
                 Protocols: glusterfs:on  NFS:on  SMB:on
                 Gluster Connectivty: 4 hosts, 48 tcp connections
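As a sanity check, the two per-volume counts add up to the Connections denominator (16 + 48 = 64 tcp connections). A rough cross-check against the CLI, assuming the "Clients connected : N" lines in its output reflect the same counts gstatus aggregates:

  # Sum per-brick connection counts over both volumes; the total
  # should line up with the 64 shown in the Connections field:
  for vol in dist-vol ecvol; do
      gluster volume status $vol clients
  done | awk '/Clients connected/ {total += $NF} END {print total}'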