Description of problem:
When all the nodes in the cluster are up, the gstatus command's OverCommit field shows No, but when any node in the cluster is down, the OverCommit field shows Yes.

Version-Release number of selected component (if applicable):
[root@localhost ~]# gstatus --version
gstatus 0.64
[root@localhost ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume.
2. Mount the volume on a client over NFS or FUSE.
3. Check gstatus (e.g. gstatus -a); the OverCommit field shows No.
4. Bring down one of the storage nodes.
5. Check gstatus again (e.g. gstatus -a); see the reproduction sketch under Additional info below.

Actual results:
[root@localhost ~]# gstatus -a

     Product: RHGS vserver3.1      Capacity: 239.00 GiB(raw bricks)
      Status: HEALTHY                        396.00 MiB(raw used)
   Glusterfs: 3.7.1                          119.00 GiB(usable from volumes)
  OverCommit: Yes                 Snapshots: 0

   Nodes       :  4/ 4            Volumes:  1 Up
   Self Heal   :  4/ 4                      0 Up(Degraded)
   Bricks      : 12/ 12                     0 Up(Partial)
   Connections :  0/ 0                      0 Down

Volume Information
        testvol          UP - 12/12 bricks up - Distributed-Replicate
                         Capacity: (0% used) 198.00 MiB/119.00 GiB (used/total)
                         Snapshots: 0
                         Self Heal: 12/12
                         Tasks Active: None
                         Protocols: glusterfs:on  NFS:on  SMB:on
                         Gluster Connectivty: 0 hosts, 0 tcp connections

Status Messages
  - Cluster is HEALTHY, all_bricks checks successful

Expected results:
When a storage node is down, the OverCommit field should still report the correct overcommit status.

Additional info:
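For reference, a minimal reproduction sketch along the lines of the steps above. The host names (server1..server4), brick paths (/rhs/brick1/bNN) and mount point are placeholders rather than the actual test setup; only the volume name testvol and the gstatus invocation are taken from this report.

# 1-2. Create and start a 6x2 distributed-replicate volume, then mount it
#      (replica pairs are formed from consecutive bricks in the list)
gluster volume create testvol replica 2 \
    server1:/rhs/brick1/b01 server2:/rhs/brick1/b02 \
    server3:/rhs/brick1/b03 server4:/rhs/brick1/b04 \
    server1:/rhs/brick1/b05 server2:/rhs/brick1/b06 \
    server3:/rhs/brick1/b07 server4:/rhs/brick1/b08 \
    server1:/rhs/brick1/b09 server2:/rhs/brick1/b10 \
    server3:/rhs/brick1/b11 server4:/rhs/brick1/b12
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol        # FUSE mount
# mount -t nfs -o vers=3 server1:/testvol /mnt/testvol  # or gluster NFS

# 3. With all nodes up, OverCommit is expected to read No
gstatus -a

# 4-5. Power off (or stop the gluster services on) one storage node, then
#      re-check; the bug reported here is OverCommit flipping to Yes
gstatus -a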
This, I believe, is fixed in git by Paul. I am not able to reproduce this. Please check with the latest build and re-open if you still see the issue.
[root@rhs-client46 yum.repos.d]# gstatus -a

     Product: RHGS Server v3.1    Capacity:   4.50 TiB(raw bricks)
      Status: UNHEALTHY(3)                  100.00 MiB(raw used)
   Glusterfs: 3.7.1                           3.60 TiB(usable from volumes)
  OverCommit: No                  Snapshots: 0

   Nodes       :  3/ 4            Volumes:  0 Up
   Self Heal   :  3/ 4                      1 Up(Degraded)
   Bricks      :  3/ 4                      0 Up(Partial)
   Connections :  4/ 24                     0 Down

Volume Information
        vol0             UP(DEGRADED) - 3/4 bricks up - Distributed-Replicate
                         Capacity: (0% used) 67.00 MiB/3.60 TiB (used/total)
                         Snapshots: 0
                         Self Heal:  3/ 4
                         Tasks Active: None
                         Protocols: glusterfs:on  NFS:on  SMB:on
                         Gluster Connectivty: 4 hosts, 24 tcp connections

Status Messages
  - Cluster is UNHEALTHY
  - One of the nodes in the cluster is down
  - Brick 10.70.36.71:/rhs/brick1/b02 in volume 'vol0' is down/unavailable
  - INFO -> Not all bricks are online, so capacity provided is NOT accurate

[root@rhs-client46 yum.repos.d]# gstatus -b

     Product: RHGS Server v3.1    Capacity:   4.50 TiB(raw bricks)
      Status: UNHEALTHY(3)                  100.00 MiB(raw used)
   Glusterfs: 3.7.1                           3.60 TiB(usable from volumes)
  OverCommit: No                  Snapshots: 0

Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64

[root@rhs-client46 yum.repos.d]# gstatus --version
gstatus 0.65
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1845.html