Description of problem:
Running the gstatus command with the -b argument fails with an error.

Version-Release number of selected component (if applicable):
[root@knightandday ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64

[root@knightandday ~]# gstatus --version
gstatus 0.64

How reproducible:
100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume
2. Mount the volume on clients over FUSE/NFS
3. Run gstatus -b

Actual results:
[root@knightandday ~]# gstatus -b
[root@darkbeauty ~]# gstatus -b
updateSelfHeal.Unable to apply self heal stats due to knightandday:/rhs/brick1/b1 not matching existing brick objects, and can not continue.

Expected results:
The command should succeed without error and report the self-heal state.

Additional info:
This was due to a stray Try/Except block.

After the fix:

[root@rhs-1 gstatus]# ./gstatus.py -ba -t 1200

     Product: RHGS vserver3.1      Capacity: 398.00 GiB(raw bricks)
      Status: HEALTHY                        993.00 MiB(raw used)
   Glusterfs: 3.7.1                          199.00 GiB(usable from volumes)
  OverCommit: No                  Snapshots:   0

   Nodes       :  4/ 4      Volumes:  1 Up
   Self Heal   :  4/ 4                0 Up(Degraded)
   Bricks      :  4/ 4                0 Up(Partial)
   Connections :  0/ 0                0 Down

Volume Information
   glustervol       UP - 4/4 bricks up - Distributed-Replicate
                    Capacity: (0% used) 497.00 MiB/199.00 GiB (used/total)
                    Snapshots: 0
                    Self Heal:  4/ 4
                    Heal backlog of 4005 files
                    Tasks Active: None
                    Protocols: glusterfs:on  NFS:off  SMB:off
                    Gluster Connectivty: 0 hosts, 0 tcp connections

Status Messages
  - Cluster is HEALTHY, all_bricks checks successful
The following patch fixes the issue:
https://github.com/sachidanandaurs/gstatus/commit/4965c420b708e2b8f5e0458fa51d5f8e5ba363ac
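For illustration only, a minimal Python sketch of the failure mode described above (this is not the actual gstatus source; the class, method, and key names are hypothetical): an overly broad try/except around the brick lookup swallows the real exception (for example a KeyError caused by a hostname/path mismatch) and prints only the generic "not matching existing brick objects" message.

class Cluster:
    def __init__(self, bricks):
        # Hypothetical map of "host:/brick/path" -> brick state dict
        self.brick_lookup = bricks

    def update_self_heal(self, brick_path, heal_count):
        try:
            brick = self.brick_lookup[brick_path]  # raises KeyError on a mismatch
            brick["heal_backlog"] = heal_count
        except Exception:
            # A stray, catch-all handler like this hides the underlying KeyError
            # and emits a generic message instead of surfacing the real error.
            print("updateSelfHeal. Unable to apply self heal stats due to "
                  + brick_path + " not matching existing brick objects, "
                  "and can not continue.")


if __name__ == "__main__":
    cluster = Cluster({"knightandday:/rhs/brick1/b1": {"heal_backlog": 0}})
    # A key that does not match the stored form (e.g. FQDN vs. short hostname)
    # triggers the message above rather than a traceback.
    cluster.update_self_heal("knightandday.example.com:/rhs/brick1/b1", 4005)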
Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64.

[root@rhs-client46 ~]# gstatus -abt 140

     Product: RHGS Server v3.1     Capacity:   6.30 TiB(raw bricks)
      Status: HEALTHY(3)                     171.00 MiB(raw used)
   Glusterfs: 3.7.1                            2.70 TiB(usable from volumes)
  OverCommit: No                  Snapshots:   0

   Nodes       :  4/ 4      Volumes:  0 Up
   Self Heal   :  4/ 4                1 Up(Degraded)
   Bricks      :  2/ 4                0 Up(Partial)
   Connections :  5/ 112              0 Down

Volume Information
   testvol          UP(DEGRADED) - 2/4 bricks up - Distributed-Replicate
                    Capacity: (0% used) 96.00 MiB/2.70 TiB (used/total)
                    Snapshots: 0
                    Self Heal:  4/ 4
                    Heal backlog of 1238 files
                    Tasks Active: None
                    Protocols: glusterfs:on  NFS:on  SMB:on
                    Gluster Connectivty: 5 hosts, 112 tcp connections

Status Messages
  - Cluster is HEALTHY
  - Brick 10.70.36.70:/rhs/brick1/b001 in volume 'testvol' is down/unavailable
  - Brick 10.70.36.46:/rhs/brick1/b003 in volume 'testvol' is down/unavailable
  - INFO -> Not all bricks are online, so capacity provided is NOT accurate
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html