Bug 1254866 - gstatus: Running gstatus with -b option gives error
Summary: gstatus: Running gstatus with -b option gives error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gstatus
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Sachidananda Urs
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1251815
 
Reported: 2015-08-19 06:48 UTC by Anil Shah
Modified: 2015-10-05 07:23 UTC
CC List: 4 users

Fixed In Version: gstatus-0.65-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-05 07:23:54 UTC
Embargoed:



Links
System ID: Red Hat Product Errata RHSA-2015:1845
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-10-05 11:06:22 UTC

Description Anil Shah 2015-08-19 06:48:45 UTC
Description of problem:

When the gstatus command is run with the -b argument, it fails with an error.

Version-Release number of selected component (if applicable):

[root@knightandday ~]# rpm -qa | grep glusterfs
glusterfs-api-3.7.1-11.el7rhgs.x86_64
glusterfs-cli-3.7.1-11.el7rhgs.x86_64
glusterfs-libs-3.7.1-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-11.el7rhgs.x86_64
glusterfs-server-3.7.1-11.el7rhgs.x86_64
glusterfs-rdma-3.7.1-11.el7rhgs.x86_64
glusterfs-3.7.1-11.el7rhgs.x86_64
glusterfs-fuse-3.7.1-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-11.el7rhgs.x86_64

[root@knightandday ~]# gstatus --version
gstatus 0.64


How reproducible:

100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume
2. Mount the volume on clients over FUSE/NFS
3. Run gstatus -b and check the output (a scripted check is sketched below)
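
To make step 3 repeatable, the check can be scripted. This is only an illustrative sketch, not part of gstatus: it assumes gstatus is installed on the node and that the volume from steps 1-2 is created and mounted, and the helper name check_gstatus_backlog is made up for this example.

#!/usr/bin/env python3
# Illustrative reproduction check; not part of gstatus.
# Assumes gstatus is on PATH and the volume from steps 1-2 is up.
import subprocess

def check_gstatus_backlog():
    """Run 'gstatus -b' and report its exit status and output."""
    result = subprocess.run(["gstatus", "-b"],
                            capture_output=True, text=True)
    print("exit code:", result.returncode)
    if result.stdout:
        print(result.stdout, end="")
    if result.stderr:
        # On the affected build (gstatus 0.64) the self-heal error from
        # 'Actual results' is expected to appear here or on stdout.
        print(result.stderr, end="")
    return result.returncode == 0

if __name__ == "__main__":
    check_gstatus_backlog()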

Actual results:

[root@knightandday ~]# gstatus -b
 
[root@darkbeauty ~]# gstatus -b
 
updateSelfHeal.Unable to apply self heal stats due to knightandday:/rhs/brick1/b1 not matching existing brick objects, and can not continue.


Expected results:

The command should succeed without error and report the self-heal state.

Additional info:

Comment 3 Sachidananda Urs 2015-08-28 05:33:05 UTC
This was due to a stray try/except block.
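
For illustration only, here is a sketch of the failure mode such a stray handler creates; the names below (brick_index, apply_heal_stats) are hypothetical and do not come from the gstatus source:

# Hypothetical sketch of the bug pattern, not the actual gstatus code.
brick_index = {"knightandday:/rhs/brick1/b1": {"heal_backlog": 0}}

def apply_heal_stats(brick_name, backlog):
    try:
        # A key mismatch (short hostname vs FQDN, trailing slash, etc.)
        # raises KeyError here ...
        brick_index[brick_name]["heal_backlog"] = backlog
        return True
    except Exception:
        # ... but the stray catch-all handler discards the detail, so the
        # caller can only report the generic "not matching existing brick
        # objects ... can not continue" message seen in the description.
        return False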

After the fix:


[root@rhs-1 gstatus]# ./gstatus.py -ba -t 1200
 
     Product: RHGS vserver3.1    Capacity: 398.00 GiB(raw bricks)
      Status: HEALTHY                      993.00 MiB(raw used)
   Glusterfs: 3.7.1                        199.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4           Volumes:   1 Up
   Self Heal   :  4/  4                      0 Up(Degraded)
   Bricks      :  4/  4                      0 Up(Partial)
   Connections :  0/   0                     0 Down

Volume Information
        glustervol       UP - 4/4 bricks up - Distributed-Replicate
                         Capacity: (0% used) 497.00 MiB/199.00 GiB (used/total)
                         Snapshots: 0
                         Self Heal:  4/ 4   Heal backlog of 4005 files
                         Tasks Active: None
                         Protocols: glusterfs:on  NFS:off  SMB:off
                         Gluster Connectivty: 0 hosts, 0 tcp connections


Status Messages
  - Cluster is HEALTHY, all_bricks checks successful

Comment 4 Sachidananda Urs 2015-09-08 12:57:40 UTC
The patch below fixes the issue:

https://github.com/sachidanandaurs/gstatus/commit/4965c420b708e2b8f5e0458fa51d5f8e5ba363ac
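
For context only, this is the general shape of removing such a stray handler, continuing the hypothetical sketch from comment 3; it is an illustration, not the contents of the linked commit:

# Hypothetical "after" sketch; not taken from the linked commit.
brick_index = {"knightandday:/rhs/brick1/b1": {"heal_backlog": 0}}

def apply_heal_stats(brick_name, backlog):
    # No blanket try/except: an unknown brick is reported explicitly and
    # any other error propagates to where it can be diagnosed.
    if brick_name not in brick_index:
        raise KeyError("no brick object for %s" % brick_name)
    brick_index[brick_name]["heal_backlog"] = backlog
    return True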

Comment 5 Anil Shah 2015-09-10 10:33:07 UTC
Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64


[root@rhs-client46 ~]# gstatus -abt 140
 
     Product: RHGS Server v3.1   Capacity:   6.30 TiB(raw bricks)
      Status: HEALTHY(3)                   171.00 MiB(raw used)
   Glusterfs: 3.7.1                          2.70 TiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4		  Volumes:   0 Up
   Self Heal   :  4/  4		             1 Up(Degraded)
   Bricks      :  2/  4		             0 Up(Partial)
   Connections :  5/ 112                     0 Down

Volume Information
	testvol          UP(DEGRADED) - 2/4 bricks up - Distributed-Replicate
	                 Capacity: (0% used) 96.00 MiB/2.70 TiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  4/ 4   Heal backlog of 1238 files
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 5 hosts, 112 tcp connections


Status Messages
  - Cluster is HEALTHY
  - Brick 10.70.36.70:/rhs/brick1/b001 in volume 'testvol' is down/unavailable
  - Brick 10.70.36.46:/rhs/brick1/b003 in volume 'testvol' is down/unavailable
  - INFO -> Not all bricks are online, so capacity provided is NOT accurate

Comment 7 errata-xmlrpc 2015-10-05 07:23:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

