Bug 1254866
| Summary: | gstatus: Running gstatus with -b option gives error | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Anil Shah <ashah> |
| Component: | gstatus | Assignee: | Sachidananda Urs <surs> |
| Status: | CLOSED ERRATA | QA Contact: | Anil Shah <ashah> |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.1 | CC: | asrivast, byarlaga, surs, vagarwal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.1 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | gstatus-0.65-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-05 07:23:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1251815 | | |
Description (Anil Shah, 2015-08-19 06:48:45 UTC)
The error was caused by a stray try/except block; a minimal sketch of this failure mode follows the output below. After the fix:
    [root@rhs-1 gstatus]# ./gstatus.py -ba -t 1200

         Product: RHGS vserver3.1    Capacity: 398.00 GiB(raw bricks)
          Status: HEALTHY                      993.00 MiB(raw used)
       Glusterfs: 3.7.1                        199.00 GiB(usable from volumes)
      OverCommit: No               Snapshots: 0

       Nodes       :  4/  4       Volumes:  1 Up
       Self Heal   :  4/  4                 0 Up(Degraded)
       Bricks      :  4/  4                 0 Up(Partial)
       Connections :  0/  0                 0 Down

    Volume Information
        glustervol       UP - 4/4 bricks up - Distributed-Replicate
                         Capacity: (0% used) 497.00 MiB/199.00 GiB (used/total)
                         Snapshots: 0
                         Self Heal:  4/ 4    Heal backlog of 4005 files
                         Tasks Active: None
                         Protocols: glusterfs:on  NFS:off  SMB:off
                         Gluster Connectivty: 0 hosts, 0 tcp connections

    Status Messages
      - Cluster is HEALTHY, all_bricks checks successful
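For context, here is a minimal sketch of the failure mode named above, assuming the stray handler wrapped a brick-state query; all names are illustrative and not taken from the actual gstatus source:

```python
# Illustrative only: an over-broad try/except swallows the real error,
# leaves 'state' unassigned, and the -b code path fails later instead.
def collect_brick_state(cluster):
    try:
        state = cluster.query_bricks()  # may raise if a node is unreachable
    except Exception:
        pass                            # stray handler: the failure vanishes here
    return state                        # UnboundLocalError surfaces downstream

# Narrower handling keeps the original error visible instead of masking it:
def collect_brick_state_fixed(cluster):
    try:
        return cluster.query_bricks()
    except OSError as exc:
        raise RuntimeError("brick query failed: %s" % exc)
```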
The following patch fixes the issue: https://github.com/sachidanandaurs/gstatus/commit/4965c420b708e2b8f5e0458fa51d5f8e5ba363ac

Bug verified on build glusterfs-3.7.1-14.el7rhgs.x86_64:
    [root@rhs-client46 ~]# gstatus -abt 140

         Product: RHGS Server v3.1   Capacity:   6.30 TiB(raw bricks)
          Status: HEALTHY(3)                   171.00 MiB(raw used)
       Glusterfs: 3.7.1                          2.70 TiB(usable from volumes)
      OverCommit: No               Snapshots: 0

       Nodes       :  4/  4       Volumes:  0 Up
       Self Heal   :  4/  4                 1 Up(Degraded)
       Bricks      :  2/  4                 0 Up(Partial)
       Connections :  5/112                 0 Down

    Volume Information
        testvol          UP(DEGRADED) - 2/4 bricks up - Distributed-Replicate
                         Capacity: (0% used) 96.00 MiB/2.70 TiB (used/total)
                         Snapshots: 0
                         Self Heal:  4/ 4    Heal backlog of 1238 files
                         Tasks Active: None
                         Protocols: glusterfs:on  NFS:on  SMB:on
                         Gluster Connectivty: 5 hosts, 112 tcp connections

    Status Messages
      - Cluster is HEALTHY
      - Brick 10.70.36.70:/rhs/brick1/b001 in volume 'testvol' is down/unavailable
      - Brick 10.70.36.46:/rhs/brick1/b003 in volume 'testvol' is down/unavailable
      - INFO -> Not all bricks are online, so capacity provided is NOT accurate
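The final INFO message reflects how the figures are produced: capacity can only be aggregated over bricks that responded, so with bricks offline the totals no longer reflect the full volume. A minimal sketch of that kind of aggregation, assuming per-brick sizes and online flags (this is not the gstatus implementation, and the two online brick paths below are hypothetical):

```python
# Illustrative sketch: capacity is summed over online bricks only, so
# with two of four equally sized bricks down, the computed raw capacity
# covers only half the bricks, hence the "NOT accurate" caveat.
def raw_capacity_bytes(bricks):
    return sum(b["size_bytes"] for b in bricks if b["online"])

TIB = 2**40
bricks = [
    # Down bricks taken from the status messages above:
    {"path": "10.70.36.70:/rhs/brick1/b001", "size_bytes": 2 * TIB, "online": False},
    {"path": "10.70.36.46:/rhs/brick1/b003", "size_bytes": 2 * TIB, "online": False},
    # Hypothetical online bricks, for illustration only:
    {"path": "nodeC:/rhs/brick1/b002", "size_bytes": 2 * TIB, "online": True},
    {"path": "nodeD:/rhs/brick1/b004", "size_bytes": 2 * TIB, "online": True},
]
print(raw_capacity_bytes(bricks) / TIB, "TiB counted")  # only the two online bricks
```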
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html