Bug 1260966 - gstatus: Status field gives irrelevant error code when nodes are down in cluster
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gstatus
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
Reported: 2015-09-08 06:11 EDT by Anil Shah
Modified: 2015-09-09 11:56 EDT
CC: 1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-09-09 11:56:17 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anil Shah 2015-09-08 06:11:19 EDT
Description of problem:

When storage nodes are down in the cluster, the Status field shows an irrelevant error code next to Status: UNHEALTHY, and the code keeps changing after each node failure in the cluster.

Version-Release number of selected component (if applicable):

[root@darkknight ~]# rpm -qa  | grep glusterfs
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64

[root@darkknight ~]# gstatus --version
gstatus 0.65



How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume (a command sketch follows this list)
2. Mount the volume on a client via FUSE or NFS
3. Run gstatus -a
4. Bring down one of the storage nodes in the cluster
5. Run gstatus -a
6. Bring down another storage node in the cluster
7. Run gstatus -a
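
A minimal command sketch for these steps, assuming a four-node cluster; the hostnames (node1-node4), the volume name, and the brick paths are hypothetical:

# replica 2 across 4 bricks gives a 2x2 distributed-replicate volume
gluster volume create testvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/b3 node4:/bricks/b4
gluster volume start testvol

# on the client: FUSE mount (an NFS mount works similarly)
mount -t glusterfs node1:/testvol /mnt/testvol

# baseline check, then power off node4 and later node3,
# re-running gstatus -a after each shutdown
gstatus -a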

Actual results:

The Status field shows an irrelevant error code next to the status, and the code changes after each node failure.

[root@darkknight ~]# gstatus -a
 
     Product: Community          Capacity: 112.00 GiB(raw bricks)         
      Status: UNHEALTHY(5)                   6.00 GiB(raw used)
   Glusterfs: 3.7.1                         77.00 GiB(usable from volumes)

Expected results:



Additional info:
Comment 2 Sachidananda Urs 2015-09-09 11:56:17 EDT
Anil, my bad. I was not aware of the exact details when we discussed this bug.

It is not an error code that is printed in the `Status:' field; it is the number
of messages. For example, Status: UNHEALTHY(5) in the output above indicates
five status messages, not error code 5.

I'm closing this as not a bug.
