Description of problem:
When storage nodes are down in the cluster, the Status field shows an irrelevant error code against "Status: UNHEALTHY", and the code keeps changing after each node failure in the cluster.

Version-Release number of selected component (if applicable):
[root@darkknight ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64

[root@darkknight ~]# gstatus --version
gstatus 0.65

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distribute-replicate volume
2. Mount the volume on a client as a FUSE/NFS mount
3. Run gstatus -a
4. Bring down one of the storage nodes in the cluster
5. Run gstatus -a
6. Bring down another storage node in the cluster
7. Run gstatus -a

Actual results:
The Status field shows an irrelevant error code, which changes after each node failure:

[root@darkknight ~]# gstatus -a

     Product: Community          Capacity: 112.00 GiB(raw bricks)
      Status: UNHEALTHY(5)                   6.00 GiB(raw used)
   Glusterfs: 3.7.1                         77.00 GiB(usable from volumes)

Expected results:

Additional info:
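For reference, the value that changes is the parenthesised number printed after the health state. Below is a minimal Python sketch of how a test script might read both fields out of the Status line shown above; the regex and variable names are illustrative assumptions, not part of gstatus itself.

import re

# "Status:" line copied from the gstatus -a output above.
status_line = "      Status: UNHEALTHY(5)"

# Assumed layout: "<STATE>(<NUMBER>)", e.g. "UNHEALTHY(5)".
match = re.search(r"Status:\s*([A-Z]+)\((\d+)\)", status_line)
if match:
    state = match.group(1)          # e.g. "UNHEALTHY"
    number = int(match.group(2))    # the value that changes per node failure
    print("state=%s number=%d" % (state, number))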
Anil, my bad, I was not aware of the exact details when we discussed this bug. It is not an error code that is printed in the `Status:' field; it is the number of messages. I'm closing this as not a bug.
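To illustrate the resolution, here is a minimal sketch assuming the parenthesised value is simply the count of status messages collected for the cluster; the function and variable names are illustrative, not the actual gstatus implementation.

# Sketch only: the number after the health state is the count of
# accumulated status messages, not an error code.
def format_status(state, messages):
    """Return a status string such as 'UNHEALTHY(2)'."""
    return "%s(%d)" % (state, len(messages))

messages = [
    "one of the nodes in the cluster is down",
    "cluster is degraded, one or more bricks are offline",
]
print(format_status("UNHEALTHY", messages))   # UNHEALTHY(2)

# Bringing down another node adds more messages, so the number grows,
# which is why it changes after each node failure.
messages.append("another node in the cluster is down")
print(format_status("UNHEALTHY", messages))   # UNHEALTHY(3)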