Bug 865396
| Summary: | `gluster volume status <volume_name>` command execution outputs the port of a brick which is online as "N/A" | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | spandura |
| Component: | glusterd | Assignee: | Anand Nekkunti <anekkunt> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | spandura |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.0 | CC: | amukherj, anekkunt, avasudev, bkunal, grajaiya, nsathyan, pkarampu, poelstra, racpatel, rhs-bugs, rwheeler, sasundar, spandura, vagarwal, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-04-16 14:31:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description by spandura, 2012-10-11 10:37:38 UTC
sosreport taken on brick1: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/sosreport-rhs-client6.865396-20121011160938-d6ab.tar.xz

sosreport taken on brick2: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/sosreport-qa@redhat.com.865396-20121011160839-435d.tar.xz

Happening because of bug 865693.

Not specific to 2.0+. Removing tag.

Steps to re-create the issue (a scripted sketch of these steps appears at the end of this report):

1) gluster volume create vol hostA:/brickA hostB:/brickB
2) gluster volume start vol
3) poweroff hostB
4) gluster volume set vol client-log-level DEBUG (run on hostA)
5) poweron hostB

Reason for the issue: when "gluster volume start" runs in step 2, each brick's store is updated with the port information that will be used for future restarts of that brick. Because hostB is powered off when step 4 happens, the version of the volume on hostA becomes greater than the version on hostB, so when glusterd comes up on hostB the complete store of the volume is overwritten to match the one on hostA. As a result, brickB's port information is updated with 0, and "gluster volume status" shows the brick as online but its port as N/A:

    [root@kernel-compile-2 ~]# gluster volume status
    Status of volume: vol
    Gluster process                                 Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick 192.168.122.1:/gfs/vol                    24009   Y       1075
    Brick 192.168.122.162:/gfs/vol                  N/A     Y       1024
    NFS Server on localhost                         38467   Y       1050
    NFS Server on 192.168.122.1                     38467   Y       1510
    [root@kernel-compile-2 ~]#

Not particular to RHEV-RHS, hence updating the summary.

This issue still occurs on "glusterfs 3.6.0.22 built on Jun 23 2014 10:33:07".

This issue still occurs with RHS 3.0.3 nightly builds. Recently tested the same with glusterfs-3.6.0.34-1.el6rhs. There is no functional loss and the glusterfsd (brick) processes are running, but "gluster volume status" shows the brick's port as N/A. This is a long-standing bug; can we get enough attention at least for future releases (RHS 3.1?).

When the node that went down and came back up is managed using RHSC or RHEVM, the brick process on that node, with its port marked as N/A, is marked as DOWN. This misleads the admin, since the brick processes are still running on that node. Raising the severity to HIGH for this reason.

It is not reproducible with the latest code. Please check it and close this bug if it is not reproducible. Moving the needinfo to Bipin Kunal.

Hi Bipin, can you check and let us know if the issue is seen in 3.0.4? Cheers, Arun

I was not able to reproduce the issue with 3.0.4 using the reproducer steps in comment 5. We can close this bug, marking it "Fixed in 3.0.4". Thanks, Bipin Kunal

Closing per the last comment.

*** Bug 971680 has been marked as a duplicate of this bug. ***
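
To make the reproduction steps above easier to rerun, here is a minimal sketch as a shell script. Everything in it that is not in the original report is an assumption: the peer names hostA and hostB (already in the trusted storage pool), the brick path /bricks/brick1, password-less SSH to hostB, and the use of stopping glusterd plus killing the brick process as a stand-in for powering the node off and on.

```bash
#!/bin/bash
# Sketch of the reproducer steps from the report above.
# Assumptions (not from the original report): hostA/hostB are already probed
# peers, bricks live under /bricks/brick1, this runs on hostA, and we have
# password-less SSH to hostB. Stopping glusterd and killing glusterfsd on
# hostB stands in for "poweroff hostB"; the real bug was hit with a full
# power cycle, so this may not reproduce it exactly.
set -e

VOL=vol

# Steps 1-2: create and start a two-brick volume
# (append "force" to the create command if the bricks sit on the root filesystem).
gluster volume create "$VOL" hostA:/bricks/brick1 hostB:/bricks/brick1
gluster volume start "$VOL"

# Step 3: take hostB's gluster processes down (stand-in for poweroff).
ssh hostB 'systemctl stop glusterd; pkill glusterfsd || true'

# Step 4: change a volume option on hostA while hostB is down,
# which bumps the volume version on hostA past the one stored on hostB.
gluster volume set "$VOL" client-log-level DEBUG

# Step 5: bring hostB's glusterd back (stand-in for poweron).
ssh hostB 'systemctl start glusterd'

# Give glusterd time to sync the volume store and restart the brick,
# then check whether the brick shows Online=Y with Port=N/A.
sleep 10
gluster volume status "$VOL"
```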
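
Because the practical impact is a misleading status display (RHSC/RHEVM marking a running brick as DOWN), a small check like the one below can flag the symptom on a node. It is only a sketch that parses the plain `gluster volume status` output shown in the report; the column layout (process, port, online, pid) is taken from that output and may differ in other glusterfs versions, so treat the field positions as an assumption.

```bash
#!/bin/bash
# Sketch: flag bricks that "gluster volume status" reports as online (Y)
# but with their port shown as N/A, which is the symptom in this bug.
# Field positions assume the layout shown in the report above:
# $1="Brick", $2="host:/path", $3=port, $4=online, $5=pid.

gluster volume status | awk '
    /^Brick / {
        if ($3 == "N/A" && $4 == "Y")
            printf "port N/A but brick online: %s (pid %s)\n", $2, $5
    }
'
```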