Bug 1028325
| Summary: | [Gluster-cli] Glusterfs rebalance xml output is null although gluster rebalance status returns rebalance status. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | anmol babu <anbabu> |
| Component: | glusterfs | Assignee: | Aravinda VK <avishwan> |
| Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | unspecified | CC: | avishwan, dpati, kaushal, psriniva, vbellur |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | Unspecified | OS: | Unspecified |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.47.1u2rhs-1.el6rhs | Doc Type: | Bug Fix |
| Doc Text: | Previously, when a node went down, the glusterFS CLI would fail to retrieve the rebalance status of all nodes in that cluster. With this update, the glusterd service collects information from the nodes that are online and ignores the nodes that are offline. As a result, the glusterFS CLI returns an XML output even if one or more nodes in a cluster are offline. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1036564 (view as bug list) | Environment: | |
| Last Closed: | 2014-02-25 08:02:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1015045, 1015394, 1027675, 1033035, 1036564 | | |
Description (anmol babu, 2013-11-08 08:34:14 UTC)
This issue is blocking two of the RHSC bugs... It needs to be fixed. Asking Aravinda to look at this, because Kaushal is already looking at a bunch of other issues. Occurs for remove-brick status also.

This was a regression introduced by a change done to introduce consistent ordering of rebalance status results. Prior to the change, the index was incremented sequentially on each rebalance status response received from a peer. This meant that there were no holes in the indices, and the cli output code was written to handle the indices in this manner. With the change, each peer is given a consistent index, which results in the indices having holes when one or more of the peers are down. Since the cli output code was not changed to match this, we have the issue observed in this bug. The difference seen between the normal and XML output is caused because the XML output code is written to display all the available information or nothing at all, whereas the normal cli output displays whatever information is available. I have informed Aravinda of this and he has agreed to make the required cli changes to get the outputs working again.

Aravinda, can you provide the patch URL?

@Satheesaran, the patch link is already updated in the External Trackers section. Let me know if you need the upstream patch URL.

Thanks Aravinda for comment 7.

Verified this bug with glusterfs-3.4.0.51rhs.el6rhs. Performed the following steps:

1. Created a trusted storage pool of 3 RHSS nodes, i.e. `gluster peer probe <host-ip>`
2. Created a plain distribute volume with 1 brick, i.e. `gluster volume create <vol-name> <server1>:<brick1>`
3. Started the volume, i.e. `gluster volume start <vol-name>`
4. Fuse mounted the volume on a RHEL 6.5 client with glusterfs-3.4.0.51rhs.el6_4, i.e. `mount.glusterfs <RHSS-IP>:<vol-name> <mount-point>`
5. Wrote 200 files of about 41 MB each on the mount point, i.e. `for i in {1..200}; do dd if=/dev/urandom of=file$i bs=4k count=10000; done`
6. Added 2 more bricks to the volume (one brick per RHSS node), i.e. `gluster volume add-brick <vol-name> <server2>:<brick2> <server3>:<brick3>`
7. Started rebalance on the volume, i.e. `gluster volume rebalance <vol-name> start`
8. While step 7 was in progress, stopped glusterd on the third RHSS node, i.e. `service glusterd stop`
9. Tried to get the XML dump of gluster volume status from the other nodes, where glusterd was up, i.e. `gluster volume status all --xml`

The XML dump was successful even when glusterd was down on a node.

Can you please verify the doc text for technical accuracy?

Doc Text looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
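The index-hole regression described in the analysis above can be sketched in a few lines. This is an illustrative Python model, not the actual glusterd code (which is C); the function and variable names are hypothetical. The point is that once each peer keeps a consistent index, an offline peer leaves a gap, and output code that assumes a contiguous `0..n-1` range breaks, while output code that skips missing indices keeps working.

```python
# Illustrative sketch of the indexing change (hypothetical names,
# not the glusterd sources).

def render_status_old(responses):
    """Pre-change: indices were assigned sequentially per response,
    so the range was always contiguous and safe to walk."""
    return [f"node-{i}: {responses[i]}" for i in range(len(responses))]

def render_status_new(responses_by_index, peer_count):
    """Post-change: each peer has a consistent index, so an offline
    peer leaves a hole. The output code must skip missing indices
    instead of assuming indices 0..peer_count-1 all exist."""
    lines = []
    for i in range(peer_count):
        if i not in responses_by_index:  # peer i is down; skip it
            continue
        lines.append(f"node-{i}: {responses_by_index[i]}")
    return lines

# Peer 1 is offline, so only indices 0 and 2 responded.
responses = {0: "completed", 2: "in progress"}
print(render_status_new(responses, peer_count=3))
# Indexing responses[1] unconditionally would raise KeyError --
# the analogue of the all-or-nothing XML path emitting nothing.
```

This mirrors the difference noted in the analysis: the plain CLI path printed whatever was available, while the XML path was all-or-nothing, so only the XML output went null when a peer was down.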
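For consumers of the fixed behaviour (such as the RHSC bugs this blocks), a verification like step 9 can be automated by parsing the `--xml` output and checking that per-node entries exist for the online peers. The snippet below is a minimal sketch; the sample XML is hand-written for illustration, and the element names (`cliOutput`, `node`, `nodeName`, `status`) may differ from the exact schema emitted by your gluster version, so check real output first.

```python
# Sketch: parse per-node rebalance status from CLI --xml output.
# SAMPLE is a hand-written illustration, not captured gluster output.
import xml.etree.ElementTree as ET

SAMPLE = """<cliOutput>
  <opRet>0</opRet>
  <volRebalance>
    <node><nodeName>server1</nodeName><status>completed</status></node>
    <node><nodeName>server2</nodeName><status>in progress</status></node>
  </volRebalance>
</cliOutput>"""

def rebalance_statuses(xml_text):
    """Return {node name: status} for every node present in the XML.
    Offline peers simply do not appear, so gaps are tolerated."""
    root = ET.fromstring(xml_text)
    return {n.findtext("nodeName"): n.findtext("status")
            for n in root.iter("node")}

print(rebalance_statuses(SAMPLE))
```

With the fix in place, the dictionary is non-empty even when one peer is down; before the fix, the whole XML document came back null and parsing would fail outright.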