Bug 1028325 - [Gluster-cli] Glusterfs rebalance xml output is null although gluster rebalance status returns rebalance status.
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Hardware: Unspecified OS: Unspecified
Priority: high Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Aravinda VK
Keywords: ZStream
Depends On:
Blocks: 1015045 1015394 1027675 1033035 1036564
Reported: 2013-11-08 03:34 EST by anmol babu
Modified: 2015-05-13 12:30 EDT (History)
5 users

See Also:
Fixed In Version: glusterfs-
Doc Type: Bug Fix
Doc Text:
Previously, when a node went down, the glusterFS CLI would fail to retrieve the rebalance status of all nodes in that cluster. With this update, the glusterd service collects information from the nodes that are online and ignores the nodes that are offline. As a result, the glusterFS CLI returns XML output even if one or more nodes in a cluster are offline.
Story Points: ---
Clone Of:
: 1036564 (view as bug list)
Last Closed: 2014-02-25 03:02:20 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description anmol babu 2013-11-08 03:34:14 EST
Description of problem:
Glusterfs rebalance xml output is null although gluster rebalance status returns rebalance status. 

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a distribute volume with at least 1 brick each from 4 hosts.
2. Start rebalance on the volume.
3. Before rebalance completes, stop glusterd on one of the hosts.
4. Check the rebalance status XML output returned by the gluster CLI on the other hosts.

Actual results:
Displays/Returns nothing.

Expected results:
Should display/return the XML output

Additional info:
This is required by vdsm and hence rhsc.
Comment 2 Dusmant 2013-11-21 01:36:47 EST
This issue is blocking two of the RHSC bugs... It needs to be fixed.
Comment 3 Dusmant 2013-11-21 04:19:04 EST
Asking Aravinda to look at this, because Kaushal is already looking at a bunch of other issues.
Comment 4 anmol babu 2013-11-27 02:21:11 EST
Occurs for remove-brick status also.
Comment 5 Kaushal 2013-11-28 01:28:29 EST
This was a regression introduced by a change made to enforce consistent ordering of rebalance status results.

Prior to the change, the index was incremented sequentially on each rebalance status response received from a peer. This meant that there were no holes in the indices, and the CLI output code was written to handle the indices in this manner.
With the change, each peer is given a consistent index, which results in holes in the indices when one or more of the peers are down. Since the CLI output code was not updated to match this, we get the issue observed in this bug. The difference between the normal and XML output arises because the XML output code displays either all of the available information or nothing at all, whereas the normal CLI output displays whatever information is available.
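The indexing change described above can be sketched as follows. This is a Python illustration of the logic, not the actual glusterd C code; all names are hypothetical:

```python
# Illustration (hypothetical names) of why the output broke when a peer is down.

# Old scheme: indices assigned in arrival order, so they are always
# contiguous 1..n and the output code can safely assume no holes.
def collect_sequential(responses):
    return {i + 1: status for i, status in enumerate(responses)}

# New scheme: each peer keeps a fixed index for consistent ordering,
# so an offline peer leaves a hole in the index space.
def collect_fixed(peer_index, statuses):
    return {peer_index[peer]: status for peer, status in statuses.items()}

# Output code written for the old scheme breaks on a hole:
def render_assuming_contiguous(results, peer_count):
    # KeyError on the missing index -> all-or-nothing XML code emits nothing
    return [results[i] for i in range(1, peer_count + 1)]

# The fix: iterate only over the indices that are actually present.
def render_skipping_holes(results):
    return [results[i] for i in sorted(results)]

peer_index = {"node1": 1, "node2": 2, "node3": 3}
statuses = {"node1": "completed", "node3": "in progress"}  # node2 is down
results = collect_fixed(peer_index, statuses)
print(render_skipping_holes(results))  # ['completed', 'in progress']
```

This mirrors why the plain CLI output (which tolerated partial data) kept working while the all-or-nothing XML path returned nothing.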

I have informed Aravinda of this and he has agreed to make the required cli changes to get the outputs working again.
Comment 6 SATHEESARAN 2013-12-19 23:42:20 EST

Can you provide patch url ?
Comment 7 Aravinda VK 2013-12-19 23:45:48 EST
@Satheesaran, the patch link is already updated in the External Trackers section. Let me know if you need the upstream patch URL.
Comment 8 SATHEESARAN 2013-12-20 08:07:59 EST
Thanks Aravinda for comment 7

Verified this bug with glusterfs-

Performed the following steps:

1. Created trusted storage pool of 3 RHSS Nodes
(i.e) gluster peer probe <host-ip>

2. Created a plain distribute volume with 1 brick
(i.e) gluster volume create <vol-name> <server1>:<brick1>

3. Started the volume
(i.e) gluster volume start <vol-name>

4. Fuse mounted the volume on a RHEL 6.5 client with glusterfs-
(i.e) mount.glusterfs <RHSS-IP>:<vol-name> <mount-point>

5. Wrote 200 files of about 41 MB each on the mount point
(i.e) for i in {1..200}; do dd if=/dev/urandom of=file$i bs=4k count=10000;done

6. Add 2 more bricks to the volume ( one brick per RHSS Node )
(i.e) gluster volume add-brick <vol-name> <server2>:<brick2> <server3>:<brick3>

7. Start rebalance on the volume
(i.e) gluster volume rebalance <vol-name> start

8. While step 7 is in progress, stop glusterd on the third RHSS Node
(i.e) service glusterd stop

9. Dumped the XML output of gluster volume status from the other nodes, where glusterd is UP
(i.e) gluster volume status all --xml

The XML dump was successful even when glusterd was down on one node.
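Since vdsm (and hence RHSC) consumes this XML, a consumer can guard against the empty-output symptom reported here. A minimal sketch; the sample document and element names below are illustrative assumptions, not a guaranteed schema:

```python
# Sketch: how a consumer (e.g. vdsm) might detect the empty-output symptom
# of this bug. Element names in the sample are assumed for illustration.
import xml.etree.ElementTree as ET

def parse_cli_xml(output):
    """Return the parsed root, or None when the CLI emitted nothing (the bug)."""
    if not output.strip():
        return None  # the reported symptom: no XML document at all
    return ET.fromstring(output)

sample = """<cliOutput>
  <opRet>0</opRet>
  <volRebalance>
    <aggregate><status>in progress</status></aggregate>
  </volRebalance>
</cliOutput>"""

root = parse_cli_xml(sample)
print(root.findtext("volRebalance/aggregate/status"))  # in progress

print(parse_cli_xml(""))  # None -- what callers saw before the fix
```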
Comment 9 Pavithra 2014-01-07 04:38:28 EST
Can you please verify the doc text for technical accuracy?
Comment 10 Aravinda VK 2014-01-07 04:54:59 EST
Doc Text looks good to me.
Comment 12 errata-xmlrpc 2014-02-25 03:02:20 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

