+++ This bug was initially created as a clone of Bug #1045374 +++

Description of problem:
-------------------------
The following is from the output of the "gluster volume status --xml" command -

-------
<node>
  <node>
    <hostname>NFS Server</hostname>
    <path>localhost</path>
    <peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
    <status>1</status>
    <port>2049</port>
    <pid>2130</pid>
  </node>
-----

The XML tag <node> is nested as seen above.

Version-Release number of selected component (if applicable):
glusterfs 3.4.0.50rhs

How reproducible:
Always

Steps to Reproduce:
1. Run the "gluster volume status --xml" command for a distributed-replicate volume.

Actual results:
The <node> tag is nested as seen above.

Expected results:
The <node> tag should not be nested (see the example below).
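For illustration, the expected shape of the same entry without the spurious extra <node> would be a flat element per node (fields copied from the output above; the enclosing volume status elements are omitted):

<node>
  <hostname>NFS Server</hostname>
  <path>localhost</path>
  <peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
  <status>1</status>
  <port>2049</port>
  <pid>2130</pid>
</node>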
REVIEW: http://review.gluster.org/6571 (cli: Fix xml output for volume status) posted (#1) for review on master by Kaushal M (kaushal)
COMMIT: http://review.gluster.org/6571 committed in master by Vijay Bellur (vbellur)
------
commit 2ba42d07eb967472227eb0a93e4ca2cac7a197b5
Author: Kaushal M <kaushal>
Date:   Mon Dec 23 14:02:12 2013 +0530

cli: Fix xml output for volume status

The XML output for volume status was malformed when one of the nodes is
down, leading to outputs like

-------
<node>
  <node>
    <hostname>NFS Server</hostname>
    <path>localhost</path>
    <peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
    <status>1</status>
    <port>2049</port>
    <pid>2130</pid>
  </node>
-----

This was happening because we were starting the <node> element before
determining if node was present, and were not closing it or clearing it
when not finding the node in the dict.

To fix this, the <node> element is only started once a node has been
found in the dict.

Change-Id: I6b6205f14b27a69adb95d85db7b48999aa48d400
BUG: 1046020
Signed-off-by: Kaushal M <kaushal>
Reviewed-on: http://review.gluster.org/6571
Reviewed-by: Aravinda VK <avishwan>
Reviewed-by: Krishnan Parthasarathi <kparthas>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Vijay Bellur <vbellur>
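To make the fix concrete for anyone reading along, below is a minimal standalone sketch (NOT the actual glusterfs cli-xml-output code) of the pattern the commit message describes, written with libxml2's xmlTextWriter API: the per-node lookup happens first, and the <node> element is only started once the lookup succeeds. node_info_t, lookup_node() and the hard-coded values are invented for illustration.

/*
 * Sketch only: demonstrates "look up the node first, start <node> second".
 * Error checking is omitted for brevity.
 *
 * Build (assuming libxml2 is installed):
 *   gcc sketch.c -o sketch $(xml2-config --cflags --libs)
 */
#include <stdio.h>
#include <libxml/xmlwriter.h>

/* Hypothetical stand-in for the per-node data the CLI reads from its dict. */
typedef struct {
    const char *hostname;
    const char *path;
    const char *peerid;
    int         status;
    int         port;
    int         pid;
} node_info_t;

/* Simulated lookup: returns 0 on success, -1 when the node's keys are
 * missing (e.g. the node is down). */
static int
lookup_node (int index, node_info_t *info)
{
    if (index == 1)   /* pretend node 1 is down */
        return -1;

    info->hostname = "NFS Server";
    info->path     = "localhost";
    info->peerid   = "63ca3d2f-8c1f-4b84-b797-b4baddab81fb";
    info->status   = 1;
    info->port     = 2049;
    info->pid      = 2130;
    return 0;
}

int
main (void)
{
    xmlBufferPtr     buf    = xmlBufferCreate ();
    xmlTextWriterPtr writer = xmlNewTextWriterMemory (buf, 0);
    node_info_t      info;
    int              i;

    xmlTextWriterSetIndent (writer, 1);
    xmlTextWriterStartDocument (writer, NULL, "UTF-8", NULL);
    xmlTextWriterStartElement (writer, (const xmlChar *) "volStatus");

    for (i = 0; i < 3; i++) {
        /* The fix: check the lookup result BEFORE starting <node>.
         * Starting the element first and bailing out on a failed lookup
         * is what produced the nested/unclosed <node> in this bug. */
        if (lookup_node (i, &info) != 0)
            continue;

        xmlTextWriterStartElement (writer, (const xmlChar *) "node");
        xmlTextWriterWriteElement (writer, (const xmlChar *) "hostname",
                                   (const xmlChar *) info.hostname);
        xmlTextWriterWriteElement (writer, (const xmlChar *) "path",
                                   (const xmlChar *) info.path);
        xmlTextWriterWriteElement (writer, (const xmlChar *) "peerid",
                                   (const xmlChar *) info.peerid);
        xmlTextWriterWriteFormatElement (writer, (const xmlChar *) "status",
                                         "%d", info.status);
        xmlTextWriterWriteFormatElement (writer, (const xmlChar *) "port",
                                         "%d", info.port);
        xmlTextWriterWriteFormatElement (writer, (const xmlChar *) "pid",
                                         "%d", info.pid);
        xmlTextWriterEndElement (writer);   /* </node> */
    }

    xmlTextWriterEndElement (writer);       /* </volStatus> */
    xmlTextWriterEndDocument (writer);
    xmlFreeTextWriter (writer);

    printf ("%s", (const char *) xmlBufferContent (buf));
    xmlBufferFree (buf);
    return 0;
}

The key point is only the ordering: the broken code started <node> unconditionally and then bailed out when the dict lookup failed, without closing or clearing the element, which is exactly what produced the nested output shown in this bug.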
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify if this release solves the bug reported here. In case the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users