Bug 1046020 - 'gluster volume status --xml' has issues
Summary: 'gluster volume status --xml' has issues
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1045374 1061211 1117241
 
Reported: 2013-12-23 08:43 UTC by Kaushal
Modified: 2014-11-11 08:26 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.6.0beta1
Clone Of: 1045374
Environment:
Last Closed: 2014-11-11 08:26:03 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaushal 2013-12-23 08:43:46 UTC
+++ This bug was initially created as a clone of Bug #1045374 +++

Description of problem:
-------------------------

The following is from the output of the "gluster volume status --xml" command:

-------
          <node>
             <node>
               <hostname>NFS Server</hostname>
               <path>localhost</path>
               <peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
               <status>1</status>
               <port>2049</port>
               <pid>2130</pid>
             </node>
-----

The XML tag <node> is nested inside another <node> tag, as seen above.

Version-Release number of selected component (if applicable):
glusterfs 3.4.0.50rhs

How reproducible:
Always

Steps to Reproduce:
1. Run the "gluster volume status --xml" command for a distributed-replicate volume.

Actual results:
The <node> tag is nested as seen above.

Expected results:
The <node> tag should not be nested.
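
The nesting can also be checked mechanically rather than by eye. The following is a minimal standalone checker, a sketch that is not part of GlusterFS (the file name and build line are assumptions): it parses saved "gluster volume status --xml" output with libxml2 and exits non-zero if any <node> element sits directly inside another <node>.

-------
/* check_nested_node.c -- hypothetical standalone checker, not part of
 * GlusterFS.  Parses saved `gluster volume status --xml` output and reports
 * any <node> element nested directly inside another <node>.
 *
 * Build (assumed): gcc check_nested_node.c $(xml2-config --cflags --libs)
 */
#include <stdio.h>
#include <libxml/parser.h>
#include <libxml/tree.h>

static int
find_nested_node (xmlNode *elem)
{
        int      found = 0;
        xmlNode *cur   = NULL;

        for (cur = elem; cur; cur = cur->next) {
                if (cur->type == XML_ELEMENT_NODE &&
                    xmlStrEqual (cur->name, (const xmlChar *) "node") &&
                    cur->parent && cur->parent->type == XML_ELEMENT_NODE &&
                    xmlStrEqual (cur->parent->name, (const xmlChar *) "node")) {
                        fprintf (stderr,
                                 "malformed: <node> nested in <node> at line %ld\n",
                                 xmlGetLineNo (cur));
                        found = 1;
                }
                if (find_nested_node (cur->children))
                        found = 1;
        }
        return found;
}

int
main (int argc, char *argv[])
{
        xmlDoc *doc = NULL;
        int     rc  = 0;

        if (argc != 2) {
                fprintf (stderr, "usage: %s <status-xml-file>\n", argv[0]);
                return 2;
        }
        doc = xmlReadFile (argv[1], NULL, 0);
        if (!doc) {
                fprintf (stderr, "failed to parse %s\n", argv[1]);
                return 2;
        }
        rc = find_nested_node (xmlDocGetRootElement (doc));
        xmlFreeDoc (doc);
        xmlCleanupParser ();
        return rc;   /* non-zero when the malformed nesting is present */
}
-----

After saving the command output to a file, running the checker on it should report the nested element on affected builds and exit silently with status 0 once the fix is in place.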

Comment 1 Anand Avati 2013-12-23 09:18:26 UTC
REVIEW: http://review.gluster.org/6571 (cli: Fix xml output for volume status) posted (#1) for review on master by Kaushal M (kaushal)

Comment 2 Anand Avati 2013-12-26 05:34:13 UTC
COMMIT: http://review.gluster.org/6571 committed in master by Vijay Bellur (vbellur) 
------
commit 2ba42d07eb967472227eb0a93e4ca2cac7a197b5
Author: Kaushal M <kaushal>
Date:   Mon Dec 23 14:02:12 2013 +0530

    cli: Fix xml output for volume status
    
    The XML output for volume status was malformed when one of the nodes is
    down, leading to outputs like
    -------
              <node>
                 <node>
                   <hostname>NFS Server</hostname>
                   <path>localhost</path>
                   <peerid>63ca3d2f-8c1f-4b84-b797-b4baddab81fb</peerid>
                   <status>1</status>
                   <port>2049</port>
                   <pid>2130</pid>
                 </node>
    -----
    
    This was happening because we were starting the <node> element before
    determining if node was present, and were not closing it or clearing it
    when not finding the node in the dict.
    
    To fix this, the <node> element is only started once a node has been
    found in the dict.
    
    Change-Id: I6b6205f14b27a69adb95d85db7b48999aa48d400
    BUG: 1046020
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/6571
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
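
To make the cause and the fix concrete, below is a minimal self-contained sketch, deliberately simplified away from the real cli-xml-output.c, using the same libxml2 xmlTextWriter API the CLI uses. lookup_hostname() is a hypothetical stand-in for the dict_get_str() lookups in the actual code; only the ordering of the element start relative to the lookup mirrors the change described in the commit message above.

-------
/* sketch.c -- minimal sketch (NOT the real cli-xml-output.c) of the
 * ordering problem the commit describes.
 *
 * Build (assumed): gcc sketch.c $(xml2-config --cflags --libs)
 */
#include <stdio.h>
#include <string.h>
#include <libxml/tree.h>
#include <libxml/xmlwriter.h>

/* Hypothetical stand-in for the dict lookup: "nfs" exists, anything else
 * (e.g. a service on a downed peer) is missing. */
static int
lookup_hostname (const char *key, const char **hostname)
{
        if (strcmp (key, "nfs") == 0) {
                *hostname = "NFS Server";
                return 0;
        }
        return -1;
}

/* Buggy order: <node> is opened before the lookup, and is neither closed
 * nor discarded when the lookup fails, so the next node written ends up
 * nested inside the dangling <node>. */
static int
write_node_buggy (xmlTextWriterPtr writer, const char *key)
{
        const char *hostname = NULL;

        if (xmlTextWriterStartElement (writer, (const xmlChar *) "node") < 0)
                return -1;
        if (lookup_hostname (key, &hostname) != 0)
                return -1;                  /* <node> left open on this path */
        xmlTextWriterWriteElement (writer, (const xmlChar *) "hostname",
                                   (const xmlChar *) hostname);
        return xmlTextWriterEndElement (writer);
}

/* Fixed order (what the patch does): only start <node> once the node has
 * been found, so a failed lookup leaves the document untouched. */
static int
write_node_fixed (xmlTextWriterPtr writer, const char *key)
{
        const char *hostname = NULL;

        if (lookup_hostname (key, &hostname) != 0)
                return -1;                  /* nothing written, nothing dangling */
        if (xmlTextWriterStartElement (writer, (const xmlChar *) "node") < 0)
                return -1;
        xmlTextWriterWriteElement (writer, (const xmlChar *) "hostname",
                                   (const xmlChar *) hostname);
        return xmlTextWriterEndElement (writer);
}

static void
demo (int (*write_node) (xmlTextWriterPtr, const char *), const char *label)
{
        xmlBufferPtr     buf    = xmlBufferCreate ();
        xmlTextWriterPtr writer = xmlNewTextWriterMemory (buf, 0);

        xmlTextWriterStartDocument (writer, NULL, NULL, NULL);
        xmlTextWriterStartElement (writer, (const xmlChar *) "volStatus");
        write_node (writer, "missing");     /* e.g. NFS server of a downed peer */
        write_node (writer, "nfs");
        xmlTextWriterEndDocument (writer);  /* closes any elements still open */
        xmlFreeTextWriter (writer);

        printf ("%s: %s\n", label, (const char *) buf->content);
        xmlBufferFree (buf);
}

int
main (void)
{
        demo (write_node_buggy, "buggy");   /* prints nested <node><node>... */
        demo (write_node_fixed, "fixed");   /* prints a single flat <node>   */
        return 0;
}
-----

Compiling and running the sketch prints the nested <node><node> structure for the buggy ordering and a flat <node> for the fixed ordering.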

Comment 3 Niels de Vos 2014-09-22 12:34:04 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 4 Niels de Vos 2014-11-11 08:26:03 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

