Bug 1004218 - wrong xml output in gluster volume status all --xml when a volume is down
Summary: wrong xml output in gluster volume status all --xml when a volume is down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1002403
 
Reported: 2013-09-04 08:38 UTC by Kaushal
Modified: 2014-04-17 11:47 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1002403
Environment:
Last Closed: 2014-04-17 11:47:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaushal 2013-09-04 08:38:16 UTC
+++ This bug was initially created as a clone of Bug #1002403 +++

Description of problem:
`gluster volume status all --xml` produces invalid XML when any volume is down: in addition to the status document, the CLI emits a separate error document for the stopped volume, so the combined output is no longer a single well-formed XML document.

How reproducible:
Run `gluster volume status all --xml` with all volumes in the UP state, then run it again with any one volume in the DOWN state.

Steps to Reproduce:
1. With all volumes in the UP state, run `gluster volume status all --xml` and note the CLI output.
2. Stop any one volume so it is in the DOWN state, run `gluster volume status all --xml` again, and note the CLI output (see the sketch below).
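
For tools that drive the CLI programmatically, the reproduction can be scripted. A minimal sketch in Python (the volume name v1 is an assumed example; subprocess and counting XML declarations are just one way to observe the problem, not part of the original report):

# repro_sketch.py - illustrative only
import subprocess

def status_all_xml():
    # Capture whatever `gluster volume status all --xml` prints on stdout
    result = subprocess.run(
        ["gluster", "volume", "status", "all", "--xml"],
        capture_output=True, text=True,
    )
    return result.stdout

before = status_all_xml()                                             # all volumes UP
subprocess.run(["gluster", "--mode=script", "volume", "stop", "v1"])  # assumed volume name
after = status_all_xml()                                              # one volume now DOWN

# On affected versions the second capture contains two XML declarations,
# i.e. two concatenated documents (see "Actual results" below).
print("XML declarations before:", before.count("<?xml"))              # expected: 1
print("XML declarations after: ", after.count("<?xml"))               # observed: 2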

Actual results: (Example only)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
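
Because the stream above contains two concatenated XML documents, a standard XML parser rejects it. A minimal sketch of the failure, assuming the actual output was captured to a file named status_all.xml (the file name and the use of Python's xml.etree.ElementTree are illustrative):

# parse_check.py - illustrative only
import xml.etree.ElementTree as ET

with open("status_all.xml") as f:   # assumed capture of the output above
    data = f.read()

try:
    root = ET.fromstring(data)
    print("single document, opRet =", root.findtext("opRet"))
except ET.ParseError as err:
    # The second <?xml ...?> declaration after the first </cliOutput> makes
    # the stream ill-formed, so parsing fails here (typically reported as
    # "junk after document element").
    print("invalid XML stream:", err)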


Expected results: (Example)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Comment 1 Anand Avati 2013-09-04 08:46:17 UTC
REVIEW: http://review.gluster.org/5773 (cli: Fix 'status all' xml output when volumes are not started) posted (#1) for review on master by Kaushal M (kaushal)

Comment 2 Anand Avati 2013-09-11 16:29:30 UTC
COMMIT: http://review.gluster.org/5773 committed in master by Vijay Bellur (vbellur) 
------
commit 7d9bc0d21408c31651a65a6ec0e67c3b8acd0fde
Author: Kaushal M <kaushal>
Date:   Wed Sep 4 13:06:57 2013 +0530

    cli: Fix 'status all' xml output when volumes are not started
    
    CLI now only outputs one XML document for 'status all' only containing
    those volumes which are started.
    
    BUG: 1004218
    Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/5773
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Gluster Build System <jenkins.com>
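
A quick way to check the fixed behaviour described above is that the CLI output now parses as a single document and lists only the started volumes. A sketch along the same lines as the earlier parse check (the file name is again an assumed capture of the CLI output):

# verify_fix.py - illustrative only
import xml.etree.ElementTree as ET

with open("status_all.xml") as f:      # assumed capture of the fixed output
    root = ET.fromstring(f.read())     # no ParseError: one document only

volumes = root.findall("./volStatus/volumes/volume")
print("started volumes reported:", [v.findtext("volName") for v in volumes])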

Comment 3 Anand Avati 2013-09-19 10:43:38 UTC
REVIEW: http://review.gluster.org/5970 (cli: Fix 'status all' xml output when volumes are not started) posted (#1) for review on release-3.4 by Kaushal M (kaushal)

Comment 4 Anand Avati 2013-09-19 17:30:29 UTC
COMMIT: http://review.gluster.org/5970 committed in release-3.4 by Vijay Bellur (vbellur) 
------
commit ac2f281ad3105236b024550bac48395d513260ec
Author: Kaushal M <kaushal>
Date:   Wed Sep 4 13:06:57 2013 +0530

    cli: Fix 'status all' xml output when volumes are not started
    
     Backport of 7d9bc0d21408c31651a65a6ec0e67c3b8acd0fde from master
    
    CLI now only outputs one XML document for 'status all' only containing
    those volumes which are started.
    
    BUG: 1004218
    Change-Id: I119ac40282380886b46a09fd9a19d35115fd869d
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/5970
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 5 Niels de Vos 2014-04-17 11:47:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

