Bug 1004218 - wrong xml output in gluster volume status all --xml when a volume is down
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Kaushal
Depends On:
Blocks: 1002403
 
Reported: 2013-09-04 04:38 EDT by Kaushal
Modified: 2014-04-17 07:47 EDT
CC List: 3 users

See Also:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1002403
Environment:
Last Closed: 2014-04-17 07:47:20 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Kaushal 2013-09-04 04:38:16 EDT
+++ This bug was initially created as a clone of Bug #1002403 +++

Description of problem:
`gluster volume status all --xml` returns wrong xml output when a gluster volume is down.

How reproducible:
Run `gluster volume status all --xml` with all volumes in the UP state, then run it again with any one volume in the DOWN state.

Steps to Reproduce:
1. With all volumes in the UP state, run `gluster volume status all --xml` and note the CLI output
2. With any one volume in the DOWN state, run `gluster volume status all --xml` and note the CLI output (a scripted version of these steps is sketched below)
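
For convenience, the two steps can be scripted roughly as follows. This is a minimal sketch, not part of the original report; the volume name "v1" is a placeholder for whichever volume you choose to stop.

# Minimal reproduction sketch; "v1" is a placeholder volume name, adjust to your setup.

# 1. Baseline: with every volume started, note the single XML document.
gluster volume status all --xml

# 2. Stop any one volume (answer the confirmation prompt with 'y') and repeat.
gluster volume stop v1
gluster volume status all --xml   # affected builds print an extra error document before the real one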

Actual results: (Example only)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
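
Note that the two concatenated documents above are not a single well-formed XML document, so standard XML parsers reject the combined output. A quick way to see this (a sketch, assuming xmllint from libxml2 is installed):

# Feed the combined CLI output to a standard XML parser.
gluster volume status all --xml | xmllint --noout -
# On affected builds xmllint fails, e.g. complaining about extra content after the
# document end or a misplaced XML declaration; the exact message may vary.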


Expected results: (Example)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
Comment 1 Anand Avati 2013-09-04 04:46:17 EDT
REVIEW: http://review.gluster.org/5773 (cli: Fix 'status all' xml output when volumes are not started) posted (#1) for review on master by Kaushal M (kaushal@redhat.com)
Comment 2 Anand Avati 2013-09-11 12:29:30 EDT
COMMIT: http://review.gluster.org/5773 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit 7d9bc0d21408c31651a65a6ec0e67c3b8acd0fde
Author: Kaushal M <kaushal@redhat.com>
Date:   Wed Sep 4 13:06:57 2013 +0530

    cli: Fix 'status all' xml output when volumes are not started
    
    CLI now only outputs one XML document for 'status all' only containing
    those volumes which are started.
    
    BUG: 1004218
    Change-Id: Id4130fe59b3b74475d8bd1cc8134ac59a28f1b7e
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5773
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
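
With this fix applied, the behaviour described in the commit message can be spot-checked as follows (a sketch, assuming xmllint is available and at least one volume is still stopped):

# 'status all' should now emit exactly one well-formed XML document...
gluster volume status all --xml | xmllint --noout - && echo OK
# ...and only started volumes should be listed:
gluster volume status all --xml | grep -c '<volName>'   # count should equal the number of started volumes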
Comment 3 Anand Avati 2013-09-19 06:43:38 EDT
REVIEW: http://review.gluster.org/5970 (cli: Fix 'status all' xml output when volumes are not started) posted (#1) for review on release-3.4 by Kaushal M (kaushal@redhat.com)
Comment 4 Anand Avati 2013-09-19 13:30:29 EDT
COMMIT: http://review.gluster.org/5970 committed in release-3.4 by Vijay Bellur (vbellur@redhat.com) 
------
commit ac2f281ad3105236b024550bac48395d513260ec
Author: Kaushal M <kaushal@redhat.com>
Date:   Wed Sep 4 13:06:57 2013 +0530

    cli: Fix 'status all' xml output when volumes are not started
    
     Backport of 7d9bc0d21408c31651a65a6ec0e67c3b8acd0fde from master
    
    CLI now only outputs one XML document for 'status all' only containing
    those volumes which are started.
    
    BUG: 1004218
    Change-Id: I119ac40282380886b46a09fd9a19d35115fd869d
    Signed-off-by: Kaushal M <kaushal@redhat.com>
    Reviewed-on: http://review.gluster.org/5970
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
Comment 5 Niels de Vos 2014-04-17 07:47:20 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
