Bug 1002403 - wrong xml output in gluster volume status all --xml when a volume is down
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Kaushal
QA Contact: SATHEESARAN
Docs Contact: Kaushal
Keywords: ZStream
Depends On: 1004218
Blocks:
Reported: 2013-08-29 02:33 EDT by Aravinda VK
Modified: 2015-05-13 12:32 EDT
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.4.0.44.1u2rhs
Doc Type: Bug Fix
Doc Text:
Previously, when one or more volumes were offline, the XML output of the "volume status all" command contained multiple XML documents: one document covering all the online volumes and a separate document for each offline volume, which made the combined output invalid XML. With this fix, offline volumes are skipped when producing the XML output, so 'volume status all' emits a single well-formed XML document.
Story Points: ---
Clone Of:
Clones: 1004218
Environment:
Last Closed: 2014-02-25 02:36:00 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Aravinda VK 2013-08-29 02:33:42 EDT
Description of problem:
`gluster volume status all --xml` emits malformed XML output when a gluster volume is down: the error for the stopped volume and the status of the running volumes are written as separate XML documents in the same output.

How reproducible:
Run `gluster volume status all --xml` with all volumes in the UP state, then run it again with any one volume DOWN.

Steps to Reproduce:
1. With all volumes in the UP state, run `gluster volume status all --xml` and note the CLI output
2. With any one volume in the DOWN state, run `gluster volume status all --xml` and note the CLI output (a quick parser check is sketched below)
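
A quick way to compare the two runs is to feed the CLI output to an XML parser: with every volume UP the output parses as a single document, while with any volume DOWN the extra error document breaks parsing. A minimal sketch in Python (an illustration, not part of the original report; it only presumes the gluster CLI is on PATH):

import subprocess
import xml.etree.ElementTree as ET

def status_all_is_well_formed():
    # Run the CLI and capture its XML output.
    out = subprocess.run(
        ["gluster", "volume", "status", "all", "--xml"],
        capture_output=True, text=True,
    ).stdout
    # A valid response contains exactly one XML declaration.
    print("XML declarations found:", out.count("<?xml"))
    try:
        ET.fromstring(out)  # raises ParseError if a second document follows the first
        return True
    except ET.ParseError as err:
        print("parse error:", err)
        return False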

Actual results: (Example only)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>


Expected results: (Example)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
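
For illustration, running the two outputs above through a standard XML parser shows the difference: the expected output is one well-formed document, while the actual output fails as soon as the second <?xml ...?> declaration is reached. A small self-contained sketch using abbreviated stand-ins for the outputs shown above:

import xml.etree.ElementTree as ET

# Abbreviated stand-ins for the actual and expected outputs above.
actual_output = (
    '<?xml version="1.0"?><cliOutput><opRet>-1</opRet></cliOutput>\n'
    '<?xml version="1.0"?><cliOutput><opRet>0</opRet></cliOutput>\n'
)
expected_output = '<?xml version="1.0"?><cliOutput><opRet>0</opRet></cliOutput>\n'

for label, text in (("actual", actual_output), ("expected", expected_output)):
    try:
        root = ET.fromstring(text)
        print(label, "parses as a single document, opRet =", root.findtext("opRet"))
    except ET.ParseError as err:
        print(label, "fails to parse:", err)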
Comment 2 Kaushal 2013-09-04 08:42:18 EDT
Patch posted for review @ https://code.engineering.redhat.com/gerrit/12481
Comment 3 SATHEESARAN 2013-12-20 06:36:20 EST
Tested with glusterfs-3.4.0.51rhs.el6rhs

0. Created a trusted storage pool of 2 RHSS nodes
(i.e) peer probe <host-ip>

1. Created 3 volumes (1 pure replica, 2 distribute volumes)
(i.e) gluster volume create <vol-name> <brick-path>

2. Started the volumes
(i.e) gluster volume start <vol-name>

3. Stopped one of the volumes
(i.e) gluster volume stop <vol-name>

4. Got the status of all volumes using the XML dump
(i.e) gluster volume status all --xml

The output of the XML dump was consistent (a single well-formed document) when one of the volumes was down.
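
The same verification can be scripted for regression runs. A rough sketch of the flow above, assuming the gluster CLI is available and VOLUME is replaced with a volume name from the pool (both placeholders, not part of the original comment):

import subprocess
import xml.etree.ElementTree as ET

VOLUME = "dv1"  # placeholder: any started volume in the trusted storage pool

def gluster(*args):
    # Run a gluster CLI command and return its stdout.
    return subprocess.run(["gluster", *args],
                          capture_output=True, text=True).stdout

# Stop one volume, then confirm 'status all --xml' is still one well-formed document.
gluster("--mode=script", "volume", "stop", VOLUME)  # --mode=script skips the confirmation prompt
out = gluster("volume", "status", "all", "--xml")
root = ET.fromstring(out)  # raises ParseError on the pre-fix, concatenated output
print("volumes reported:", len(root.findall(".//volume")))
gluster("volume", "start", VOLUME)  # restore the volume afterwards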
Comment 4 Pavithra 2014-01-16 01:14:43 EST
Kaushal,

Can you please verify the doc text for technical accuracy?
Comment 5 Kaushal 2014-01-16 06:00:47 EST
Did a minor change to the doc text. The remaining text looks fine.
Comment 7 errata-xmlrpc 2014-02-25 02:36:00 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
