Bug 1002403 - wrong xml output in gluster volume status all --xml when a volume is down
Summary: wrong xml output in gluster volume status all --xml when a volume is down
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Kaushal
QA Contact: SATHEESARAN
Docs Contact: Kaushal
URL:
Whiteboard:
Depends On: 1004218
Blocks:
 
Reported: 2013-08-29 06:33 UTC by Aravinda VK
Modified: 2018-12-04 15:47 UTC (History)
8 users

Fixed In Version: glusterfs-3.4.0.44.1u2rhs
Doc Type: Bug Fix
Doc Text:
Previously, the XML output of the "volume status all" command produced multiple concatenated XML documents when some volumes were offline: one document covering all the online volumes, plus a separate document for each offline volume. With this fix, offline volumes are skipped when producing the XML output for the "volume status all" command, so a single well-formed document is emitted.
Clone Of:
: 1004218 (view as bug list)
Environment:
Last Closed: 2014-02-25 07:36:00 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0208 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #2 2014-02-25 12:20:30 UTC

Description Aravinda VK 2013-08-29 06:33:42 UTC
Description of problem:
`gluster volume status all --xml` returns malformed XML output (multiple concatenated XML documents) when a gluster volume is down.

How reproducible:
Run `gluster volume status all --xml` with all volumes in UP state, then run it again with any one volume DOWN.

Steps to Reproduce:
1. With all volumes in UP state, run `gluster volume status all --xml` and note the CLI output.
2. With any one volume in DOWN state, run `gluster volume status all --xml` and note the CLI output.

Actual results: (Example only)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
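
The failure mode above can be checked programmatically: output containing two XML declarations is not a single well-formed document, so any standard XML parser rejects it. A minimal sketch using Python's standard library (the sample string is abbreviated from the report, not the full CLI output):

```python
import xml.etree.ElementTree as ET

# Abbreviated stand-in for the buggy CLI output above: two concatenated
# XML documents, each with its own <?xml?> declaration.
buggy_output = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>\n'
    '<cliOutput><opRet>-1</opRet>'
    '<opErrstr>Volume v1 is not started</opErrstr></cliOutput>\n'
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>\n'
    '<cliOutput><opRet>0</opRet></cliOutput>\n'
)

def is_well_formed(text):
    """Return True if text parses as a single XML document."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(buggy_output))  # False: junk after document element
```

Any consumer that feeds the CLI output to a regular XML parser fails in exactly this way, which is why the concatenated form breaks scripting against `--xml`.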


Expected results: (Example)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
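
With the expected single-document form, a consumer can parse the output directly and walk the volume list. A sketch against an abbreviated stand-in for the fixed output (not the full document above):

```python
import xml.etree.ElementTree as ET

# Abbreviated stand-in for the fixed output: one well-formed document
# covering only the volumes that are up.
fixed_output = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<cliOutput>'
    '<opRet>0</opRet>'
    '<volStatus><volumes><volume>'
    '<volName>dv1</volName><nodeCount>4</nodeCount>'
    '</volume></volumes></volStatus>'
    '</cliOutput>'
)

root = ET.fromstring(fixed_output)
op_ret = int(root.findtext("opRet"))
volumes = [v.findtext("volName") for v in root.iter("volume")]
print(op_ret, volumes)  # 0 ['dv1']
```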

Comment 2 Kaushal 2013-09-04 12:42:18 UTC
Patch posted for review @ https://code.engineering.redhat.com/gerrit/12481

Comment 3 SATHEESARAN 2013-12-20 11:36:20 UTC
Tested with glusterfs-3.4.0.51rhs.el6rhs

0. Created a trusted storage pool of 2 RHSS nodes
(i.e) peer probe <host-ip>

1. Created 3 volumes (1 pure replica, 2 distribute volumes)
(i.e) gluster volume create <vol-name> <brick-path>

2. Started the volumes
(i.e) gluster volume start <vol-name>

3. Stopped one of the volumes
(i.e) gluster volume stop <vol-name>

4. Got the status of all volumes using the XML dump
(i.e) gluster volume status all --xml

The output of the XML dump was consistent when one of the volumes was down.
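
The consistency check above can be scripted: after the fix, the XML dump should be exactly one well-formed `cliOutput` document even with a volume down. A minimal sketch; the commented-out `subprocess` call assumes a host with glusterfs installed and is shown only as usage:

```python
import subprocess
import xml.etree.ElementTree as ET

def check_status_xml(text):
    """Verify the CLI output is a single well-formed cliOutput document.

    Raises xml.etree.ElementTree.ParseError if the output is still the
    old concatenated multi-document form.
    """
    root = ET.fromstring(text)
    return root.tag == "cliOutput"

# On a real RHS node (assumption: gluster in PATH), one would run:
# out = subprocess.check_output(
#     ["gluster", "volume", "status", "all", "--xml"])
# assert check_status_xml(out.decode())

# Sample check against a trivial single document:
print(check_status_xml(
    '<?xml version="1.0"?><cliOutput><opRet>0</opRet></cliOutput>'))
```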

Comment 4 Pavithra 2014-01-16 06:14:43 UTC
Kaushal,

Can you please verify the doc text for technical accuracy?

Comment 5 Kaushal 2014-01-16 11:00:47 UTC
Did a minor change to the doc text. The remaining text looks fine.

Comment 7 errata-xmlrpc 2014-02-25 07:36:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

