Bug 1002403

Summary: wrong xml output in gluster volume status all --xml when a volume is down
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Aravinda VK <avishwan>
Component: glusterfs
Assignee: Kaushal <kaushal>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact: Kaushal <kaushal>
Priority: medium
Version: unspecified
CC: grajaiya, hamiller, kaushal, kparthas, psriniva, rhs-bugs, vbellur, vraman
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 2.1.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.4.0.44.1u2rhs
Doc Type: Bug Fix
Doc Text:
Previously, when one or more volumes were offline, the XML output of the "volume status all" command contained multiple XML documents: one document covering all the online volumes and a separate document for each offline volume. With this fix, offline volumes are ignored when producing the XML output for the 'volume status all' command, so a single well-formed document is returned.
Story Points: ---
Clone Of:
: 1004218 (view as bug list)
Environment:
Last Closed: 2014-02-25 07:36:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1004218    
Bug Blocks:    
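
The Doc Text above describes the change at a high level: offline volumes are skipped while aggregating status, so only one well-formed document is produced. The following is a minimal illustrative sketch of that idea in Python; it is not the actual glusterfs CLI code (which is C, see the patch in comment 2), and the volume dict layout used here is purely hypothetical.

import xml.etree.ElementTree as ET

def build_status_xml(volumes):
    """Aggregate per-volume status into a single <cliOutput> document.

    `volumes` is a hypothetical list of dicts such as
    {"name": "dv1", "started": True, "nodes": [...]}. Volumes that are
    not started are skipped instead of emitting a separate error
    document for each of them.
    """
    root = ET.Element("cliOutput")
    ET.SubElement(root, "opRet").text = "0"
    ET.SubElement(root, "opErrno").text = "0"
    vols = ET.SubElement(ET.SubElement(root, "volStatus"), "volumes")
    for vol in volumes:
        if not vol["started"]:
            continue  # the fix: ignore offline volumes entirely
        vol_elem = ET.SubElement(vols, "volume")
        ET.SubElement(vol_elem, "volName").text = vol["name"]
        ET.SubElement(vol_elem, "nodeCount").text = str(len(vol["nodes"]))
    return ET.tostring(root, encoding="unicode")

# One stopped volume plus one running volume still yields a single document.
print(build_status_xml([
    {"name": "v1", "started": False, "nodes": []},
    {"name": "dv1", "started": True, "nodes": [1, 2, 3, 4]},
]))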

Description Aravinda VK 2013-08-29 06:33:42 UTC
Description of problem:
`gluster volume status all --xml` returns malformed XML output when a gluster volume is down: the error for the stopped volume is emitted as a separate XML document ahead of the status document for the running volumes.

How reproducible:
Run `gluster volume status all --xml` with all volumes in the UP state, then run it again with any one volume in the DOWN state.

Steps to Reproduce:
1. With all volumes in the UP state, run `gluster volume status all --xml` and note the CLI output
2. With any one volume in the DOWN state, run `gluster volume status all --xml` and note the CLI output

Actual results: (Example only)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>0</opErrno>
  <opErrstr>Volume v1 is not started</opErrstr>
  <cliOp>volStatus</cliOp>
  <output>Volume v1 is not started</output>
</cliOutput>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>


Expected results: (Example)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>dv1</volName>
        <nodeCount>4</nodeCount>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb1</path>
          <status>1</status>
          <port>49156</port>
          <pid>11341</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb2</path>
          <status>1</status>
          <port>49157</port>
          <pid>11351</pid>
        </node>
        <node>
          <hostname>10.70.42.152</hostname>
          <path>/brcks/dvb3</path>
          <status>0</status>
          <port>N/A</port>
          <pid>27642</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <status>0</status>
          <port>N/A</port>
          <pid>-1</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>c2a76e0b-099d-4879-a53e-f1ea61d67a50</id>
            <status>3</status>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
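
For reference, the difference between the two examples above is that the actual output is a stream of two XML documents, which a standard XML parser rejects, while the expected output is a single well-formed document. A minimal Python check (the shortened documents below only mirror the examples above):

import xml.etree.ElementTree as ET

error_doc = b"""<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrstr>Volume v1 is not started</opErrstr>
</cliOutput>"""

status_doc = b"""<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
</cliOutput>"""

# Expected behaviour: a single document parses cleanly.
ET.fromstring(status_doc)

# Actual behaviour: two documents back to back; the parser stops at the
# second XML declaration / second root element.
try:
    ET.fromstring(error_doc + b"\n" + status_doc)
except ET.ParseError as err:
    print("not well-formed:", err)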

Comment 2 Kaushal 2013-09-04 12:42:18 UTC
Patch posted for review @ https://code.engineering.redhat.com/gerrit/12481

Comment 3 SATHEESARAN 2013-12-20 11:36:20 UTC
Tested with glusterfs-3.4.0.51rhs.el6rhs

0. Created a trusted storage pool of 2 RHSS nodes
(i.e) peer probe <host-ip>

1. Created 3 volumes (1 pure replica, 2 distribute volumes)
(i.e) gluster volume create <vol-name> <brick-path>

2. Started the volumes
(i.e) gluster volume start <vol-name>

3. Stopped one of the volumes
(i.e) gluster volume stop <vol-name>

4. Got the status of all volumes using the XML dump
(i.e) gluster volume status all --xml

The output of the XML dump was consistent when one of the volumes was down.
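
The same check can be automated; a small illustrative Python script (assumes the gluster CLI is on PATH and Python 3.7+):

import subprocess
import xml.etree.ElementTree as ET

# Run the status command and confirm the dump is a single well-formed
# XML document even while one of the volumes is stopped.
out = subprocess.run(["gluster", "volume", "status", "all", "--xml"],
                     capture_output=True, check=True).stdout
root = ET.fromstring(out)  # a ParseError here would mean more than one document
assert root.tag == "cliOutput"
print("volumes reported:", len(root.findall(".//volume")))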

Comment 4 Pavithra 2014-01-16 06:14:43 UTC
Kaushal,

Can you please verify the doc text for technical accuracy?

Comment 5 Kaushal 2014-01-16 11:00:47 UTC
Did a minor change to the doc text. The remaining text looks fine.

Comment 7 errata-xmlrpc 2014-02-25 07:36:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html