Bug 1313370

Summary: No xml output on gluster volume heal info command with --xml
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Sahina Bose <sabose>
Component: replicate
Assignee: hari gowtham <hgowtham>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: low
Docs Contact:
Priority: high
Version: rhgs-3.1
CC: nchilaka, nlevinki, pkarampu, rcyriac, rhinduja, rhs-bugs, sankarshan, storage-qa-internal
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: RHGS 3.1.3
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.7.9-4
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1063506
Environment:
Last Closed: 2016-06-23 05:10:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1063506, 1331287, 1334074
Bug Blocks: 1205641, 1258386, 1299184

Description Sahina Bose 2016-03-01 13:41:38 UTC
+++ This bug was initially created as a clone of Bug #1063506 +++

Description of problem:

"gluster volume heal vol info --xml" does not output XML info as i.e. "gluster volume info --xml" does.

Version-Release number of selected component (if applicable):

Tested on 3.4.2 installed on Ubuntu 13.10 from the semiosis PPA. First observed in 3.4.1.

How reproducible:

100%

Steps to Reproduce:
1. Setup a replicate volume
2. Type "gluster volume heal vol info --xml"

Actual results:

The same plain-text output as without --xml.

Expected results:

The command should emit tagged XML (as other --xml commands do) so the output can be consumed by APIs, monitoring tools, etc.
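
For illustration, a minimal Python sketch of how a script could check whether the command emits XML; the volume name "test1" is taken from the sample below, and the parse attempt is only an example.

# Sketch: confirm whether "heal info --xml" emits XML at all.
# Assumes a volume named "test1" exists (as in the sample below).
import subprocess
import xml.etree.ElementTree as ET

out = subprocess.run(
    ["gluster", "volume", "heal", "test1", "info", "--xml"],
    capture_output=True, text=True).stdout
try:
    ET.fromstring(out)
    print("got XML")
except ET.ParseError:
    print("plain text, not XML (the reported bug)")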

Additional info:

Here is a sample

root@testrdp:~# gluster volume heal test1 info --xml
Gathering Heal info on volume test1 has been successful

Brick 10.0.1.152:/mnt/gogo
Number of entries: 0

Brick 10.0.1.205:/mnt/gogo
Number of entries: 0
root@testrdp:~#

root@testrdp:~# gluster volume info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volInfo>
<volumes>
<volume>
<name>test1</name>
<id>9dc0c864-84b2-4c6e-8cd9-39e1af272ff3</id>
<status>1</status>
<statusStr>Started</statusStr>
<brickCount>2</brickCount>
<distCount>2</distCount>
<stripeCount>1</stripeCount>
<replicaCount>2</replicaCount>
<type>2</type>
<typeStr>Replicate</typeStr>
<transport>0</transport>
<bricks>
<brick>10.0.1.152:/mnt/gogo</brick>
<brick>10.0.1.205:/mnt/gogo</brick>
</bricks>
<optCount>0</optCount>
<options/>
</volume>
<count>1</count>
</volumes>
</volInfo>
</cliOutput>
root@testrdp:~#

root@testrdp:~# gluster --version
glusterfs 3.4.2 built on Feb 4 2014 23:08:03
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
root@testrdp:~#

--- Additional comment from  on 2014-09-30 10:55:23 EDT ---

FYI: I have upgraded to 3.5.2 and this is still an issue. However, since false-positive healing on files with lots of I/O is fixed in 3.5.2, this issue is not as big a deal to me.

Still though, I can imagine someone wanting to use xml output from the heal info command.

--- Additional comment from Niels de Vos on 2015-05-17 17:59:31 EDT ---

GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

--- Additional comment from Sahina Bose on 2015-06-17 06:32:10 EDT ---

We need XML output for volume heal info in order to monitor the self-heal activity of a volume from oVirt.

Changing the version to 3.7.

Comment 1 Sahina Bose 2016-03-04 07:42:35 UTC
Need this for introducing heal monitoring in oVirt engine for 4.0
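
As a rough sketch of the monitoring this enables: a hypothetical Python helper (not oVirt code) that assumes the <healInfo>/<bricks>/<brick> layout with <name> and <numberOfEntries>, and the <opRet>/<opErrstr> error fields, shown in the verification output in comment 5 below.

# Hypothetical helper, assuming the XML layout shown in comment 5 below.
import subprocess
import xml.etree.ElementTree as ET

def pending_heal_entries(volume):
    # Returns {brick_name: pending_entry_count} for one volume.
    out = subprocess.run(
        ["gluster", "volume", "heal", volume, "info", "--xml"],
        capture_output=True, text=True).stdout
    root = ET.fromstring(out)
    if root.findtext("opRet") != "0":
        # e.g. "Volume vol does not exist" (see comment 5)
        raise RuntimeError(root.findtext("opErrstr") or "heal info failed")
    return {b.findtext("name"): int(b.findtext("numberOfEntries"))
            for b in root.iter("brick")}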

Comment 5 Nag Pavan Chilakam 2016-05-11 12:11:14 UTC
Validated on 3.7.9-4, and XML output works:
[root@nchilaka-node1 ~]# gluster volume heal vol info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks/>
  </healInfo>
  <opRet>-1</opRet>
  <opErrno>2</opErrno>
  <opErrstr>Volume vol does not exist</opErrstr>
</cliOutput>
[root@nchilaka-node1 ~]# gluster volume heal  info --xml
Launching heal operation to perform index self heal on volume info has been unsuccessful on bricks that are down. Please check if all brick processes are running.
[root@nchilaka-node1 ~]# gluster volume heal chevro2x2 info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
        <name>10.70.42.204:/rhs/brick1/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
      <brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
        <name>10.70.42.123:/rhs/brick1/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
      <brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
        <name>10.70.42.204:/rhs/brick2/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
      <brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
        <name>10.70.42.123:/rhs/brick2/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
</cliOutput>


[root@nchilaka-node1 ~]# gluster volume  info chevro2x2
Volume Name: chevro2x2
Type: Distributed-Replicate
Volume ID: 522fd415-ea8e-42f9-bb30-b7952172d78e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.204:/rhs/brick1/chevro2x2
Brick2: 10.70.42.123:/rhs/brick1/chevro2x2
Brick3: 10.70.42.204:/rhs/brick2/chevro2x2
Brick4: 10.70.42.123:/rhs/brick2/chevro2x2
Options Reconfigured:
cluster.shd-max-threads: 4
performance.readdir-ahead: on
[root@nchilaka-node1 ~]# rpm -qa|grep gluster
python-gluster-3.7.9-3.el6rhs.noarch
glusterfs-api-3.7.9-4.el6rhs.x86_64
glusterfs-devel-3.7.9-4.el6rhs.x86_64
glusterfs-debuginfo-3.7.9-4.el6rhs.x86_64
glusterfs-libs-3.7.9-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.9-4.el6rhs.x86_64
glusterfs-fuse-3.7.9-4.el6rhs.x86_64
glusterfs-server-3.7.9-4.el6rhs.x86_64
glusterfs-api-devel-3.7.9-4.el6rhs.x86_64
glusterfs-rdma-3.7.9-4.el6rhs.x86_64
glusterfs-3.7.9-4.el6rhs.x86_64
glusterfs-cli-3.7.9-4.el6rhs.x86_64
glusterfs-geo-replication-3.7.9-4.el6rhs.x86_64


Hence, moving to VERIFIED.

Comment 6 Nag Pavan Chilakam 2016-05-11 12:24:12 UTC
Checked split-brain info as well, and it works:

[root@nchilaka-node1 ~]# gluster volume heal chevro2x2 info split-brain
Brick 10.70.42.204:/rhs/brick1/chevro2x2
Status: Connected
Number of entries in split-brain: 0

Brick 10.70.42.123:/rhs/brick1/chevro2x2
Status: Connected
Number of entries in split-brain: 0

Brick 10.70.42.204:/rhs/brick2/chevro2x2
Status: Connected
Number of entries in split-brain: 0

Brick 10.70.42.123:/rhs/brick2/chevro2x2
Status: Connected
Number of entries in split-brain: 0

[root@nchilaka-node1 ~]# gluster volume heal chevro2x2 info split-brain --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
        <name>10.70.42.204:/rhs/brick1/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
      <brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
        <name>10.70.42.123:/rhs/brick1/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
      <brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
        <name>10.70.42.204:/rhs/brick2/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
      <brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
        <name>10.70.42.123:/rhs/brick2/chevro2x2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
</cliOutput>
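
A short sketch of consuming the split-brain XML above; it assumes the same <brick>/<numberOfEntries> fields and simply flags bricks that report split-brain entries.

# Sketch: flag bricks reporting split-brain entries, based on the XML above.
import subprocess
import xml.etree.ElementTree as ET

out = subprocess.run(
    ["gluster", "volume", "heal", "chevro2x2", "info", "split-brain", "--xml"],
    capture_output=True, text=True).stdout
root = ET.fromstring(out)
if root.findtext("opRet") == "0":
    for brick in root.iter("brick"):
        entries = int(brick.findtext("numberOfEntries"))
        if entries:
            print("split-brain on %s: %d entries"
                  % (brick.findtext("name"), entries))
else:
    print("heal info failed:", root.findtext("opErrstr"))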

Comment 8 errata-xmlrpc 2016-06-23 05:10:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240