Bug 1313370
| Summary: | No xml output on gluster volume heal info command with --xml | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sahina Bose <sabose> |
| Component: | replicate | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | low | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.1 | CC: | nchilaka, nlevinki, pkarampu, rcyriac, rhinduja, rhs-bugs, sankarshan, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-4 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1063506 | Environment: | |
| Last Closed: | 2016-06-23 05:10:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1063506, 1331287, 1334074 | | |
| Bug Blocks: | 1205641, 1258386, 1299184 | | |
Description
Sahina Bose
2016-03-01 13:41:38 UTC
Need this for introducing heal monitoring in oVirt engine for 4.0.

Validated on 3.7.9-4, and the xml output works:
[root@nchilaka-node1 ~]# gluster volume heal vol info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<healInfo>
<bricks/>
</healInfo>
<opRet>-1</opRet>
<opErrno>2</opErrno>
<opErrstr>Volume vol does not exist</opErrstr>
</cliOutput>
[root@nchilaka-node1 ~]# gluster volume heal info --xml
Launching heal operation to perform index self heal on volume info has been unsuccessful on bricks that are down. Please check if all brick processes are running.
[root@nchilaka-node1 ~]# gluster volume heal chevro2x2 info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<healInfo>
<bricks>
<brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
<name>10.70.42.204:/rhs/brick1/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
<brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
<name>10.70.42.123:/rhs/brick1/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
<brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
<name>10.70.42.204:/rhs/brick2/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
<brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
<name>10.70.42.123:/rhs/brick2/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
</bricks>
</healInfo>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
</cliOutput>
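For the heal-monitoring use case mentioned in the description, output like the above can be consumed programmatically. The following is a minimal sketch only, assuming the element layout captured in this bug (cliOutput, opRet, opErrstr, brick, name, numberOfEntries); the function name and error handling are illustrative and not part of gluster or oVirt. It checks opRet inside the XML rather than relying on the CLI exit status.

```python
# Hypothetical helper: collect pending heal entry counts per brick from
# `gluster volume heal <vol> info --xml`, based on the XML shown above.
import subprocess
import xml.etree.ElementTree as ET

def heal_entry_counts(volume):
    """Return {brick name: pending heal entry count} for a volume."""
    proc = subprocess.run(
        ["gluster", "volume", "heal", volume, "info", "--xml"],
        stdout=subprocess.PIPE, check=False)
    root = ET.fromstring(proc.stdout)   # root element is <cliOutput>
    # opRet is "0" on success; the "Volume vol does not exist" case above
    # reports opRet -1 with the message in opErrstr.
    if root.findtext("opRet") != "0":
        raise RuntimeError(root.findtext("opErrstr") or "heal info failed")
    counts = {}
    for brick in root.iter("brick"):
        entries = brick.findtext("numberOfEntries")
        # Guard against non-numeric entry counts (e.g. when a brick is down).
        counts[brick.findtext("name")] = int(entries) if entries and entries.isdigit() else None
    return counts

if __name__ == "__main__":
    print(heal_entry_counts("chevro2x2"))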
[root@nchilaka-node1 ~]# gluster volume info chevro2x2
Volume Name: chevro2x2
Type: Distributed-Replicate
Volume ID: 522fd415-ea8e-42f9-bb30-b7952172d78e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.204:/rhs/brick1/chevro2x2
Brick2: 10.70.42.123:/rhs/brick1/chevro2x2
Brick3: 10.70.42.204:/rhs/brick2/chevro2x2
Brick4: 10.70.42.123:/rhs/brick2/chevro2x2
Options Reconfigured:
cluster.shd-max-threads: 4
performance.readdir-ahead: on
[root@nchilaka-node1 ~]# rpm -qa|grep gluster
python-gluster-3.7.9-3.el6rhs.noarch
glusterfs-api-3.7.9-4.el6rhs.x86_64
glusterfs-devel-3.7.9-4.el6rhs.x86_64
glusterfs-debuginfo-3.7.9-4.el6rhs.x86_64
glusterfs-libs-3.7.9-4.el6rhs.x86_64
glusterfs-client-xlators-3.7.9-4.el6rhs.x86_64
glusterfs-fuse-3.7.9-4.el6rhs.x86_64
glusterfs-server-3.7.9-4.el6rhs.x86_64
glusterfs-api-devel-3.7.9-4.el6rhs.x86_64
glusterfs-rdma-3.7.9-4.el6rhs.x86_64
glusterfs-3.7.9-4.el6rhs.x86_64
glusterfs-cli-3.7.9-4.el6rhs.x86_64
glusterfs-geo-replication-3.7.9-4.el6rhs.x86_64
Hence, moving to VERIFIED.

Also checked the split-brain variant, and it works:
[root@nchilaka-node1 ~]# gluster volume heal chevro2x2 info split-brain
Brick 10.70.42.204:/rhs/brick1/chevro2x2
Status: Connected
Number of entries in split-brain: 0
Brick 10.70.42.123:/rhs/brick1/chevro2x2
Status: Connected
Number of entries in split-brain: 0
Brick 10.70.42.204:/rhs/brick2/chevro2x2
Status: Connected
Number of entries in split-brain: 0
Brick 10.70.42.123:/rhs/brick2/chevro2x2
Status: Connected
Number of entries in split-brain: 0
[root@nchilaka-node1 ~]# gluster volume heal chevro2x2 info split-brain --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<healInfo>
<bricks>
<brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
<name>10.70.42.204:/rhs/brick1/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
<brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
<name>10.70.42.123:/rhs/brick1/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
<brick hostUuid="f710c68e-b249-43a2-930c-4c82c1ff73c9">
<name>10.70.42.204:/rhs/brick2/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
<brick hostUuid="51bc4ded-1ebb-4ec3-b5a3-2b5de0b025d1">
<name>10.70.42.123:/rhs/brick2/chevro2x2</name>
<status>Connected</status>
<numberOfEntries>0</numberOfEntries>
</brick>
</bricks>
</healInfo>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
</cliOutput>
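Since the split-brain output uses the same XML layout, the same parsing approach applies. A small sketch under that assumption; the helper name is hypothetical, not a gluster API.

```python
# Hypothetical helper: per-brick split-brain entry counts from
# `gluster volume heal <vol> info split-brain --xml`, per the XML above.
import subprocess
import xml.etree.ElementTree as ET

def split_brain_entries(volume):
    """Return {brick name: number of entries in split-brain} for a volume."""
    proc = subprocess.run(
        ["gluster", "volume", "heal", volume, "info", "split-brain", "--xml"],
        stdout=subprocess.PIPE, check=False)
    root = ET.fromstring(proc.stdout)   # <cliOutput>
    if root.findtext("opRet") != "0":
        raise RuntimeError(root.findtext("opErrstr") or "split-brain query failed")
    return {
        brick.findtext("name"): int(brick.findtext("numberOfEntries"))
        for brick in root.iter("brick")
        if (brick.findtext("numberOfEntries") or "").isdigit()
    }

if __name__ == "__main__":
    # Against the verified output above, every brick reports 0 entries.
    print(split_brain_entries("chevro2x2"))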
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240