Bug 1366128 - "heal info --xml" not showing the brick name of offline bricks.
Summary: "heal info --xml" not showing the brick name of offline bricks.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Pranith Kumar K
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks: 1351528 1366222 1366489
 
Reported: 2016-08-11 06:41 UTC by spandura
Modified: 2017-03-23 05:44 UTC
5 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1366222 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:44:32 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description spandura 2016-08-11 06:41:24 UTC
Description of problem:
======================
When bricks are offline, the heal info XML output for all bricks does not include the names of the offline bricks. Instead, the 'name' tag carries the message 'information not available'. The plain "heal info" command output, however, displays brick names even for offline bricks. It would be good to have the same information in the XML output as well.
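For verification, the XML output can be checked programmatically. Below is a minimal sketch using only the Python standard library; the SAMPLE string is a hand-written fragment mimicking the pre-fix output shape described above (not captured from a real cluster), and the host:/path heuristic is an assumption about well-formed brick names:

```python
import xml.etree.ElementTree as ET

# Hand-written sample of pre-fix "heal info --xml" output (assumed shape):
# the first brick is offline and its <name> holds a placeholder message.
SAMPLE = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="-">
        <name>information not available</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
      <brick hostUuid="39e568ea-b190-4dae-aca0-0b9d83479438">
        <name>10.70.35.180:/rhs/brick1/afrvol</name>
        <status>Connected</status>
        <numberOfEntries>8</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
</cliOutput>"""

def bricks_without_names(xml_text):
    """Return the <status> of every brick whose <name> is not host:/path."""
    root = ET.fromstring(xml_text)
    bad = []
    for brick in root.iter("brick"):
        name = brick.findtext("name", default="")
        if ":/" not in name:  # placeholder text instead of a real brick name
            bad.append(brick.findtext("status", default=""))
    return bad

print(bricks_without_names(SAMPLE))
```

On the pre-fix sample above, the offline brick is the only one flagged; on fixed output (as shown in comment 10), the list should be empty.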


Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.7.9-10.el7rhgs.x86_64

How reproducible:
==================
1/1

Steps to Reproduce:
=====================
1. Create a replicated volume, start it, and mount it. Create files from the mount.

2. Bring down a brick. Modify the files.

3. Execute "gluster volume heal <volname> info --xml"

Actual results:
================
Offline brick names are not shown in the XML output.

Expected results:
==================
Brick names should be shown in the XML output as well.

Comment 3 Atin Mukherjee 2016-08-26 11:55:21 UTC
http://review.gluster.org/15156 has made it into 3.8.2, which means the fix will be available as part of the rebase in rhgs-3.2.0.

Comment 4 Atin Mukherjee 2016-09-19 13:05:32 UTC
Upstream mainline : http://review.gluster.org/15146
Upstream 3.8 : http://review.gluster.org/15156

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 9 Nag Pavan Chilakam 2016-10-07 11:57:39 UTC
QATP:
====
TC#1: Heal info using --xml must show the brick name even for offline bricks
TC#2: heal info when bricks are offline must be consistent across both the regular heal info command and the --xml heal info command
TC#3: Heal info --xml must start emitting output as soon as it is issued

Comment 10 Nag Pavan Chilakam 2016-10-07 12:03:07 UTC
Validation of QATP:
===================
TC#1 Pass
TC#2 and TC#3 fail
Given that this bug was raised for the issue covered by TC#1, it is marked as pass, while new bugs are raised for TC#2 and TC#3.
TC#1: Heal info using --xml must show the brick name even for offline bricks ==> PASS



[root@dhcp35-179 ~]# gluster v heal afrvol info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="-">
        <name>10.70.35.179:/rhs/brick1/afrvol</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
      <brick hostUuid="39e568ea-b190-4dae-aca0-0b9d83479438">
        <name>10.70.35.180:/rhs/brick1/afrvol</name>
        <file gfid="00000000-0000-0000-0000-000000000001">/</file>
        <file gfid="f4c970d0-a4f9-4c0a-ad02-0748c20b88e9">/hello1</file>
        <file gfid="296b971c-ecc3-4e79-a1a0-a5b4d315a218">/hello2</file>
        <file gfid="2a6c4c56-6b1f-4775-81cf-6ec6b8e17a77">/hello5</file>
        <file gfid="e28c1aae-d79e-4c76-8564-2b996148f814">/hello6</file>
        <file gfid="5ca405cd-9979-4205-aec9-c918a13aa27f">/hello7</file>
        <file gfid="b41dbd3e-8240-41c6-a47a-4aabff983299">/hello8</file>
        <file gfid="24255730-3c45-4aae-9107-9ab919b78f85">/hello10</file>
        <status>Connected</status>
        <numberOfEntries>8</numberOfEntries>
      </brick>
      <brick hostUuid="-">
        <name>10.70.35.9:/rhs/brick1/afrvol</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
      <brick hostUuid="4411ff58-c305-4064-bc67-42538d0cbed5">
        <name>10.70.35.153:/rhs/brick1/afrvol</name>
        <file gfid="00000000-0000-0000-0000-000000000001">/</file>
        <file gfid="ff916223-6810-4df1-8687-bd3b87a443ca">/hello3</file>
        <file gfid="ecc9a62f-8a2f-43b4-9618-b04c7c3e2e25">/hello4</file>
        <file gfid="35fdf539-8e58-43b6-b94e-51a5c78d2294">/hello9</file>
        <status>Connected</status>
        <numberOfEntries>4</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
</cliOutput>
[root@dhcp35-179 ~]# 



TC#2: heal info when bricks are offline must be consistent across both the regular heal info command and the --xml heal info command ==> FAIL (raising bug)
The heal entries show just the file name in the regular heal info command, but with --xml both the file name and the gfid are shown.

(for the same state as in TC#1) below is the regular heal info command output
[root@dhcp35-179 ~]# gluster v heal afrvol info
Brick 10.70.35.179:/rhs/brick1/afrvol
Status: Transport endpoint is not connected
Number of entries: -

Brick 10.70.35.180:/rhs/brick1/afrvol
/ 
/hello1 
/hello2 
/hello5 
/hello6 
/hello7 
/hello8 
/hello10 
Status: Connected
Number of entries: 8

Brick 10.70.35.9:/rhs/brick1/afrvol
Status: Transport endpoint is not connected
Number of entries: -

Brick 10.70.35.153:/rhs/brick1/afrvol
/ 
/hello3 
/hello4 
/hello9 
Status: Connected
Number of entries: 4
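The TC#2 inconsistency is easy to see by parsing the XML form: each healing entry is a `<file>` element carrying a gfid attribute plus the path as text, whereas the plain output prints only the path. A small standard-library sketch (the FRAGMENT string is a hand-trimmed copy of one brick element from the TC#1 output above):

```python
import xml.etree.ElementTree as ET

# Hand-trimmed brick element from the "heal info --xml" output above.
FRAGMENT = """<brick hostUuid="4411ff58-c305-4064-bc67-42538d0cbed5">
  <name>10.70.35.153:/rhs/brick1/afrvol</name>
  <file gfid="ff916223-6810-4df1-8687-bd3b87a443ca">/hello3</file>
  <file gfid="ecc9a62f-8a2f-43b4-9618-b04c7c3e2e25">/hello4</file>
  <status>Connected</status>
  <numberOfEntries>2</numberOfEntries>
</brick>"""

brick = ET.fromstring(FRAGMENT)
# XML form: (gfid, path) pairs per healing entry.
entries = [(f.get("gfid"), f.text) for f in brick.findall("file")]
# Plain "heal info" form: only the paths are printed.
paths_only = [path for _, path in entries]
print(entries)
print(paths_only)
```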


TC#3: Heal info --xml must start emitting output as soon as it is issued ==> FAIL
(raising bug)
This fails: in a systemic test environment with a large number of entries to display, the output had not even started appearing after more than 30 minutes.

Comment 11 Nag Pavan Chilakam 2016-10-07 12:20:53 UTC
TC#2 Failure BZ# raised 1382690 - output of heal info and heal info --xml are not consistent (throwing different formats of output )
TC#3 BZ raised 1382686 - heal info --xml when bricks are down in a systemic environment is not displaying anything even after more than 30minutes



Test version:
[root@dhcp35-191 ~]# rpm -qa|grep gluster
glusterfs-libs-3.8.4-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-1.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-3.8.4-1.el7rhgs.x86_64
glusterfs-api-3.8.4-1.el7rhgs.x86_64
glusterfs-cli-3.8.4-1.el7rhgs.x86_64
glusterfs-events-3.8.4-1.el7rhgs.x86_64
glusterfs-rdma-3.8.4-1.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-1.el7rhgs.x86_64
glusterfs-server-3.8.4-1.el7rhgs.x86_64
python-gluster-3.8.4-1.el7rhgs.noarch
glusterfs-devel-3.8.4-1.el7rhgs.x86_64
[root@dhcp35-191 ~]#

Comment 13 errata-xmlrpc 2017-03-23 05:44:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

