Bug 1463964

Summary: heal info shows root directory as "Possibly undergoing heal" when heal is pending and heal daemon is disabled
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nag Pavan Chilakam <nchilaka>
Component: replicate
Assignee: Ravishankar N <ravishankar>
Status: CLOSED ERRATA
QA Contact: Vijay Avuthu <vavuthu>
Severity: medium
Docs Contact:
Priority: high
Version: rhgs-3.3
CC: amukherj, rhinduja, rhs-bugs, sheggodu, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: rebase
Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 06:32:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1318895, 1467268, 1467269, 1467272
Bug Blocks: 1503134

Description Nag Pavan Chilakam 2017-06-22 07:14:35 UTC
Description of problem:
========================
When the self-heal daemon is disabled (to test client-side heal) and there are pending heals, heal info always shows the root directory as "Possibly undergoing heal" until a heal is triggered and clears the entries.
I see the problem only with the root directory.
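
For reference, the heal daemon can be disabled per volume with the option below (a minimal sketch; the volume name rep2 is taken from the heal info output that follows):

# gluster volume set rep2 cluster.self-heal-daemon off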



[root@dhcp35-45 ~]# gluster v heal rep2 info
Brick 10.70.35.45:/rhs/brick2/rep2
Status: Connected
Number of entries: 0

Brick 10.70.35.130:/rhs/brick2/rep2
/zen1 
/vex 
/ - Possibly undergoing heal

Status: Connected
Number of entries: 3




Version-Release number of selected component (if applicable):
=====
3.8.4-28

How reproducible:
always

Steps to Reproduce:
1. Create a 1x2 replicate volume and disable the heal daemon
2. Create a zero-byte file f1 under the root of the mount
3. Kill brick b1
4. Append data to f1 and create a new file f2
5. Bring b1 back up
6. Check heal info (a shell sketch of these steps follows)
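
A minimal shell sketch of the reproduction, assuming hypothetical servers server1 and server2 and a mount point /mnt/rep2; the brick PID for step 3 would come from 'gluster volume status rep2':

# gluster volume create rep2 replica 2 server1:/rhs/brick2/rep2 server2:/rhs/brick2/rep2
# gluster volume start rep2
# gluster volume set rep2 cluster.self-heal-daemon off
# mount -t glusterfs server1:/rep2 /mnt/rep2
# touch /mnt/rep2/f1                # zero-byte file under the root
# kill <glusterfsd-pid-of-b1>       # kill brick b1
# echo data >> /mnt/rep2/f1         # append while b1 is down
# touch /mnt/rep2/f2
# gluster volume start rep2 force   # restart the killed brick
# gluster volume heal rep2 info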


Actual results:
==========
The root directory is shown as "Possibly undergoing heal" even though no heal is actually in progress.

Expected results:
============
Heal info should not report the root directory as possibly undergoing heal when no heal is in progress.

Comment 7 Vijay Avuthu 2018-03-27 09:56:57 UTC
Update:
========

Build used: glusterfs-server-3.12.2-6.el7rhgs.x86_64

Verified the scenario below for both 1 x 2 and 2 x 3 volume types (a sketch of the 2 x 3 volume creation follows the steps):

1. Create the volume and disable the heal daemon
2. Create a zero-byte file f1 under the root of the mount
3. Kill brick b1
4. Append data to f1 and create a new file f2
5. Bring b1 back up
6. Check heal info
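
The exact create commands are not captured in this comment; as a sketch, the 2 x 3 volume could have been created along these lines, with the volume name (23) and the brick hosts and paths taken from the heal info output below:

# gluster volume create 23 replica 3 \
      10.70.35.61:/bricks/brick0/testvol_distributed-replicated_brick0 \
      10.70.35.174:/bricks/brick0/testvol_distributed-replicated_brick1 \
      10.70.35.17:/bricks/brick0/testvol_distributed-replicated_brick2 \
      10.70.35.163:/bricks/brick0/testvol_distributed-replicated_brick3 \
      10.70.35.136:/bricks/brick0/testvol_distributed-replicated_brick4 \
      10.70.35.214:/bricks/brick0/testvol_distributed-replicated_brick5
# gluster volume start 23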


Did not see any "Possibly undergoing heal" message for the root directory.

# gluster vol heal 12 info
Brick 10.70.35.61:/bricks/brick1/b0
Status: Connected
Number of entries: 0

Brick 10.70.35.174:/bricks/brick1/b1
/f1 
/f2 
/ 
Status: Connected
Number of entries: 3

#


# gluster vol heal 23 info
Brick 10.70.35.61:/bricks/brick0/testvol_distributed-replicated_brick0
Status: Connected
Number of entries: 0

Brick 10.70.35.174:/bricks/brick0/testvol_distributed-replicated_brick1
Status: Connected
Number of entries: 0

Brick 10.70.35.17:/bricks/brick0/testvol_distributed-replicated_brick2
Status: Connected
Number of entries: 0

Brick 10.70.35.163:/bricks/brick0/testvol_distributed-replicated_brick3
Status: Connected
Number of entries: 0

Brick 10.70.35.136:/bricks/brick0/testvol_distributed-replicated_brick4
/f1 
/f2 
/ 
Status: Connected
Number of entries: 3

Brick 10.70.35.214:/bricks/brick0/testvol_distributed-replicated_brick5
/f1 
/f2 
/ 
Status: Connected
Number of entries: 3

# 


Changing status to Verified.

Comment 8 errata-xmlrpc 2018-09-04 06:32:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607