Bug 1463964 - heal info shows root directory as "Possibly undergoing heal" when heal is pending and heal daemon is disabled
Summary: heal info shows root directory as "Possibly undergoing heal" when heal is pending and heal daemon is disabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Ravishankar N
QA Contact: Vijay Avuthu
URL:
Whiteboard: rebase
Depends On: 1318895 1467268 1467269 1467272
Blocks: 1503134
 
Reported: 2017-06-22 07:14 UTC by nchilaka
Modified: 2018-09-19 06:13 UTC (History)
CC List: 5 users

Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 06:32:36 UTC




Links:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:34:40 UTC)

Description nchilaka 2017-06-22 07:14:35 UTC
Description of problem:
========================
When the self-heal daemon is disabled (to test client-side healing) and there are pending heals, heal info always lists the root directory as "Possibly undergoing heal" until a heal is triggered and the entries are cleared.
The problem is seen only for the root directory.



[root@dhcp35-45 ~]# 
[root@dhcp35-45 ~]# gluster v heal rep2 info
Brick 10.70.35.45:/rhs/brick2/rep2
Status: Connected
Number of entries: 0

Brick 10.70.35.130:/rhs/brick2/rep2
/zen1 
/vex 
/ - Possibly undergoing heal

Status: Connected
Number of entries: 3




Version-Release number of selected component (if applicable):
=====
3.8.4-28

How reproducible:
always

Steps to Reproduce:
1. Create a 1x2 volume and disable the self-heal daemon.
2. Create a zero-byte file f1 under the root of the mount.
3. Kill brick b1.
4. Append data to f1 and create a new file f2.
5. Bring b1 back up.
6. Check heal info.
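
A rough command sequence for these steps (a sketch only: the volume name rep2 and the brick paths are taken from the heal info output above, while the mount point /mnt/rep2 and the way brick b1 is killed are assumptions for illustration):

# gluster volume create rep2 replica 2 10.70.35.45:/rhs/brick2/rep2 10.70.35.130:/rhs/brick2/rep2
# gluster volume start rep2
# gluster volume set rep2 self-heal-daemon off    # step 1: disable the self-heal daemon
# mount -t glusterfs 10.70.35.45:/rep2 /mnt/rep2
# touch /mnt/rep2/f1                              # step 2: zero-byte file f1 under the root of the mount
# kill -KILL <pid-of-b1>                          # step 3: kill brick b1 (PID from 'gluster volume status rep2')
# echo data >> /mnt/rep2/f1                       # step 4: append data to f1 ...
# touch /mnt/rep2/f2                              #         ... and create a new file f2
# gluster volume start rep2 force                 # step 5: bring b1 back up
# gluster volume heal rep2 info                   # step 6: check heal info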


Actual results:
==========
The root directory is listed as "Possibly undergoing heal" even though no heal is actually in progress.

Expected results:
============
heal info should not report the root directory as undergoing heal when no heal is in progress.

Comment 7 Vijay Avuthu 2018-03-27 09:56:57 UTC
Update:
========

Build used: glusterfs-server-3.12.2-6.el7rhgs.x86_64

Verified the scenario below for both 1x2 and 2x3 volumes:

1. Create a volume and disable the self-heal daemon.
2. Create a zero-byte file f1 under the root of the mount.
3. Kill brick b1.
4. Append data to f1 and create a new file f2.
5. Bring b1 back up.
6. Check heal info.
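
For the 2x3 case, one possible layout of the distributed-replicate volume, reconstructed from the brick paths in the heal info output below (a sketch; the exact create command used during verification is not recorded in this report):

# gluster volume create 23 replica 3 \
    10.70.35.61:/bricks/brick0/testvol_distributed-replicated_brick0 \
    10.70.35.174:/bricks/brick0/testvol_distributed-replicated_brick1 \
    10.70.35.17:/bricks/brick0/testvol_distributed-replicated_brick2 \
    10.70.35.163:/bricks/brick0/testvol_distributed-replicated_brick3 \
    10.70.35.136:/bricks/brick0/testvol_distributed-replicated_brick4 \
    10.70.35.214:/bricks/brick0/testvol_distributed-replicated_brick5
# gluster volume start 23
# gluster volume set 23 self-heal-daemon off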


Did not see any "Possibly undergoing heal" messages for the root directory:

# gluster vol heal 12 info
Brick 10.70.35.61:/bricks/brick1/b0
Status: Connected
Number of entries: 0

Brick 10.70.35.174:/bricks/brick1/b1
/f1 
/f2 
/ 
Status: Connected
Number of entries: 3

#


# gluster vol heal 23 info
Brick 10.70.35.61:/bricks/brick0/testvol_distributed-replicated_brick0
Status: Connected
Number of entries: 0

Brick 10.70.35.174:/bricks/brick0/testvol_distributed-replicated_brick1
Status: Connected
Number of entries: 0

Brick 10.70.35.17:/bricks/brick0/testvol_distributed-replicated_brick2
Status: Connected
Number of entries: 0

Brick 10.70.35.163:/bricks/brick0/testvol_distributed-replicated_brick3
Status: Connected
Number of entries: 0

Brick 10.70.35.136:/bricks/brick0/testvol_distributed-replicated_brick4
/f1 
/f2 
/ 
Status: Connected
Number of entries: 3

Brick 10.70.35.214:/bricks/brick0/testvol_distributed-replicated_brick5
/f1 
/f2 
/ 
Status: Connected
Number of entries: 3

# 
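
As an extra cross-check (not part of the recorded verification; assumes the attr package is installed on the brick node), the pending-heal state of the root directory can be inspected directly on the brick that lists the entries. While heal is still pending, the root should carry non-zero trusted.afr.* changelog xattrs even though heal info no longer flags it as "Possibly undergoing heal":

# getfattr -d -m . -e hex /bricks/brick1/b1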


Changing status to verified

Comment 8 errata-xmlrpc 2018-09-04 06:32:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

