Bug 1333705

Summary: gluster volume heal info "healed" and "heal-failed" showing wrong information
Product: Red Hat Gluster Storage Reporter: nchilaka <nchilaka>
Component: replicate    Assignee: Mohit Agrawal <moagrawa>
Status: CLOSED ERRATA QA Contact: Vijay Avuthu <vavuthu>
Severity: low Docs Contact:
Priority: high    
Version: rhgs-3.1    CC: dario.vieli, dgallowa, moagrawa, pladd, rhinduja, rhs-bugs, sheggodu
Target Milestone: ---    Keywords: ZStream
Target Release: RHGS 3.4.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: rebase
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1388509 1500658 1500660 1500662 (view as bug list) Environment:
Last Closed: 2018-09-04 06:27:31 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Bug Depends On: 1331340    
Bug Blocks: 1388509, 1452915, 1500658, 1500660, 1500662, 1503134    

Description nchilaka 2016-05-06 07:51:24 UTC
Description of problem:
When we issue the below commands on an AFR volume to see heal information:
gluster v heal <vname> info heal-failed
gluster v heal <vname> info healed
they should show the correct information.
Instead, both outputs are misleading, as below:
"Gathering list of heal failed entries on volume olia has been unsuccessful on bricks that are down. Please check if all brick processes are running."
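A minimal reproduction sketch (the volume name "olia" is taken from the error text above; all brick processes are assumed to be up, per comment 2):

# gluster volume status olia                  <- confirm every brick shows Online: Y
# gluster volume heal olia info heal-failed
# gluster volume heal olia info healed

Both commands print the "unsuccessful on bricks that are down" message even though no brick is actually down.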

This could be a case of regression due to 1294612 - Self heal command gives error "Launching heal operation to perform index self heal on volume vol0 has been unsuccessful"

However, given that the healed and heal-failed commands could be deprecated soon, the above-mentioned bug was moved to verified.

However, I am raising this one for tracking purposes.

Version-Release number of selected component (if applicable):
[root@dhcp35-191 ~]# rpm -qa|grep gluster

There is already an upstream bug, 1331340 - heal-info heal-failed shows wrong output when all the bricks are online, but I am raising a separate one downstream, as we hit it downstream too (I could have cloned it, but for certain reasons thought of raising a new one).
You can change the status accordingly.

Comment 2 nchilaka 2016-05-06 07:53:07 UTC
Note that all bricks were online.

Comment 4 Ravishankar N 2017-05-21 04:18:09 UTC
Upstream patch by Mohit is pending reviews: https://review.gluster.org/#/c/15724/

Comment 7 Ravishankar N 2018-01-30 03:45:09 UTC
*** Bug 1538779 has been marked as a duplicate of this bug. ***

Comment 8 Vijay Avuthu 2018-02-19 06:52:24 UTC
Verified in the 3.12.2-4 build with a 4 * 3 volume.

> "info healed" and "info heal-failed" have been deprecated in 3.4, and the same has been verified

# gluster vol heal 43 info healed

volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [summary | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}]

# gluster vol heal 43 info heal-failed

volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [summary | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}]

# gluster vol help | grep -i heal
volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [summary | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}] - self-heal commands on volume specified by <VOLNAME>
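With the deprecated sub-commands now rejected, the supported ways to inspect heal state, per the help text above, are:

# gluster volume heal 43 info
# gluster volume heal 43 info summary
# gluster volume heal 43 statistics heal-count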

Changing status to Verified

Comment 9 David Galloway 2018-08-27 13:45:18 UTC

I randomly hit this bug on 08-25-2018 and it has started causing my RHV VMs to pause due to unknown storage errors.

Comment 11 errata-xmlrpc 2018-09-04 06:27:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.