Bug 1500662

Summary: gluster volume heal info "healed" and "heal-failed" showing wrong information
Product: [Community] GlusterFS
Component: replicate
Version: 3.12
Status: CLOSED CURRENTRELEASE
Severity: low
Priority: high
Reporter: Mohit Agrawal <moagrawa>
Assignee: Mohit Agrawal <moagrawa>
CC: asoman, bugs, dario.vieli, moagrawa, nchilaka, ravishankar, rhs-bugs, sheggodu, storage-qa-internal
Keywords: ZStream
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.12.2
Clone Of: 1333705
Type: Bug
Last Closed: 2017-10-13 12:47:53 UTC
Bug Depends On: 1331340, 1333705
Bug Blocks: 1388509, 1452915, 1500658, 1500660

Comment 1 Worker Ant 2017-10-11 10:03:04 UTC
REVIEW: https://review.gluster.org/18487 (cli/afr: gluster volume heal info "healed" command output is not appropriate) posted (#1) for review on release-3.12 by Mohit Agrawal (moagrawa)

Comment 2 Worker Ant 2017-10-12 05:37:47 UTC
COMMIT: https://review.gluster.org/18487 committed in release-3.12 by Jiffin Tony Thottan (jthottan)
------
commit c49fcf570439e47a5e1224436bbaf3f8dd580105
Author: Mohit Agrawal <moagrawa>
Date:   Tue Oct 25 19:57:02 2016 +0530

    cli/afr: gluster volume heal info "healed" command output is not appropriate
    
    Problem: The output of "gluster volume heal info [healed] [heal-failed]"
              on the terminal is not appropriate when any brick of the
              volume is down.
    
    Solution: To make the message more appropriate, change the condition
              in the function "gd_syncop_mgmt_brick_op".
    
    Test : To verify the fix, the following procedure was used:
           1) Create a 2x3 distributed-replicated volume
           2) Set the self-heal daemon off
           3) Kill two bricks (3 and 6)
           4) Create some files on the mount point
           5) Bring bricks 3 and 6 back up
           6) Kill two other bricks (2 and 4)
           7) Turn the self-heal daemon on
           8) Run "gluster v heal <vol-name>"
    
    Note: After applying the patch, the (healed | heal-failed) options
          are deprecated from the command line.
    
    > BUG: 1388509
    > Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
    > Signed-off-by: Mohit Agrawal <moagrawa>
    > (cherry picked from commit d1f15cdeb609a1b720a04a502f7a63b2d3922f41)
    
    BUG: 1500662
    Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
    Signed-off-by: Mohit Agrawal <moagrawa>
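
For reference, here is a minimal shell sketch of the reproduction steps from the commit message above. The volume name, host names, brick paths, and mount point are hypothetical, and the brick numbering assumes replica sets {1,2,3} and {4,5,6}; actual brick PIDs can be looked up with "gluster volume status".

    # Create and start a 2x3 distributed-replicated volume (hypothetical hosts/paths).
    gluster volume create testvol replica 3 \
        host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3 \
        host4:/bricks/b4 host5:/bricks/b5 host6:/bricks/b6
    gluster volume start testvol

    # Disable the self-heal daemon.
    gluster volume set testvol cluster.self-heal-daemon off

    # Kill bricks 3 and 6 (one per replica set); find PIDs via "gluster volume status testvol".
    kill -9 <pid-of-brick-3> <pid-of-brick-6>

    # Create some files on the mount point.
    mkdir -p /mnt/testvol
    mount -t glusterfs host1:/testvol /mnt/testvol
    for i in $(seq 1 10); do echo data > /mnt/testvol/file$i; done

    # Bring bricks 3 and 6 back up.
    gluster volume start testvol force

    # Kill bricks 2 and 4, re-enable the self-heal daemon, and trigger a heal.
    kill -9 <pid-of-brick-2> <pid-of-brick-4>
    gluster volume set testvol cluster.self-heal-daemon on
    gluster volume heal testvol

    # The deprecated queries whose output this patch addresses:
    gluster volume heal testvol info healed
    gluster volume heal testvol info heal-failed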

Comment 3 Jiffin 2017-10-13 12:47:53 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.2, please open a new bug report.

glusterfs-3.12.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-October/032684.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 4 Mohit Agrawal 2017-10-17 06:38:00 UTC
*** Bug 1500658 has been marked as a duplicate of this bug. ***

Comment 5 Mohit Agrawal 2017-10-17 06:39:06 UTC
*** Bug 1500660 has been marked as a duplicate of this bug. ***