Bug 1500662 - gluster volume heal info "healed" and "heal-failed" showing wrong information
Summary: gluster volume heal info "healed" and "heal-failed" showing wrong information
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.12
Hardware: Unspecified
OS: Unspecified
Importance: high low
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Duplicates: 1500658 1500660
Depends On: 1331340 1333705
Blocks: 1388509 1452915 1500658 1500660
 
Reported: 2017-10-11 09:55 UTC by Mohit Agrawal
Modified: 2017-12-14 10:26 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.12.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1333705
Environment:
Last Closed: 2017-10-13 12:47:53 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Comment 1 Worker Ant 2017-10-11 10:03:04 UTC
REVIEW: https://review.gluster.org/18487 (cli/afr: gluster volume heal info "healed" command output is not appropriate) posted (#1) for review on release-3.12 by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 2 Worker Ant 2017-10-12 05:37:47 UTC
COMMIT: https://review.gluster.org/18487 committed in release-3.12 by jiffin tony Thottan (jthottan@redhat.com) 
------
commit c49fcf570439e47a5e1224436bbaf3f8dd580105
Author: Mohit Agrawal <moagrawa@redhat.com>
Date:   Tue Oct 25 19:57:02 2016 +0530

    cli/afr: gluster volume heal info "healed" command output is not appropriate
    
    Problem: The "gluster volume heal info [healed] [heal-failed]" command
             output on the terminal is not appropriate when any brick of
             the volume is down.
    
    Solution: To make the message more appropriate, change the condition
              in the function "gd_syncop_mgmt_brick_op".
    
    Test : To verify the fix, the following procedure was used:
           1) Create a 2x3 distributed-replicate volume
           2) Set the self-heal daemon off
           3) Kill two bricks (3 and 6)
           4) Create some files on the mount point
           5) Bring bricks 3 and 6 back up
           6) Kill the other two bricks (2 and 4)
           7) Turn the self-heal daemon on
           8) Run "gluster v heal <vol-name>"
    
    Note: After applying the patch, the options (healed | heal-failed) are
          deprecated from the command line.
    
    > BUG: 1388509
    > Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
    > Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    > (Cherry picked from commit d1f15cdeb609a1b720a04a502f7a63b2d3922f41)
    
    BUG: 1500662
    Change-Id: I229c320c9caeb2525c76b78b44a53a64b088545a
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
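The reproduction steps from the commit message above can be sketched as shell commands. This is a minimal sketch: the hostnames (server1, server2), brick paths, volume name, and mount point are placeholders introduced here for illustration, not values from the original report, and it assumes a two-node trusted pool where glusterd is already running.

```shell
VOL=testvol

# 1) Create a 2x3 distributed-replicate volume (6 bricks, replica 3)
gluster volume create $VOL replica 3 \
    server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 \
    server2:/bricks/b4 server2:/bricks/b5 server2:/bricks/b6
gluster volume start $VOL

# 2) Set the self-heal daemon off
gluster volume set $VOL self-heal-daemon off

# 3) Kill two bricks (3 and 6): look up their PIDs in the status
#    output, then kill those brick processes on the respective nodes
gluster volume status $VOL

# 4) Create some files on the mount point
mount -t glusterfs server1:/$VOL /mnt/gluster
touch /mnt/gluster/file{1..10}

# 5) Bring bricks 3 and 6 back up
gluster volume start $VOL force

# 6) Kill the other two bricks (2 and 4), same way as step 3

# 7) Turn the self-heal daemon on
gluster volume set $VOL self-heal-daemon on

# 8) Trigger a heal, then inspect the heal-info output that the
#    patch corrects for the down-brick case
gluster volume heal $VOL
gluster volume heal $VOL info
```

Before the fix, running the (now deprecated) "healed"/"heal-failed" variants of the last command with bricks down produced misleading terminal output; the patch adjusts the condition in gd_syncop_mgmt_brick_op so the message is appropriate.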

Comment 3 Jiffin 2017-10-13 12:47:53 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.12.2, please open a new bug report.

glusterfs-3.12.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-October/032684.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 4 Mohit Agrawal 2017-10-17 06:38:00 UTC
*** Bug 1500658 has been marked as a duplicate of this bug. ***

Comment 5 Mohit Agrawal 2017-10-17 06:39:06 UTC
*** Bug 1500660 has been marked as a duplicate of this bug. ***

