Bug 1234054 - `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
Summary: `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1233608 1403840
Blocks: 1223636 1405126 1405130
 
Reported: 2015-06-20 16:15 UTC by Ravishankar N
Modified: 2017-03-06 17:20 UTC
CC: 3 users

Fixed In Version: glusterfs-3.10.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1233608
: 1405126
Environment:
Last Closed: 2017-03-06 17:20:01 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2015-06-20 16:15:48 UTC
+++ This bug was initially created as a clone of Bug #1233608 +++

Description of problem:
------------------------
If an attempt is made to heal a file that is in data/metadata/entry split-brain using the `gluster volume heal <vol-name> split-brain' command, the heal fails when the corresponding data/metadata/entry self-heal volume option is turned off. This should not be the case: glfsheal should not take these options into consideration.

See below, sample output of the command -

# gluster v heal rep2 split-brain source-brick 10.70.37.134:/rhs/brick6/b1/ /bar
Healing /bar failed: File not in split-brain.
Volume heal failed.

# gluster v heal rep2 split-brain bigger-file /bar                                                                                                   
Healing /bar failed: File not in split-brain.
Volume heal failed.
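
For what it is worth, `gluster volume heal <vol-name> info split-brain' can be used to cross-check whether the file is actually listed as being in split-brain before running the commands above (output not reproduced here):

# gluster v heal rep2 info split-brain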

Volume configuration -

# gluster v info rep2
 
Volume Name: rep2
Type: Replicate
Volume ID: 0bf8fb07-8b09-4be8-94e7-29f4d3d7632f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.208:/rhs/brick6/b1
Brick2: 10.70.37.134:/rhs/brick6/b1
Options Reconfigured:
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
cluster.self-heal-daemon: off
features.uss: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
glusterfs-3.7.1-3.el6rhs.x86_64

How reproducible:
------------------
100%

Steps to Reproduce:
--------------------
1. Set the following options on a 1x2 volume -
 
cluster.entry-self-heal: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
cluster.self-heal-daemon: off

2. Kill one brick of the replica set.
3. From the mount, write to an existing file or perform metadata operations such as chmod.
4. Start the volume with force.
5. Kill the other brick in the replica set.
6. Perform data/metadata operations on the same file.
7. Start the volume with force and try to heal the now split-brained file using the above-mentioned CLI (a condensed command sketch follows below).
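
For illustration, a condensed command sketch of the above steps. The volume name rep2 and the file /bar are reused from this report; the brick PIDs and the mount point /mnt/rep2 are placeholders:

# gluster volume set rep2 cluster.entry-self-heal off
# gluster volume set rep2 cluster.data-self-heal off
# gluster volume set rep2 cluster.metadata-self-heal off
# gluster volume set rep2 cluster.self-heal-daemon off
# gluster volume status rep2          (note the brick PIDs)
# kill -9 <pid-of-brick-1>
# echo version-1 >> /mnt/rep2/bar ; chmod 600 /mnt/rep2/bar     (on the client mount)
# gluster volume start rep2 force
# kill -9 <pid-of-brick-2>
# echo version-2 >> /mnt/rep2/bar ; chmod 644 /mnt/rep2/bar     (on the client mount)
# gluster volume start rep2 force
# gluster volume heal rep2 split-brain bigger-file /bar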

Actual results:
----------------
Heal fails.

Expected results:
------------------
Heal is expected to succeed.

Additional info:

--- Additional comment from Shruti Sampat on 2015-06-19 07:48:07 EDT ---

Heal also fails when trying to resolve split-brain from the client by setting extended attributes (sketched below), when the data/metadata self-heal options are turned off. With the options turned on, heal works as expected. This needs to be fixed too as part of this BZ.
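
For context, the client-side resolution referred to above uses AFR's virtual extended attributes on the mount; a rough sketch, where the client xlator name rep2-client-1 and the mount path /mnt/rep2 are illustrative placeholders:

# getfattr -n replica.split-brain-status /mnt/rep2/bar
  (shows whether the file is in split-brain and the possible source choices)
# setfattr -n replica.split-brain-choice -v rep2-client-1 /mnt/rep2/bar
  (select a brick whose copy of the file should be inspected)
# setfattr -n replica.split-brain-heal-finalize -v rep2-client-1 /mnt/rep2/bar
  (use that brick's copy as the source and trigger the heal)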

--- Additional comment from Ravishankar N on 2015-06-20 12:03:24 EDT ---

The bug in the description needs to be fixed. After the initial discussion with Shruti, I was giving some more thought to the expected behaviour for comment #1. It seems to me that if the client-side heal options are disabled via volume set, then split-brain healing from the mount (via the setfattr interface) should also honour that, i.e. it should not heal the file.

If a particular client wants to override heal options that are disabled on an entire-volume basis, it can always mount the volume with the heal options enabled as fuse mount options (--xlator-option *replicate*.data-self-heal=on, etc.).
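
A sketch of such a mount, assuming the fuse client is started directly with the glusterfs binary; the server address is taken from the volume info above, and the mount point /mnt/rep2 is a placeholder:

# glusterfs --volfile-server=10.70.37.208 --volfile-id=rep2 \
    --xlator-option=*replicate*.data-self-heal=on \
    --xlator-option=*replicate*.metadata-self-heal=on \
    --xlator-option=*replicate*.entry-self-heal=on \
    /mnt/rep2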

Comment 1 Anand Avati 2015-06-20 16:17:22 UTC
REVIEW: http://review.gluster.org/11333 (glfsheal: Explicitly enable self-heal xlator options) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Mike McCune 2016-03-28 23:24:30 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions

Comment 3 Worker Ant 2016-12-15 06:19:05 UTC
REVIEW: http://review.gluster.org/11333 (glfsheal: Explicitly enable self-heal xlator options) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 4 Worker Ant 2016-12-15 15:46:29 UTC
COMMIT: http://review.gluster.org/11333 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 209c2d447be874047cb98d86492b03fa807d1832
Author: Ravishankar N <ravishankar>
Date:   Wed Dec 14 22:48:20 2016 +0530

    glfsheal: Explicitly enable self-heal xlator options
    
    Enable data, metadata and entry self-heal as xlator-options so that glfs-heal.c
    can heal split-brain files even if they are disabled on the volume via volume
    set commands.
    
    Change-Id: Ic191a1017131db1ded94d97c932079d7bfd79457
    BUG: 1234054
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/11333
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 5 Shyamsundar 2017-03-06 17:20:01 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

