Bug 1368312 - Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
Summary: Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On: 1260779 1375098 1375099
Blocks: 1255689
Reported: 2016-08-19 04:37 UTC by Mohit Agrawal
Modified: 2017-03-06 17:22 UTC
CC: 10 users

Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1260779
Environment:
Last Closed: 2017-03-06 17:22:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Mohit Agrawal 2016-08-19 04:37:23 UTC
+++ This bug was initially created as a clone of Bug #1260779 +++

Description of problem:
-----------------------

In a distribute-replicate volume, the `replica.split-brain-status' attribute of a directory in metadata split-brain reports that it is not in split-brain.

For example, the output of `gluster volume heal info' reports that a directory is in split-brain, but `replica.split-brain-status' reports that it is not -

On the server -

# gluster v heal 2-test info
Brick server1:/rhs/brick1/b1/
/dir - Is in split-brain

Number of entries: 1

Brick server2:/rhs/brick1/b1/
/dir - Is in split-brain

Number of entries: 1

Brick server3:/rhs/brick1/b1/
Number of entries: 0

Brick server4:/rhs/brick1/b1/
Number of entries: 0

On the client -

# getfattr -n replica.split-brain-status dir
# file: dir
replica.split-brain-status="The file is not under data or metadata split-brain"

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.1-14.el7rhgs.x86_64

How reproducible:
------------------
100%

Steps to Reproduce:
-------------------
1. Create a directory using a fuse client in a distribute-replicate volume.
2. Kill one brick of one of the replica sets in the volume and modify the permissions of the directory.
3. Start volume with force option.
4. Kill the other brick in the same replica set and modify permissions of the directory again.
5. Start volume with force option. Examine the output of `gluster volume heal <vol-name> info' command on the server and the output of `getfattr -n replica.split-brain-status <path-to-dir>' on the client.
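
A rough shell sketch of these steps, for convenience; the volume name (2-test, taken from the report above), the mount point, the chmod modes, and the PID placeholders are illustrative, and disabling the self-heal daemon (as the test plan in the fix below also does) keeps heals from racing the steps:

# mount -t glusterfs server1:/2-test /mnt/2-test
# mkdir /mnt/2-test/dir
# gluster volume set 2-test cluster.self-heal-daemon off
# gluster volume status 2-test        (note the brick PIDs)
# kill -9 <pid-of-one-brick-in-a-replica-set>
# chmod 757 /mnt/2-test/dir
# gluster volume start 2-test force
# kill -9 <pid-of-the-other-brick-in-that-replica-set>
# chmod 747 /mnt/2-test/dir
# gluster volume start 2-test force
# gluster volume heal 2-test info
# getfattr -n replica.split-brain-status /mnt/2-test/dir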

Actual results:
---------------
`getfattr -n replica.split-brain-status <path-to-dir>' reports that the file is not in split-brain even though it is in split-brain.

Expected results:
-----------------
The value of `replica.split-brain-status' attribute should read that the file is in metadata split-brain.
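
For reference, when AFR does flag an entry through this attribute, the value is expected to carry the split-brain type and the readable-copy choices, roughly along these lines (client names are illustrative):

# getfattr -n replica.split-brain-status dir
# file: dir
replica.split-brain-status="data-split-brain:no    metadata-split-brain:yes    Choices:2-test-client-0,2-test-client-1"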

--- Additional comment from Anjana Suparna Sriram on 2015-09-18 05:53:18 EDT ---

Hi Pranith,

Could you please review the edited doc text and sign off to be included in the Known Issues chapter.

Regards,
Anjana

--- Additional comment from Pranith Kumar K on 2015-09-18 06:27:07 EDT ---

hi Anjana,
     This feature is Anuradha's baby. I changed Needinfo to Anuradha.

Pranith

Comment 1 Vijay Bellur 2016-08-19 05:42:54 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 2 Worker Ant 2016-08-22 06:23:36 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 3 Worker Ant 2016-08-23 06:31:23 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 4 Worker Ant 2016-08-23 15:33:57 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 5 Worker Ant 2016-08-24 05:49:02 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 6 Worker Ant 2016-08-24 06:06:25 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#6) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 7 Worker Ant 2016-08-25 03:21:24 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#7) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 8 Worker Ant 2016-08-25 05:46:05 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#8) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 9 Worker Ant 2016-09-08 10:37:09 UTC
REVIEW: http://review.gluster.org/15201 (dht: "replica.split-brain-status" attribute value is not correct) posted (#9) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 10 Worker Ant 2016-09-12 07:38:33 UTC
COMMIT: http://review.gluster.org/15201 committed in master by Raghavendra G (rgowdapp) 
------
commit c4e9ec653c946002ab6d4c71ee8e6df056438a04
Author: Mohit Agrawal <moagrawa>
Date:   Fri Aug 19 10:33:50 2016 +0530

    dht: "replica.split-brain-status" attribute value is not correct
    
    Problem: In a distributed-replicate volume, the value of the
             "replica.split-brain-status" attribute does not report the
             split-brain condition even though the directory is in
             split-brain. If the directory is in split-brain on multiple
             replica pairs, it does not show the full list of replica
             pairs.
    
    Solution: Update the dht_aggregate code to aggregate the xattr
              value in this specific condition.
    
    Fix:      1) The function getChoices returns the choices from the
                 split-brain status string.
              2) The function add_opt appends the choices to a local
                 buffer that is stored in the dictionary.
              3) For the key "replica.split-brain-status", dht_aggregate
                 calls dht_aggregate_split_brain_xattr to prepare the
                 list.
    
    Test:     To verify the patch, follow these steps:
              1) Create a distributed-replicate volume and a mount point.
              2) Stop the self-heal daemon.
              3) Create files and directories on the mount point:
                 mkdir test{1..5}; touch tmp{1..5}
              4) Kill the brick process on one node of a replica set:
                 pkill -9 glusterfsd
              5) Change permissions of the directories on the mount point:
                 chmod 755 test{1..5}
              6) Restart the brick process with the force option.
              7) Kill the brick process on the other node in the same
                 replica set.
              8) Change permissions of the directories again:
                 chmod 766 test{1..5}
              9) Repeat steps 4-8 on the other replica set as well.
              10) Heal status on the server now shows that the directories
                  are in split-brain on all replica sets.
              11) Without the patch, the replica.split-brain-status
                  attribute on the mount point shows the wrong split-brain
                  status.
              12) With the patch, the attribute shows the correct value.
    
    BUG: 1368312
    Change-Id: Icdfd72005a4aa82337c342762775a3d1761bbe4a
    Signed-off-by: Mohit Agrawal <moagrawa>
    Reviewed-on: http://review.gluster.org/15201
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
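
For context, the intended effect of the aggregation described above is that a directory in split-brain on more than one replica pair reports the choices from every affected pair in a single attribute value. A hedged sketch of the expected shape only (volume and client names are illustrative, and the exact aggregated formatting may differ):

# getfattr -n replica.split-brain-status /mnt/2-test/test1
# file: test1
replica.split-brain-status="data-split-brain:no    metadata-split-brain:yes    Choices:2-test-client-0,2-test-client-1,2-test-client-2,2-test-client-3"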

Comment 11 Shyamsundar 2017-03-06 17:22:51 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

