Bug 1375098 - Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On: 1260779 1375099
Blocks: 1255689 1368312
Reported: 2016-09-12 07:46 UTC by Mohit Agrawal
Modified: 2016-10-20 14:02 UTC (History)
CC: 9 users

Fixed In Version: glusterfs-3.8.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1260779
Environment:
Last Closed: 2016-10-20 14:02:57 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Mohit Agrawal 2016-09-12 07:46:10 UTC
+++ This bug was initially created as a clone of Bug #1260779 +++

Description of problem:
-----------------------

In a distribute-replicate volume, the `replica.split-brain-status' attribute of a directory that is in metadata split-brain reports that the directory is not in split-brain.

For example, the output of `gluster volume heal info' reports that a directory is in split-brain, but `replica.split-brain-status' on that directory reports that it is not:

On the server -

# gluster v heal 2-test info
Brick server1:/rhs/brick1/b1/
/dir - Is in split-brain

Number of entries: 1

Brick server2:/rhs/brick1/b1/
/dir - Is in split-brain

Number of entries: 1

Brick server3:/rhs/brick1/b1/
Number of entries: 0

Brick server4:/rhs/brick1/b1/
Number of entries: 0

On the client -

# getfattr -n replica.split-brain-status dir
# file: dir
replica.split-brain-status="The file is not under data or metadata split-brain"

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.1-14.el7rhgs.x86_64

How reproducible:
------------------
100%

Steps to Reproduce:
-------------------
1. Create a directory from a fuse client on a distribute-replicate volume.
2. Kill one brick of one of the replica sets in the volume and modify the permissions of the directory.
3. Start the volume with the force option.
4. Kill the other brick of the same replica set and modify the permissions of the directory again.
5. Start the volume with the force option. Compare the output of `gluster volume heal <vol-name> info' on the server with the output of `getfattr -n replica.split-brain-status <path-to-dir>' on the client.
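The steps above can be sketched as a shell session. The volume name `2-test', server names, and directory name are taken from the outputs in this report; the mount point and the way bricks are killed are illustrative placeholders, not part of the original report:

```shell
# Sketch of the reproduction steps (mount point and brick PIDs are placeholders).
mkdir /mnt/2-test/dir                    # 1. create a directory via the fuse mount

kill -9 <pid-of-brick-on-server1>        # 2. kill one brick of a replica set
chmod 700 /mnt/2-test/dir                #    and modify the directory's permissions

gluster volume start 2-test force        # 3. bring the killed brick back

kill -9 <pid-of-brick-on-server2>        # 4. kill the other brick of the same set
chmod 777 /mnt/2-test/dir                #    and modify the permissions again

gluster volume start 2-test force        # 5. bring it back, then compare:
gluster volume heal 2-test info          #    server side reports /dir in split-brain
getfattr -n replica.split-brain-status /mnt/2-test/dir
                                         #    client side wrongly says "not under
                                         #    data or metadata split-brain"
```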

Actual results:
---------------
`getfattr -n replica.split-brain-status <path-to-dir>' reports that the file is not in split-brain even though it is in split-brain.

Expected results:
-----------------
The value of `replica.split-brain-status' attribute should read that the file is in metadata split-brain.

--- Additional comment from Anjana Suparna Sriram on 2015-09-18 05:53:18 EDT ---

Hi Pranith,

Could you please review the edited doc text and sign off to be included in the Known Issues chapter.

Regards,
Anjana

--- Additional comment from Pranith Kumar K on 2015-09-18 06:27:07 EDT ---

hi Anjana,
     This feature is Anuradha's baby. I changed Needinfo to Anuradha.

Pranith

Comment 1 Worker Ant 2016-09-12 07:59:03 UTC
REVIEW: http://review.gluster.org/15467 (dht: "replica.split-brain-status" attribute value is not correct) posted (#1) for review on release-3.8 by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 2 Worker Ant 2016-09-12 08:31:04 UTC
REVIEW: http://review.gluster.org/15467 (dht: "replica.split-brain-status" attribute value is not correct) posted (#2) for review on release-3.8 by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 3 Worker Ant 2016-09-27 06:49:49 UTC
COMMIT: http://review.gluster.org/15467 committed in release-3.8 by Raghavendra G (rgowdapp@redhat.com) 
------
commit a0e38b6e0ab67941d9405d4a12d63096bdb1b7a4
Author: Mohit Agrawal <moagrawa@redhat.com>
Date:   Fri Aug 19 10:33:50 2016 +0530

    dht: "replica.split-brain-status" attribute value is not correct
    
    Problem: In a distributed-replicate volume the attribute
             "replica.split-brain-status" does not report split-brain
             even though the directory is in split-brain.
             If the directory is in split-brain on multiple replica
             pairs, the full list of replica pairs is not shown.
    
    Solution: Update the dht_aggregate code to aggregate the xattr
              value in this specific condition.
    
    Fix:      1) Function getChoices returns the choices from the
                 split-brain status string.
              2) Function add_opt adds the choices to a local buffer
                 before storing them in the dictionary.
              3) For the key "replica.split-brain-status", dht_aggregate
                 calls dht_aggregate_split_brain_xattr to prepare the list.
    
    Test:     To verify the patch, follow these steps:
              1) Create a distributed-replicate volume and a mount point.
              2) Stop the self-heal daemon.
              3) Create directories and files on the mount point:
                 mkdir test{1..5}; touch tmp{1..5}
              4) Kill the brick process on one node of a replica set:
                 pkill -9 glusterfsd
              5) Change the permissions of the directories on the
                 mount point:
                 chmod 755 test{1..5}
              6) Restart the brick process with the force option.
              7) Kill the brick process on the other node of the same
                 replica set.
              8) Change the permissions of the directories again:
                 chmod 766 test{1..5}
              9) Repeat steps 4-8 on the other replica set.
              10) Heal info on the server now shows that the directories
                  are in split-brain on all replica sets.
              11) Without the patch, the replica.split-brain-status
                  attribute on the mount point shows the wrong status.
              12) With the patch, the attribute shows the correct value.
    
    > Change-Id: Icdfd72005a4aa82337c342762775a3d1761bbe4a
    > Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    > Reviewed-on: http://review.gluster.org/15201
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > (cherry picked from commit c4e9ec653c946002ab6d4c71ee8e6df056438a04)
    
    Change-Id: I85a5ae60189066d9e80799f00f1352c2f33ef4f8
    Backport of commit c4e9ec653c946002ab6d4c71ee8e6df056438a04
    BUG: 1375098
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: http://review.gluster.org/15467
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
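The aggregation the patch describes can be mimicked in shell for illustration: when two replica pairs each report a split-brain choice list, the patched dht_aggregate concatenates the choices into one value. The status-string layout and the client names below are assumptions modeled on AFR's replica.split-brain-status output, not taken from this report:

```shell
# Illustration only: merge the choice lists that two replica pairs would
# each report for replica.split-brain-status. The string format and the
# client names are assumed for the sake of the example.
s1='metadata-split-brain:yes Choices:2-test-client-0,2-test-client-1'
s2='metadata-split-brain:yes Choices:2-test-client-2,2-test-client-3'

# Strip everything up to and including "Choices:" and join the two lists.
merged="metadata-split-brain:yes Choices:${s1#*Choices:},${s2#*Choices:}"
echo "$merged"
```

This is the user-visible effect of the fix: a directory in split-brain on multiple replica pairs lists the choices from every affected pair instead of only one.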

Comment 4 Niels de Vos 2016-10-20 14:02:57 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.5, please open a new bug report.

glusterfs-3.8.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/announce/2016-October/000061.html
[2] https://www.gluster.org/pipermail/gluster-users/

