Bug 1260779 - Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Mohit Agrawal
QA Contact: Prasad Desala
Docs Contact:
Depends On: 1448833
Blocks: 1417147 1255689 1368312 1375098 1375099
Reported: 2015-09-07 13:40 EDT by Shruti Sampat
Modified: 2017-09-21 00:53 EDT
CC: 11 users

See Also:
Fixed In Version: glusterfs-3.8.4-19
Doc Type: Bug Fix
Doc Text:
The 'getfattr -n replica.split-brain-status <path-to-dir>' command now shows accurate split brain status.
Story Points: ---
Clone Of:
: 1368312 1375098 1375099
Environment:
Last Closed: 2017-09-21 00:25:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shruti Sampat 2015-09-07 13:40:22 EDT
Description of problem:
-----------------------

In a distribute-replicate volume, the `replica.split-brain-status' attribute of a directory in metadata split-brain reports that the file is not in split-brain.

For example, the output of `gluster volume heal <vol-name> info' reports that a directory is in split-brain, but `replica.split-brain-status' for the same directory reports that the file is not in split-brain -

On the server -

# gluster v heal 2-test info
Brick server1:/rhs/brick1/b1/
/dir - Is in split-brain

Number of entries: 1

Brick server2:/rhs/brick1/b1/
/dir - Is in split-brain

Number of entries: 1

Brick server3:/rhs/brick1/b1/
Number of entries: 0

Brick server4:/rhs/brick1/b1/
Number of entries: 0

On the client -

# getfattr -n replica.split-brain-status dir
# file: dir
replica.split-brain-status="The file is not under data or metadata split-brain"

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.1-14.el7rhgs.x86_64

How reproducible:
------------------
100%

Steps to Reproduce:
-------------------
1. Create a directory from a FUSE client on a distribute-replicate volume.
2. Kill one brick of one of the replica sets in the volume and modify the permissions of the directory.
3. Start the volume with the force option.
4. Kill the other brick in the same replica set and modify the permissions of the directory again.
5. Start the volume with the force option. Compare the output of `gluster volume heal <vol-name> info' on the server with the output of `getfattr -n replica.split-brain-status <path-to-dir>' on the client.

Actual results:
---------------
`getfattr -n replica.split-brain-status <path-to-dir>' reports that the file is not in split-brain even though it is in split-brain.

Expected results:
-----------------
The value of `replica.split-brain-status' attribute should read that the file is in metadata split-brain.
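Checking for split-brain from a script means scraping the heal-info output. A minimal scraper sketch (not part of Gluster; the function name is illustrative and it is written against the output format quoted in this report):

```python
def split_brain_entries(heal_info):
    """Collect the paths flagged 'Is in split-brain' in the output of
    `gluster volume heal <vol-name> info' (format as shown above)."""
    entries = []
    for line in heal_info.splitlines():
        line = line.strip()
        # Entry lines look like "/dir - Is in split-brain".
        if line.endswith("- Is in split-brain"):
            entries.append(line.rsplit(" - ", 1)[0])
    return entries
```

Run against the heal-info output above, this returns `['/dir', '/dir']`: one entry per brick that reports the directory.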
Comment 1 Anjana Suparna Sriram 2015-09-18 05:53:18 EDT
Hi Pranith,

Could you please review the edited doc text and sign off so that it can be included in the Known Issues chapter?

Regards,
Anjana
Comment 2 Pranith Kumar K 2015-09-18 06:27:07 EDT
hi Anjana,
     This feature is Anuradha's baby. I changed Needinfo to Anuradha.

Pranith
Comment 6 Atin Mukherjee 2017-03-24 04:37:45 EDT
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/101282/
Comment 10 Prasad Desala 2017-06-12 01:56:07 EDT
Verified this BZ on glusterfs version 3.8.4-27.el7rhgs.x86_64. Followed the same steps as in the description; the `replica.split-brain-status' attribute of the directory in metadata split-brain now reports that the file is in split-brain.

Hence, moving this BZ to Verified.

Console outputs:
================
[root@dhcp43-49 ~]# gluster v heal distrep info 
Brick 10.70.43.49:/bricks/brick0/b0
/bug_1 - Is in split-brain

Status: Connected
Number of entries: 1

Brick 10.70.43.41:/bricks/brick0/b0
/bug_1 - Is in split-brain

Status: Connected
Number of entries: 1

Brick 10.70.43.35:/bricks/brick0/b0
Status: Connected
Number of entries: 0

Brick 10.70.43.37:/bricks/brick0/b0
Status: Connected
Number of entries: 0

Brick 10.70.43.31:/bricks/brick0/b0
Status: Connected
Number of entries: 0

Brick 10.70.43.49:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.41:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.35:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.37:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.31:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.49:/bricks/brick2/b2
/bug_1 - Is in split-brain

Status: Connected
Number of entries: 1

Brick 10.70.43.41:/bricks/brick2/b2
/bug_1 - Is in split-brain

Status: Connected
Number of entries: 1

Brick 10.70.43.35:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.43.37:/bricks/brick2/b2
Status: Connected
Number of entries: 0

[root@dhcp41-254 fuse]# getfattr -n replica.split-brain-status bug_1/
# file: bug_1/
replica.split-brain-status="data-split-brain:no    metadata-split-brain:yes   Choices:distrep-client-10,distrep-client-11,distrep-client-0,distrep-client-1"
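The fixed attribute value packs three whitespace-separated fields. A small parser sketch (the function name is illustrative; the two input formats are the attribute values quoted in this report):

```python
def parse_split_brain_status(value):
    """Parse a replica.split-brain-status xattr value into a dict.

    Handles both the "not under ... split-brain" message and the
    "data-split-brain:... metadata-split-brain:... Choices:..." form.
    """
    if "not under data or metadata split-brain" in value:
        return {"data": False, "metadata": False, "choices": []}
    # Each field is "name:value"; Choices carries a comma-separated list.
    fields = dict(token.split(":", 1) for token in value.split())
    choices = fields.get("Choices", "")
    return {
        "data": fields.get("data-split-brain") == "yes",
        "metadata": fields.get("metadata-split-brain") == "yes",
        "choices": choices.split(",") if choices else [],
    }
```

For the value shown above this yields metadata split-brain with four resolution choices; for the pre-fix message it yields no split-brain at all.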
Comment 13 errata-xmlrpc 2017-09-21 00:25:52 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
