Description of problem:
-----------------------
In a distribute-replicate volume, the `replica.split-brain-status' attribute of a directory in metadata split-brain reports that the file is not in split-brain. For example, `gluster volume heal info' reports that a directory is in split-brain, but `replica.split-brain-status' reports that it is not:

On the server -

# gluster v heal 2-test info
Brick server1:/rhs/brick1/b1/
/dir - Is in split-brain
Number of entries: 1

Brick server2:/rhs/brick1/b1/
/dir - Is in split-brain
Number of entries: 1

Brick server3:/rhs/brick1/b1/
Number of entries: 0

Brick server4:/rhs/brick1/b1/
Number of entries: 0

On the client -

# getfattr -n replica.split-brain-status dir
# file: dir
replica.split-brain-status="The file is not under data or metadata split-brain"

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.7.1-14.el7rhgs.x86_64

How reproducible:
-----------------
100%

Steps to Reproduce:
-------------------
1. Create a directory using a FUSE client in a distribute-replicate volume.
2. Kill one brick of one of the replica sets in the volume and modify the permissions of the directory.
3. Start the volume with the force option.
4. Kill the other brick in the same replica set and modify the permissions of the directory again.
5. Start the volume with the force option.
6. Examine the output of `gluster volume heal <vol-name> info' on the server and of `getfattr -n replica.split-brain-status <path-to-dir>' on the client.

Actual results:
---------------
`getfattr -n replica.split-brain-status <path-to-dir>' reports that the file is not in split-brain even though it is.

Expected results:
-----------------
The value of the `replica.split-brain-status' attribute should report that the file is in metadata split-brain.
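The reproduction steps above can be sketched as a console session. This is only an illustrative sketch: the volume name "testvol", the mount point /mnt/testvol, the chosen permission modes, and the brick-process PID placeholders are assumptions, not taken from a real setup, and the commands require a live Gluster cluster.

```
# Step 1: create a directory through the FUSE mount
mkdir /mnt/testvol/dir

# Step 2: kill one brick of a replica set, then change the dir's permissions
kill -9 <pid-of-first-brick-process>
chmod 757 /mnt/testvol/dir

# Step 3: restart the killed brick
gluster volume start testvol force

# Step 4: kill the other brick of the same replica set, change permissions again
kill -9 <pid-of-second-brick-process>
chmod 747 /mnt/testvol/dir

# Step 5: restart again
gluster volume start testvol force

# Step 6: compare the server-side and client-side views
gluster volume heal testvol info                          # on the server
getfattr -n replica.split-brain-status /mnt/testvol/dir   # on the client
```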
Hi Pranith,

Could you please review the edited doc text and sign off so that it can be included in the Known Issues chapter?

Regards,
Anjana
Hi Anjana,

This feature is Anuradha's baby; I have changed the needinfo to Anuradha.

Pranith
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/101282/
Verified this BZ on glusterfs version 3.8.4-27.el7rhgs.x86_64. Following the same steps as in the description, the `replica.split-brain-status' attribute of the directory in metadata split-brain now reports that the file is in split-brain. Hence, moving this BZ to Verified.

Console outputs:
================

[root@dhcp43-49 ~]# gluster v heal distrep info
Brick 10.70.43.49:/bricks/brick0/b0
/bug_1 - Is in split-brain
Status: Connected
Number of entries: 1

Brick 10.70.43.41:/bricks/brick0/b0
/bug_1 - Is in split-brain
Status: Connected
Number of entries: 1

Brick 10.70.43.35:/bricks/brick0/b0
Status: Connected
Number of entries: 0

Brick 10.70.43.37:/bricks/brick0/b0
Status: Connected
Number of entries: 0

Brick 10.70.43.31:/bricks/brick0/b0
Status: Connected
Number of entries: 0

Brick 10.70.43.49:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.41:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.35:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.37:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.31:/bricks/brick1/b1
Status: Connected
Number of entries: 0

Brick 10.70.43.49:/bricks/brick2/b2
/bug_1 - Is in split-brain
Status: Connected
Number of entries: 1

Brick 10.70.43.41:/bricks/brick2/b2
/bug_1 - Is in split-brain
Status: Connected
Number of entries: 1

Brick 10.70.43.35:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.43.37:/bricks/brick2/b2
Status: Connected
Number of entries: 0

[root@dhcp41-254 fuse]# getfattr -n replica.split-brain-status bug_1/
# file: bug_1/
replica.split-brain-status="data-split-brain:no metadata-split-brain:yes Choices:distrep-client-10,distrep-client-11,distrep-client-0,distrep-client-1"
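For scripting around this check, the replica.split-brain-status value returned by getfattr can be split into its fields with plain shell word-splitting. A minimal sketch; only the string format is taken from the getfattr output in this comment.

```shell
# The xattr value as returned in the verification output above
status='data-split-brain:no metadata-split-brain:yes Choices:distrep-client-10,distrep-client-11,distrep-client-0,distrep-client-1'

# Each whitespace-separated token is a key:value pair; split on the first colon
for field in $status; do
    printf '%s = %s\n' "${field%%:*}" "${field#*:}"
done
# data-split-brain = no
# metadata-split-brain = yes
# Choices = distrep-client-10,distrep-client-11,distrep-client-0,distrep-client-1
```

With the fix, a directory in metadata split-brain shows metadata-split-brain:yes, as in the verified output.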
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774