Bug 1030415

Summary: AFR : "volume heal <volume_name> info split-brain" not reporting all the hardlinks of the files which are in split-brain state
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: spandura
Component: replicate
Assignee: Ravishankar N <ravishankar>
Status: CLOSED WONTFIX
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Priority: unspecified
Version: 2.1
CC: pkarampu, ravishankar, rhs-bugs, storage-qa-internal, vbellur
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-03-18 05:16:51 UTC
Bug Blocks: 819514

Description spandura 2013-11-14 11:51:19 UTC
Description of problem:
=========================
If a file in split-brain state has hardlinks, the hardlinks are not reported in the output of the "volume heal <volume_name> info split-brain" command.

This information is necessary for resolving the split-brain on the bricks: every hardlink of the file must be deleted from the affected brick to completely resolve the file from the split-brain state.
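
As an illustration of why the hardlink listing matters (brick path and file name below are taken from the reproduction steps; the inode-based deletion is one possible way to do it, assuming GNU stat/find on the brick node), removing every hardlink of the bad copy on one brick, including the gfid hardlink under .glusterfs, could look like:

inum=$(stat -c %i /rhs/bricks/b1/file1)     # inode number of the bad copy
find /rhs/bricks/b1 -inum "$inum"           # lists file1, its hardlinks and
                                            # the .glusterfs gfid hardlink
find /rhs/bricks/b1 -inum "$inum" -delete   # remove all of them

Without the hardlinks being reported, a user resolving the split-brain by path alone would miss these links.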

Version-Release number of selected component (if applicable):
================================================================
glusterfs 3.4.0.35.1u2rhs built on Oct 21 2013 14:00:58

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a 1 x 3 replicate volume. Set "nfs.disable" to "on" and "self-heal-daemon" to "off". Start the volume. (A command sketch for the sequence follows step 11.)

2. Create 3 fuse mount points. From one mount point, create a file "file1" and 2 hardlinks to it.

3. Bring down brick1 and brick2. 

4. From mount1, edit "file1".

5. Bring down brick3. Bring back brick2. 

6. From mount2, edit "file1".

7. Bring down brick2. Bring back brick1.

8. From mount3, edit "file1".

9. set "self-heal-daemon" volume option to "on" 

10. Bring back brick2 and brick3. (The file is now in data split-brain state.)

11. Execute: "gluster volume heal <volume_name> info split-brain" 
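
For reference, a sketch of the commands for steps 1 and 2 (host names, brick paths and mount points are illustrative, chosen to match the output below):

gluster volume create vol_rep replica 3 rhs-client11:/rhs/bricks/b1 \
    rhs-client12:/rhs/bricks/b1-rep1 rhs-client13:/rhs/bricks/b1-rep2
gluster volume set vol_rep nfs.disable on
gluster volume set vol_rep cluster.self-heal-daemon off
gluster volume start vol_rep

mount -t glusterfs rhs-client11:/vol_rep /mnt/m1    # three fuse mounts
mount -t glusterfs rhs-client11:/vol_rep /mnt/m2
mount -t glusterfs rhs-client11:/vol_rep /mnt/m3

echo data > /mnt/m1/file1          # the file that goes into split-brain
ln /mnt/m1/file1 /mnt/m1/link1     # two hardlinks to it
ln /mnt/m1/file1 /mnt/m1/link2

In steps 3-10, a brick is typically brought down by killing its glusterfsd process (PID from "gluster volume status vol_rep") and brought back with "gluster volume start vol_rep force"; since that restarts every downed brick, any brick that must stay down is killed again. The edits in steps 4, 6 and 8 can be simple appends, e.g. "echo mount1 >> /mnt/m1/file1".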

Actual results:
==================
root@rhs-client11 [Nov-14-2013-11:49:25] >gluster v heal vol_rep info split-brain
Gathering list of split brain entries on volume vol_rep has been successful 

Brick rhs-client11:/rhs/bricks/b1
Number of entries: 4
at                    path on brick
-----------------------------------
2013-11-14 11:24:54 /file1
2013-11-14 11:24:54 /file1
2013-11-14 11:34:54 /file1
2013-11-14 11:44:54 /file1

Brick rhs-client12:/rhs/bricks/b1-rep1
Number of entries: 5
at                    path on brick
-----------------------------------
2013-11-14 11:24:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:52 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:34:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:44:50 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>

Brick rhs-client13:/rhs/bricks/b1-rep2
Number of entries: 4
at                    path on brick
-----------------------------------
2013-11-14 11:24:51 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:54 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:34:48 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:44:48 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>

Expected results:
====================
Hardlinks should also be reported in the split-brain command output.