Bug 1030415 - AFR : "volume heal <volume_name> info split-brain" not reporting all the hardlinks of the files which are in split-brain state
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 819514
 
Reported: 2013-11-14 11:51 UTC by spandura
Modified: 2016-09-17 12:13 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-18 05:16:51 UTC
Embargoed:



Description spandura 2013-11-14 11:51:19 UTC
Description of problem:
=========================
If a file is in split-brain state and the file has hardlinks, the hardlinks are not reported in the output of the "volume heal <volume_name> info split-brain" command.

This information is necessary for resolving the split-brain on the bricks: all the hardlinks of a file must be deleted on the bricks in order to completely resolve that file from the split-brain state.
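
As an illustration, a rough sketch of what an administrator has to do manually today to find those hardlinks on a brick (the brick path /rhs/bricks/b1 and the file name file1 are taken from the reproduction below; this is one possible approach, not a documented procedure):

# Inode number of the split-brained file on the brick
stat -c %i /rhs/bricks/b1/file1

# Every path on the brick sharing that inode, i.e. all hardlinks
# (this also matches the internal .glusterfs gfid hardlink)
find /rhs/bricks/b1 -samefile /rhs/bricks/b1/file1

All of the reported paths have to be removed on the brick chosen as the "bad" copy before the file can heal from the good copy, which is why the heal info output should list them.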

Version-Release number of selected component (if applicable):
================================================================
glusterfs 3.4.0.35.1u2rhs built on Oct 21 2013 14:00:58

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a 1 x 3 replicate volume. Set "nfs.disable" to "on" and "self-heal-daemon" to "off". Start the volume. (A CLI sketch of these steps follows the list.)

2. Create 3 FUSE mount points. From one mount point, create a file "file1" and 2 hardlinks to it.

3. Bring down brick1 and brick2. 

4. From mount1, edit file "file1".

5. Bring down brick3. Bring back brick2. 

6. From mount2, edit file "file1".

7. Bring down brick2. Bring back brick1.

8. From mount3, edit file "file1".

9. Set the "self-heal-daemon" volume option to "on".

10. Bring back brick2 and brick3. { The file is now in data split-brain state. }

11. Execute: "gluster volume heal <volume_name> info split-brain" 
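
For reference, a rough CLI sketch of the steps above (the host names, brick paths and volume name vol_rep are taken from the output below; killing the brick PIDs reported by "gluster volume status" and restarting them with "gluster volume start ... force" is one way to take individual bricks down and bring them back, not necessarily the exact method used):

# Step 1: create and configure the 1 x 3 replicate volume
gluster volume create vol_rep replica 3 rhs-client11:/rhs/bricks/b1 \
        rhs-client12:/rhs/bricks/b1-rep1 rhs-client13:/rhs/bricks/b1-rep2
gluster volume set vol_rep nfs.disable on
gluster volume set vol_rep cluster.self-heal-daemon off
gluster volume start vol_rep

# Step 2: three FUSE mounts, one file, two hardlinks
mount -t glusterfs rhs-client11:/vol_rep /mnt/m1    # likewise /mnt/m2 and /mnt/m3
echo data > /mnt/m1/file1
ln /mnt/m1/file1 /mnt/m1/hlink1
ln /mnt/m1/file1 /mnt/m1/hlink2

# Steps 3-8: with different bricks down each time, write to file1 from a
# different mount so every brick ends up with a conflicting copy
gluster volume status vol_rep          # note brick PIDs; kill/restart bricks per steps 3-8
echo change1 >> /mnt/m1/file1          # brick1 and brick2 down
echo change2 >> /mnt/m2/file1          # brick3 down, brick2 back
echo change3 >> /mnt/m3/file1          # brick2 down, brick1 back

# Steps 9-11: re-enable the self-heal daemon, bring all bricks back,
# then query the split-brain report
gluster volume set vol_rep cluster.self-heal-daemon on
gluster volume start vol_rep force
gluster volume heal vol_rep info split-brain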

Actual results:
==================
root@rhs-client11 [Nov-14-2013-11:49:25] >gluster v heal vol_rep info split-brain
Gathering list of split brain entries on volume vol_rep has been successful 

Brick rhs-client11:/rhs/bricks/b1
Number of entries: 4
at                    path on brick
-----------------------------------
2013-11-14 11:24:54 /file1
2013-11-14 11:24:54 /file1
2013-11-14 11:34:54 /file1
2013-11-14 11:44:54 /file1

Brick rhs-client12:/rhs/bricks/b1-rep1
Number of entries: 5
at                    path on brick
-----------------------------------
2013-11-14 11:24:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:52 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:34:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:44:50 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>

Brick rhs-client13:/rhs/bricks/b1-rep2
Number of entries: 4
at                    path on brick
-----------------------------------
2013-11-14 11:24:51 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:54 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:34:48 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:44:48 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>

Expected results:
====================
Hardlinks should also be reported in the split-brain command output.

