Bug 1030415 - AFR : "volume heal <volume_name> info split-brain" not reporting all the hardlinks of the files which are in split-brain state
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Severity: medium
Assigned To: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
Blocks: 819514
Reported: 2013-11-14 06:51 EST by spandura
Modified: 2016-09-17 08:13 EDT
Doc Type: Bug Fix
Last Closed: 2015-03-18 01:16:51 EDT
Type: Bug

Description spandura 2013-11-14 06:51:19 EST
Description of problem:
=========================
If a file is in a split-brain state and has hard links, the hard links are not reported in the output of the "volume heal <volume_name> info split-brain" command.

This information is necessary for resolving the split-brain on the bricks: all hard links to a file must be deleted on the bricks in order to completely resolve the file's split-brain state.
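The reason every link matters follows from plain POSIX hard-link semantics, not anything gluster-specific: all names share one inode, so removing only some of them leaves the split-brained data reachable. A minimal sketch (file names are illustrative):

```python
import os
import tempfile

# Create a file and two hard links to it, mirroring file1 and its links.
d = tempfile.mkdtemp()
f1 = os.path.join(d, "file1")
with open(f1, "w") as fh:
    fh.write("data")
os.link(f1, os.path.join(d, "hardlink1"))
os.link(f1, os.path.join(d, "hardlink2"))

# All three names share one inode; the link count is 3.
print(os.stat(f1).st_nlink)  # 3

# Removing only one name leaves the inode (and hence the
# split-brained data) reachable through the remaining names.
os.unlink(f1)
print(os.stat(os.path.join(d, "hardlink1")).st_nlink)  # 2
```

This is why a heal report that lists only one of the names is not enough to clean up the split-brain by hand.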

Version-Release number of selected component (if applicable):
================================================================
glusterfs 3.4.0.35.1u2rhs built on Oct 21 2013 14:00:58

How reproducible:
=================
Often

Steps to Reproduce:
=====================
1. Create a 1 x 3 replicate volume. Set "nfs.disable" to "on" and "self-heal-daemon" to "off". Start the volume.

2. Create 3 fuse mount points. Create a file "file1" from a mount point. Create 2 hard links to the file.

3. Bring down brick1 and brick2. 

4. From mount1, edit file "file1".

5. Bring down brick3. Bring back brick2.

6. From mount2, edit file "file1".

7. Bring down brick2. Bring back brick1.

8. From mount3, edit file "file1".

9. Set the "self-heal-daemon" volume option to "on".

10. Bring back brick2 and brick3. (The file is now in a data split-brain state.)

11. Execute: "gluster volume heal <volume_name> info split-brain"
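The setup and final steps above correspond roughly to the following gluster CLI commands (a sketch against a live cluster; the volume name vol_rep and the host/brick paths are taken from the output below, the mount point and link names are illustrative):

```shell
# Step 1: create a 1x3 replica volume, disable NFS and the self-heal daemon
gluster volume create vol_rep replica 3 \
    rhs-client11:/rhs/bricks/b1 \
    rhs-client12:/rhs/bricks/b1-rep1 \
    rhs-client13:/rhs/bricks/b1-rep2
gluster volume set vol_rep nfs.disable on
gluster volume set vol_rep self-heal-daemon off
gluster volume start vol_rep

# Step 2: on a fuse mount, create the file and two hard links to it
mount -t glusterfs rhs-client11:/vol_rep /mnt/mount1
touch /mnt/mount1/file1
ln /mnt/mount1/file1 /mnt/mount1/hardlink1
ln /mnt/mount1/file1 /mnt/mount1/hardlink2

# Steps 9-11: after cycling the bricks as in steps 3-8
gluster volume set vol_rep self-heal-daemon on
gluster volume heal vol_rep info split-brain
```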

Actual results:
==================
root@rhs-client11 [Nov-14-2013-11:49:25] >gluster v heal vol_rep info split-brain
Gathering list of split brain entries on volume vol_rep has been successful 

Brick rhs-client11:/rhs/bricks/b1
Number of entries: 4
at                    path on brick
-----------------------------------
2013-11-14 11:24:54 /file1
2013-11-14 11:24:54 /file1
2013-11-14 11:34:54 /file1
2013-11-14 11:44:54 /file1

Brick rhs-client12:/rhs/bricks/b1-rep1
Number of entries: 5
at                    path on brick
-----------------------------------
2013-11-14 11:24:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:52 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:34:49 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:44:50 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>

Brick rhs-client13:/rhs/bricks/b1-rep2
Number of entries: 4
at                    path on brick
-----------------------------------
2013-11-14 11:24:51 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:24:54 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:34:48 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>
2013-11-14 11:44:48 <gfid:20c8c555-1893-4366-8d84-8084a6d9f2dd>

Expected results:
====================
Hard links should also be reported in the "volume heal <volume_name> info split-brain" command output.
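Until the command reports them, the extra links have to be located on each brick by inode, which is what `find <brick> -samefile <file>` does. A minimal sketch of that search (the helper name find_hardlinks is illustrative; on a real brick you would point it at the brick root, e.g. /rhs/bricks/b1):

```python
import os

def find_hardlinks(root, target):
    """Return every path under root that shares target's inode and
    device, i.e. every hard link to the same file. Equivalent to
    `find root -samefile target`."""
    st = os.stat(target)
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        # On a real brick, skip gluster's internal .glusterfs directory,
        # which holds its own gfid hard links to every file.
        dirnames[:] = [n for n in dirnames if n != ".glusterfs"]
        for name in filenames:
            p = os.path.join(dirpath, name)
            s = os.lstat(p)
            if s.st_ino == st.st_ino and s.st_dev == st.st_dev:
                matches.append(p)
    return matches
```

Running this for each file listed in the heal output would recover the names the command currently omits.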
