Bug 878881 - Split-brain logging is confusing
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.0
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assigned To: Anuradha
QA Contact: spandura
Docs Contact:
Depends On: 864963 871987
Blocks:
Reported: 2012-11-21 08:02 EST by Vidya Sakar
Modified: 2016-09-19 22:00 EDT
11 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 871987
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vidya Sakar 2012-11-21 08:02:25 EST
+++ This bug was initially created as a clone of Bug #871987 +++

Description of problem:

 Currently, split-brain logs are not cleared after resolution. This may (and does) confuse users who manually heal split-brain'ed files but still see them in this log. It is equally confusing that if you miss files in this process and heal again, the split-brain list grows.

How reproducible:

 Every time

Steps to Reproduce:
1. Create volume with replica 2
2. Cause split-brain on 1 file
3. Manually fix split-brain
4. Log entry still appears
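The steps above can be sketched as a dry-run shell script. This is an illustration only: the hostnames, brick paths, PIDs, and the volume name "repvol" are assumptions not taken from the report, and the `run` wrapper merely echoes each command so the sequence can be reviewed before pointing it at a real test cluster.

```shell
# Dry-run sketch of the reproduction steps. host1/host2, the brick paths,
# the PIDs, and the volume name "repvol" are placeholders.
run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

# 1. Create a replica-2 volume and start it
run gluster volume create repvol replica 2 host1:/bricks/b1 host2:/bricks/b2
run gluster volume start repvol

# 2. Cause split-brain on one file: write to the mount while each brick
#    is down in turn (kill brick1, write, revive it, kill brick2, write)
run kill -9 '<brick1-pid>'
run sh -c 'echo v1 > /mnt/repvol/file'
run gluster volume start repvol force   # brings brick1 back
run kill -9 '<brick2-pid>'
run sh -c 'echo v2 > /mnt/repvol/file'
run gluster volume start repvol force

# 3. Manually fix the split-brain by deleting the bad copy from one brick
run rm /bricks/b1/file   # plus its .glusterfs hard link, omitted here

# 4. The stale entry still shows up:
run gluster volume heal repvol info split-brain
```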
  
Expected results:

 It would be great if the split-brain list were cleared when starting another heal, or if files could be removed from the list once their split-brain state was fixed manually.

--- Additional comment from Pranith Kumar K on 2012-11-02 03:25:39 EDT ---

Hi,
  Could you explain what you mean by "log": the one written to the log file, or the entries that appear in 'volume heal <volname> info split-brain'? Could you also let us know which version of gluster you are using.

Pranith

--- Additional comment from Joe Julian on 2012-11-02 10:11:35 EDT ---

This is referring to the 'volume heal <volname> info split-brain' output on any of the release-3.3 versions.
Comment 4 Scott Haines 2013-09-27 13:07:29 EDT
Targeting for 3.0.0 (Denali) release.
Comment 6 Vivek Agarwal 2014-04-07 07:40:48 EDT
Per bug triage, between dev, PM and QA, moving these out of denali
Comment 7 Anil Shah 2014-04-16 05:43:18 EDT
Though there are no pending heals, the "gluster v heal afrtest info split-brain" command shows multiple split-brain entries.


Steps to Reproduce:
======================
1. Create 3 x 2 distribute-replicate volume. Start the volume. 

2. Create fuse mount. 

3. Bring brick5 offline. 

4. Create 10 files from the mount point. 

5. Bring brick5 back online. 

6. Wait for self-heal to happen

7. Bring brick6 offline.

8. Start removing distribute sub-volume 1 from the volume (remove-brick start).

9. Wait for the migration to complete and then commit the remove-brick operation. 

10. Bring brick5 offline.

11. Bring brick6 back online. 

12. Create files from the mount point. 

13. Bring brick5 online. 
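The sequence above can be sketched as a dry-run shell script. Hostnames (h1, h2), PIDs, and the mount point are placeholders, since the report only names brick paths like /rhs/brick2/b5; the `run` wrapper echoes each command instead of executing it, so the ordering can be reviewed before running it on a test rig.

```shell
# Dry-run sketch of the 13-step reproduction in comment 7. Hostnames,
# PIDs, and the mount point are placeholders, not taken from the report.
run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

# 1-2. Create and start a 3x2 distribute-replicate volume, then fuse-mount it
run gluster volume create afrtest replica 2 \
    h1:/rhs/brick2/b1 h2:/rhs/brick2/b2 \
    h1:/rhs/brick2/b3 h2:/rhs/brick2/b4 \
    h1:/rhs/brick2/b5 h2:/rhs/brick2/b6
run gluster volume start afrtest
run mount -t glusterfs h1:/afrtest /mnt/afrtest

# 3-6. Take brick5 offline, create files, bring it back, let self-heal run
run kill -9 '<brick5-pid>'
run touch /mnt/afrtest/file{1..10}
run gluster volume start afrtest force      # brings brick5 back
run gluster volume heal afrtest info        # wait until no pending heals

# 7-9. Take brick6 offline, then remove sub-volume 1 and commit
run kill -9 '<brick6-pid>'
run gluster volume remove-brick afrtest h1:/rhs/brick2/b1 h2:/rhs/brick2/b2 start
run gluster volume remove-brick afrtest h1:/rhs/brick2/b1 h2:/rhs/brick2/b2 commit

# 10-13. Take brick5 offline, revive brick6, write, revive brick5
run kill -9 '<brick5-pid>'
run gluster volume start afrtest force      # brings brick6 back
run touch /mnt/afrtest/newfile
run gluster volume start afrtest force      # brings brick5 back

# Per the report: even with no pending heals, this still lists entries
run gluster volume heal afrtest info split-brain
```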

Actual results:
===============
"gluster v heal afrtest info split-brain" shows multiple split-brain entries.


Expected results:
=================
"gluster v heal afrtest info split-brain" should not show split-brain entries;
the split-brain list should be cleared once the entries are healed.

[root@rhsauto001 xattrop]# gluster v heal afrtest info split-brain
Gathering list of split brain entries on volume afrtest has been successful 

Brick 10.70.36.231:/rhs/brick2/b3
Number of entries: 0

Brick 10.70.36.233:/rhs/brick2/b4
Number of entries: 0

Brick 10.70.36.231:/rhs/brick2/b5
Number of entries: 5
at                    path on brick
-----------------------------------
2014-04-16 06:46:32 /
2014-04-16 06:46:33 /
2014-04-16 06:56:32 /
2014-04-16 07:06:32 /
2014-04-16 07:16:33 /

Brick 10.70.36.233:/rhs/brick2/b6
Number of entries: 5
at                    path on brick
-----------------------------------
2014-04-16 06:46:33 /
2014-04-16 06:52:57 /
2014-04-16 07:02:57 /
2014-04-16 07:12:57 /
2014-04-16 07:22:57 /
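Every entry on both bricks is the same path, "/", stamped at different times. A small awk sketch (an illustration, not a gluster tool) that tallies total vs. unique paths from this output makes the duplication explicit; the sample for brick b5 above is pasted inline:

```shell
# Tally total vs. unique split-brain paths per brick from
# 'gluster v heal <vol> info split-brain' output (brick b5's sample inlined).
summary=$(awk '
  /^Brick /  { brick = $2; next }
  /^[0-9][0-9][0-9][0-9]-/ {            # timestamped entry: date time path
      total[brick]++
      uniq[brick SUBSEP $3]++
  }
  END {
      for (b in total) {
          n = 0
          for (k in uniq) if (index(k, b SUBSEP) == 1) n++
          printf "%s: %d entries, %d unique path(s)\n", b, total[b], n
      }
  }
' <<'EOF'
Brick 10.70.36.231:/rhs/brick2/b5
Number of entries: 5
at                    path on brick
-----------------------------------
2014-04-16 06:46:32 /
2014-04-16 06:46:33 /
2014-04-16 06:56:32 /
2014-04-16 07:06:32 /
2014-04-16 07:16:33 /
EOF
)
echo "$summary"
```

Five entries but only one unique path: the list accumulates a new timestamped row for "/" on every heal pass rather than collapsing resolved entries, which is exactly the confusion the report describes.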
Comment 8 Nagaprasad Sathyanarayana 2014-05-06 06:34:54 EDT
BZs not targeted for Denali.
Comment 9 Vivek Agarwal 2015-03-23 03:40:03 EDT
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html
