Bug 1304951 - sync lost between two boards
Summary: sync lost between two boards
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.6
Hardware: ppc
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-05 03:28 UTC by xinsong
Modified: 2017-03-08 10:58 UTC
CC List: 1 user

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-03-08 10:58:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments: None

Description xinsong 2016-02-05 03:28:39 UTC
Description of problem:
In a replica volume, a file goes out of sync after a node is rebooted.

Version-Release number of selected component (if applicable):
glusterfs 3.7.6


How reproducible:

Steps to Reproduce:
1. Create a replica volume consisting of two bricks.
2. Mount the volume on each of the two nodes, one mount point per node (see the command sketch after this list).
3. From each node, write to the same log file through its own mount point.
4. Reboot the two nodes one by one; for example, reboot node A first while node B continues to write to the log file through its mount point.
5. After both nodes are back up, the log file looks quite different when viewed through each node's mount point. (The file's format appears corrupted; does this difference mean the copies are not in sync?)
6. Heal does not work in this case.
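
For illustration, a minimal command sketch of steps 1 and 2, using the volume name, brick path, and IPs that appear in this report; the client mount point /mnt/c_glusterfs is an assumption:

# from 10.32.0.48: probe the peer and create/start the 2-brick replica volume
gluster peer probe 10.32.1.144
gluster volume create c_glusterfs replica 2 \
    10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
gluster volume start c_glusterfs

# on each node: mount the volume locally (mount point is an assumption)
mkdir -p /mnt/c_glusterfs
mount -t glusterfs localhost:/c_glusterfs /mnt/c_glusterfs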


 

Actual results:
The file is out of sync after rebooting the nodes one by one.
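
One way to confirm the divergence directly on the bricks (the log file path is a placeholder, as the file is not named in this report):

# run on each node; differing checksums confirm the two copies have diverged
md5sum /opt/lvmdir/c2/brick/<path-to-logfile>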



Additional info:

I have checked the output of some gluster commands to see whether any split-brain entries are present, but I don't see anything significant in it.


============================================================================================= 
$ lhsh 000300 /usr/sbin/gluster volume heal c_glusterfs info split-brain 
0003: Brick 192.32.0.48:/opt/lvmdir/c2/brick 
0003: Number of entries in split-brain: 0 
0003: Brick 192.32.1.144:/opt/lvmdir/c2/brick 
0003: Number of entries in split-brain: 0 
============================================================================================= 
$ lhsh 002500 /usr/sbin/gluster volume heal c_glusterfs info split-brain 
0025: Brick 192.32.0.48:/opt/lvmdir/c2/brick 
0025: Number of entries in split-brain: 0 
0025: Brick 192.32.1.144:/opt/lvmdir/c2/brick 
0025: Number of entries in split-brain: 0 
$
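
For reference, the AFR changelog xattrs can also be inspected directly on the bricks; if the trusted.afr.* values are all zero while the contents still differ, self-heal has nothing marked for repair. A sketch (the log file path is a placeholder):

# run on each node against the same file on its local brick
getfattr -d -m . -e hex /opt/lvmdir/c2/brick/<path-to-logfile>
# check trusted.afr.c_glusterfs-client-0 / trusted.afr.c_glusterfs-client-1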

Comment 2 Pranith Kumar K 2016-02-05 04:53:13 UTC
Could you also check the output of "gluster volume heal c_glusterfs info"? Are there any pending heals?

Comment 3 xinsong 2016-02-14 02:14:08 UTC
Hi,
Thanks for your reply. I have checked the output of the command, but there are no entries to be healed.


/usr/sbin/gluster volume heal c_glusterfs info 

0003: Brick 10.32.0.48:/opt/lvmdir/c2/brick 
0003: Number of entries: 0 
0003: Brick 10.32.1.144:/opt/lvmdir/c2/brick 
0003: Number of entries: 0 
============================================================================================= 

/usr/sbin/gluster volume heal c_glusterfs info 
0025: Brick 10.32.0.48:/opt/lvmdir/c2/brick 
0025: Number of entries: 0 
0025: Brick 10.32.1.144:/opt/lvmdir/c2/brick 
0025: Number of entries: 0  

Also, the log file being updated is of fixed size, and new entries wrap around, overwriting the old entries; a minimal sketch of this write pattern follows below.

This way we have seen that after a few restarts the contents of the same file on the two bricks are different, but the volume heal info still shows zero entries.
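
A minimal sketch of the fixed-size, wrap-around write pattern described above (file name, file size, and record size are assumptions):

LOG=/mnt/c_glusterfs/app.log   # hypothetical path on the local mount
LOG_SIZE=1048576               # assumed fixed size of the log file (1 MiB)
REC_SIZE=128                   # assumed fixed record size
i=0
while true; do
    # overwrite in place at a rotating offset; the file never grows
    OFFSET=$(( (i * REC_SIZE) % LOG_SIZE ))
    printf '%-128s' "node-$(hostname) record $i" \
        | dd of="$LOG" bs=1 seek="$OFFSET" conv=notrunc 2>/dev/null
    i=$((i + 1))
    sleep 1
done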

Thanks,
Xin

Comment 4 Kaushal 2017-03-08 10:58:00 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

