Bug 1304951 - sync lost between two boards
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.6
Hardware: ppc Linux
Priority: unspecified  Severity: urgent
Assigned To: Pranith Kumar K
Keywords: ZStream
Depends On:
Blocks:
 
Reported: 2016-02-04 22:28 EST by xinsong
Modified: 2017-03-08 05:58 EST (History)
1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-08 05:58:00 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description xinsong 2016-02-04 22:28:39 EST
Description of problem:
In a replica volume, a file goes out of sync after the nodes are rebooted.

Version-Release number of selected component (if applicable):
glusterfs 3.7.6


How reproducible:

Steps to Reproduce:
1. Create a replica-2 volume with one brick on each of two nodes (see the setup sketch after this list).
2. On each node, mount the volume locally, so each node has its own mount point to the volume.
3. Both nodes write to the same logging file through their own mount point.
4. Reboot the two nodes one after the other; for example, reboot node A first while node B keeps writing the logging file through its mount point, then reboot node B.
5. After both nodes are back up, the logging file looks quite different when viewed through each node's own mount point; its format even appears corrupted, i.e. the two copies are not in sync.
6. Heal does not repair the file in this case.
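
A minimal sketch of the setup described above. The volume name, brick paths and node addresses are taken from the heal output quoted below; the mount point path and other details are assumptions:

# On one node: create and start the replica-2 volume.
gluster volume create c_glusterfs replica 2 \
    10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
gluster volume start c_glusterfs

# On each node: mount the volume locally; the application then writes the
# shared logging file through this local mount point (path is hypothetical).
mkdir -p /mnt/c_glusterfs
mount -t glusterfs localhost:/c_glusterfs /mnt/c_glusterfs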


 

Actual results:
The file is out of sync after the nodes are rebooted one by one.



Additional info:

I have checked the output of some gluster commands to see whether any split-brain entries are present, but I don't see anything significant in it.


============================================================================================= 
$ lhsh 000300 /usr/sbin/gluster volume heal c_glusterfs info split-brain 
0003: Brick 192.32.0.48:/opt/lvmdir/c2/brick 
0003: Number of entries in split-brain: 0 
0003: Brick 192.32.1.144:/opt/lvmdir/c2/brick 
0003: Number of entries in split-brain: 0 
============================================================================================= 
$ lhsh 002500 /usr/sbin/gluster volume heal c_glusterfs info split-brain 
0025: Brick 192.32.0.48:/opt/lvmdir/c2/brick 
0025: Number of entries in split-brain: 0 
0025: Brick 192.32.1.144:/opt/lvmdir/c2/brick 
0025: Number of entries in split-brain: 0 
$
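
For reference, the per-file AFR metadata can also be inspected directly on each brick; a hedged sketch (the relative path of the logging file inside the brick is hypothetical):

# Run on both nodes against the brick path, not the mount point.
$ getfattr -d -m . -e hex /opt/lvmdir/c2/brick/path/to/logfile
# Non-zero trusted.afr.c_glusterfs-client-* changelog counters, or a
# trusted.gfid that differs between the two bricks, would point at pending
# or undetected heal state for this file.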
Comment 2 Pranith Kumar K 2016-02-04 23:53:13 EST
Could you also check the output of "gluster volume heal c_glusterfs info"? Are there any pending heals?
Comment 3 xinsong 2016-02-13 21:14:08 EST
Hi,
Thanks for your reply. I have checked the output of the command, but there are no entries to be healed.


/usr/sbin/gluster volume heal c_glusterfs info 

0003: Brick 10.32.0.48:/opt/lvmdir/c2/brick 
0003: Number of entries: 0 
0003: Brick 10.32.1.144:/opt/lvmdir/c2/brick 
0003: Number of entries: 0 
============================================================================================= 

/usr/sbin/gluster volume heal c_glusterfs info 
0025: Brick 10.32.0.48:/opt/lvmdir/c2/brick 
0025: Number of entries: 0 
0025: Brick 10.32.1.144:/opt/lvmdir/c2/brick 
0025: Number of entries: 0  

Also, the logging file being updated is of fixed size, and new entries wrap around, overwriting the old entries.

This way we have seen that, after a few restarts, the contents of the same file on the two bricks are different, but the volume heal info shows zero entries.
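
A minimal sketch of such a fixed-size, wrap-around writer, run from each node's own mount point (the path, record size, slot count and interval are assumptions, not taken from the report):

# Hypothetical wrap-around logger: SLOTS records of BS bytes each, written
# in place with conv=notrunc so the file size never changes.
LOG=/mnt/c_glusterfs/app.log
SLOTS=256
BS=4096
i=0
while :; do
    printf '%s entry %d from %s' "$(date)" "$i" "$(hostname)" > /tmp/record
    truncate -s "$BS" /tmp/record    # pad the record to exactly one slot
    dd if=/tmp/record of="$LOG" bs="$BS" seek=$((i % SLOTS)) count=1 conv=notrunc 2>/dev/null
    i=$((i + 1))
    sleep 1
done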

Thanks,
Xin
Comment 4 Kaushal 2017-03-08 05:58:00 EST
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
