Bug 1304951
| Summary: | sync lost between two boards | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | xinsong <songxin_1980> |
| Component: | replicate | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.7.6 | CC: | bugs |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | ppc | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-08 10:58:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

xinsong 2016-02-05 03:28:39 UTC

> Could you also check what is the output of "gluster volume heal c_glusterfs info"? Are there any pending heals?

Hi,

Thanks for your reply. I have checked the output of the command, but there are no entries to be healed:

    /usr/sbin/gluster volume heal c_glusterfs info
    0003: Brick 10.32.0.48:/opt/lvmdir/c2/brick
    0003: Number of entries: 0
    0003: Brick 10.32.1.144:/opt/lvmdir/c2/brick
    0003: Number of entries: 0

    /usr/sbin/gluster volume heal c_glusterfs info
    0025: Brick 10.32.0.48:/opt/lvmdir/c2/brick
    0025: Number of entries: 0
    0025: Brick 10.32.1.144:/opt/lvmdir/c2/brick
    0025: Number of entries: 0

Also, the log file being updated is of a fixed size, so new entries wrap around and overwrite the old ones. In this way we have seen that, after a few restarts, the contents of the same file on the two bricks differ, yet "volume heal info" shows zero entries.

Thanks,
Xin

This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
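The symptom above is heal info returning zero entries while the contents of the same file diverge across the two bricks. One way to confirm such a divergence is to compare the file's checksum and its AFR changelog xattrs directly on each brick. This is only a sketch: the brick paths come from this report, and the file name logfile.bin is a hypothetical placeholder for the wrapping log file Xin describes.

    # On the node hosting brick 10.32.0.48:/opt/lvmdir/c2/brick
    # (logfile.bin is a hypothetical file name used for illustration)
    md5sum /opt/lvmdir/c2/brick/logfile.bin
    getfattr -d -m . -e hex /opt/lvmdir/c2/brick/logfile.bin

    # On the node hosting brick 10.32.1.144:/opt/lvmdir/c2/brick
    md5sum /opt/lvmdir/c2/brick/logfile.bin
    getfattr -d -m . -e hex /opt/lvmdir/c2/brick/logfile.bin

    # Differing checksums while every trusted.afr.* xattr is all zeroes would
    # mean the bricks diverged without AFR recording a pending heal, which
    # matches the symptom reported here.

If the trusted.afr.* xattrs are non-zero on either copy, a heal is actually pending and should show up in "gluster volume heal c_glusterfs info"; if they are all zeroes despite differing checksums, the divergence happened without AFR marking the file dirty, which is consistent with this report.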