Bug 1275921 - Disk usage mismatching after self-heal
Summary: Disk usage mismatching after self-heal
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1272460
Blocks: 1275907 glusterfs-3.7.6
 
Reported: 2015-10-28 06:43 UTC by Ravishankar N
Modified: 2015-11-17 06:01 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.7.6
Clone Of: 1272460
Environment:
Last Closed: 2015-11-17 06:01:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2015-10-28 06:43:01 UTC
+++ This bug was initially created as a clone of Bug #1272460 +++

Description of problem:
Reported by Manikandan Selvaganesh <mselvaga>

If a file containing zeroes is created (e.g. with 'dd') while one brick of a
replica is down, self-heal does not write the zeroes to that brick after it
comes back up, so the bricks of the replica end up reporting different disk
usage.
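
The underlying issue (see the fix in comment 3) is that a zero-filled file and
a sparse file of the same size are checksum-identical, so a checksum-based heal
can skip writing the zeroes. A minimal shell illustration; the file names are
only examples:

    # Both files are ~40 MB of zeroes and checksum-identical, but only
    # the dd-created one actually allocates blocks on disk.
    dd if=/dev/zero of=dense_file bs=1024 count=40240   # non-sparse
    truncate -s 40240K sparse_file                      # sparse (one big hole)
    md5sum dense_file sparse_file    # identical checksums
    du -sh dense_file sparse_file    # ~40M vs 0 on most filesystems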

How reproducible:
Always

Steps to Reproduce:
1. Create a 1x2 replica volume and mount it.
2. Kill one brick.
3. From the mount: dd if=/dev/zero of=file bs=1024 count=40240
4. Restart the brick and trigger a heal.
5. Check the disk usage (du -sh) of the bricks (a scripted sketch follows below).
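
A scripted sketch of these steps, assuming a two-server replica with bricks at
server{1,2}:/bricks/b1 (all host names and paths are illustrative):

    gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol
    # step 2: kill one brick process (its PID is shown by 'gluster volume status')
    kill <brick-pid-on-server2>
    # step 3: write a zero-filled file from the mount
    dd if=/dev/zero of=/mnt/testvol/file bs=1024 count=40240
    # step 4: restart the killed brick and trigger a heal
    gluster volume start testvol force
    gluster volume heal testvol
    # step 5: compare on both servers; before the fix the healed copy is smaller
    du -sh /bricks/b1/file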

Actual results:
The bricks of the replica show a disk usage discrepancy for the healed file.

Expected results:
Disk usage must be nearly identical on both bricks.

--- Additional comment from Vijay Bellur on 2015-10-16 08:59:47 EDT ---

REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-10-19 07:06:40 EDT ---

REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#2) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-10-21 12:40:51 EDT ---

REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#3) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-10-27 06:58:12 EDT ---

REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#4) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-10-27 20:44:24 EDT ---

REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#5) for review on master by Ravishankar N (ravishankar)

Comment 1 Vijay Bellur 2015-10-28 06:44:11 UTC
REVIEW: http://review.gluster.org/12436 (afr: write zeros to sink for non-sparse files) posted (#1) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 2 Vijay Bellur 2015-10-29 04:10:10 UTC
REVIEW: http://review.gluster.org/12436 (afr: write zeros to sink for non-sparse files) posted (#2) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 3 Vijay Bellur 2015-10-29 08:40:22 UTC
COMMIT: http://review.gluster.org/12436 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit 50646435b4076cfb30d7ebabf2d688f91c957cec
Author: Ravishankar N <ravishankar>
Date:   Wed Oct 21 21:05:46 2015 +0530

    afr: write zeros to sink for non-sparse files
    
    Backport of http://review.gluster.org/#/c/12371/
    Problem: If a file is created with zeroes ('dd', 'fallocate' etc.) when
    a brick is down, the self-heal does not write the zeroes to the sink
    after it comes up. Consequently, there is a mismatch in disk usage
    amongst the bricks of the replica.
    
    Fix: If we definitely know that the file is not sparse, then write the
    zeroes to the sink even if the checksums match.
    
    Change-Id: Ic739b3da5dbf47d99801c0e1743bb13aeb3af864
    BUG: 1275921
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/12436
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
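
The patch itself is in AFR's C self-heal code; a rough shell analogue of the
non-sparse test it relies on (an illustrative assumption, using GNU stat's
512-byte block count) would be:

    size=$(stat -c '%s' file)      # logical size in bytes
    blocks=$(stat -c '%b' file)    # allocated 512-byte blocks
    if [ $((blocks * 512)) -ge "$size" ]; then
        echo "non-sparse: heal writes the zeroes even if checksums match"
    else
        echo "sparse: zero regions may legitimately stay unallocated"
    fi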

Comment 4 Raghavendra Talur 2015-11-17 06:01:43 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

