Description of problem:
Reported by Manikandan Selvaganesh <mselvaga>

How reproducible:
Always

Steps to Reproduce:
1. Create a 1x2 replica volume and mount it.
2. Kill a brick.
3. From the mount: dd if=/dev/zero of=file bs=1024 count=40240
4. Restart the brick and trigger heal.
5. Check the disk usage (du -sh) of the bricks.

Actual results:
The bricks show a discrepancy in disk usage.

Expected results:
Disk usage must be nearly identical on both bricks.
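For reference, the steps above correspond roughly to the following commands; the volume name (testvol), server and brick paths, mount point, and brick PID are placeholders for whatever the test setup actually uses:

# Create and mount a 1x2 replica volume (names and paths are only examples)
gluster volume create testvol replica 2 server1:/bricks/brick1 server2:/bricks/brick2
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# Kill one of the brick processes
kill -9 <pid-of-brick2-glusterfsd>

# Write a zero-filled, non-sparse file from the mount
dd if=/dev/zero of=/mnt/testvol/file bs=1024 count=40240

# Bring the killed brick back and trigger self-heal
gluster volume start testvol force
gluster volume heal testvol

# Compare the on-disk usage of the file on both bricks
du -sh /bricks/brick1/file /bricks/brick2/file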
REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#1) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#2) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#3) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#4) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/12371 (afr: write zeros to sink for non-sparse files) posted (#5) for review on master by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/12371 committed in master by Jeff Darcy (jdarcy)
------
commit 641b3a9164227db52df1aab05795c90d06b315f2
Author: Ravishankar N <ravishankar>
Date: Wed Oct 21 21:05:46 2015 +0530

afr: write zeros to sink for non-sparse files

Problem: If a file is created with zeroes ('dd', 'fallocate' etc.) when a brick is down, the self-heal does not write the zeroes to the sink after it comes up. Consequently, there is a mismatch in disk usage amongst the bricks of the replica.

Fix: If we definitely know that the file is not sparse, then write the zeroes to the sink even if the checksums match.

Change-Id: Ic739b3da5dbf47d99801c0e1743bb13aeb3af864
BUG: 1272460
Signed-off-by: Ravishankar N <ravishankar>
Reviewed-on: http://review.gluster.org/12371
Reviewed-by: Pranith Kumar Karampuri <pkarampu>
Tested-by: Gluster Build System <jenkins.com>
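The fix hinges on whether the file is sparse, i.e. whether its allocated blocks cover its apparent size. The following local shell sketch (the file names nonsparse.bin and sparse.bin are purely illustrative and not part of the patch) shows the difference the self-heal decision is based on:

# A non-sparse file: every block is written out, so allocated size matches apparent size
dd if=/dev/zero of=nonsparse.bin bs=1024 count=40240

# A sparse file of the same apparent size: the data region is a hole, almost nothing is allocated
dd if=/dev/zero of=sparse.bin bs=1024 count=0 seek=40240

# Same apparent size, very different allocated sizes
ls -l nonsparse.bin sparse.bin
du -h nonsparse.bin sparse.bin
stat -c '%n: %s bytes, %b blocks' nonsparse.bin sparse.bin

With the patch, when self-heal can tell the file is non-sparse (the nonsparse.bin case), it writes the zero-filled blocks to the sink even though the block checksums of source and sink match, so both bricks end up with the same allocated size; for genuinely sparse files the zero blocks are still skipped, as described in the commit message.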
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user