Steps to Reproduce:
1. Create an 8+4 disperse volume and fuse mount it on the client.
2. Create files with varying block sizes and random data.
3. Calculate the md5sum of all the files.
4. Take down 1 to 5 bricks, one after another, and compute the md5sum of the files each time a brick is down.
5. Compare the md5sums of the files from before and after taking down the bricks.

Actual results:
===============
Corruption.

Expected results:
=================
On the same mount, the md5sums should match.

--- Additional comment from Pranith Kumar K on 2015-06-11 11:28:00 EDT ---

The command to be used to compute the md5sums is "for i in {1..100}; do md5sum dir.1/testfile.$i >> md5sum.txt; done". If we compute this while taking bricks down, sometimes the file md5sum.txt will not have the content it is supposed to have.
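Putting the steps above together, a minimal reproduction sketch follows. The volume name "testvol", the mount point /mnt/ec, the file count, and the <brick-pid> placeholder are all assumptions for illustration, not taken from the original report:

    # Assumed names: volume "testvol" mounted at /mnt/ec, 100 files under dir.1
    mkdir -p /mnt/ec/dir.1
    cd /mnt/ec

    # Step 2: create files with varying block sizes and random data
    for i in {1..100}; do
        dd if=/dev/urandom of=dir.1/testfile.$i \
           bs=$(( (RANDOM % 128 + 1) * 1024 )) count=10 2>/dev/null
    done

    # Step 3: record baseline checksums
    for i in {1..100}; do md5sum dir.1/testfile.$i; done > md5sum.before.txt

    # Step 4: kill one brick process (repeat for 1 to 5 bricks);
    # brick PIDs are listed by 'gluster volume status testvol'
    kill -9 <brick-pid>

    # Recompute checksums with the brick down
    for i in {1..100}; do md5sum dir.1/testfile.$i; done > md5sum.after.txt

    # Step 5: compare; any diff output indicates corruption
    diff md5sum.before.txt md5sum.after.txt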
REVIEW: http://review.gluster.org/11531 (cluster/ec: Don't read from bad subvols) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/11531 (cluster/ec: Don't read from bad subvols) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/11580 (cluster/ec: Don't read from bricks that are healing) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/11640 (cluster/ec: Prevent data corruptions) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/11640 committed in master by Xavier Hernandez (xhernandez)
------
commit 34e65c4b3aac3cbe80ec336c367b78b01376a7a3
Author: Pranith Kumar K <pkarampu>
Date:   Mon Jul 13 00:53:20 2015 +0530

    cluster/ec: Prevent data corruptions

    - On lock reuse preserve 'healing' bits
    - Don't set ctx->size outside locks in healing code
    - Allow xattrop internal fops also on the fop->mask

    Change-Id: I6b76da5d7ebe367d8f3552cbf9fd18e556f2a171
    BUG: 1232678
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/11640
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Xavier Hernandez <xhernandez>
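For anyone verifying the fix, the heal state of the bricks can be checked from the standard gluster CLI before and after bringing bricks back; a small hedged example (the volume name "testvol" is an assumption):

    # Show entries still pending heal on each brick; reads served from
    # healing bricks were the source of the corruption this patch prevents
    gluster volume heal testvol info

    # Confirm all brick processes are back online
    gluster volume status testvol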
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed; hence this mainline BZ is being closed as well.
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user