Bug 1232678 - Disperse volume : data corruption with appending writes in 8+4 config
Summary: Disperse volume : data corruption with appending writes in 8+4 config
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1230513
Blocks: 1223636 1243647
 
Reported: 2015-06-17 09:43 UTC by Pranith Kumar K
Modified: 2016-06-16 13:13 UTC
6 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1230513
: 1243647 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:13:39 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments

Comment 1 Pranith Kumar K 2015-06-17 09:44:50 UTC
Steps to Reproduce:
1. Create an 8+4 disperse volume and fuse-mount it on the client.
2. Create files with varying block sizes and random data.
3. Calculate the md5sum of all the files.
4. Take down 1 to 5 bricks, one after another, and compute the md5sum of the files each time a brick is down.
5. Compare the md5sums of the files from before and after taking down the bricks.

Actual results:
===============
Corruption

Expected results: 
=================
On the same mount, md5sum should match

--- Additional comment from Pranith Kumar K on 2015-06-11 11:28:00 EDT ---

The command used to compute the md5sums is "for i in {1..100}; do md5sum dir.1/testfile.$i >> md5sum.txt; done". If we run this while taking bricks down, md5sum.txt sometimes does not contain the content it is supposed to have.
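The before/after comparison from the reproduction steps can be scripted. The sketch below is illustrative only: the mount point /mnt/ec, the dir.1/testfile.N layout, and the file count are assumptions (the bug does not state the mount path), and the brick-down step is left as a comment because it depends on the test setup.

```shell
#!/bin/sh
# Sketch of the md5sum comparison used to detect the corruption.
# MOUNT and N are hypothetical defaults; point MOUNT at the fuse mount.
MOUNT=${MOUNT:-/mnt/ec}
N=${N:-100}

# Write the md5sum of every test file into the file named by $1.
checksum_all() {
    : > "$1"
    i=1
    while [ "$i" -le "$N" ]; do
        md5sum "$MOUNT/dir.1/testfile.$i" >> "$1" 2>/dev/null
        i=$((i + 1))
    done
}

checksum_all before.txt
# ... take one brick down here (repeat with up to 4 bricks in 8+4) ...
checksum_all after.txt

# On a correctly functioning volume the sums must be identical.
if cmp -s before.txt after.txt; then
    echo "checksums match"
else
    echo "CORRUPTION detected"
fi
```

On a healthy mount this prints "checksums match"; with the bug present, reading through a degraded 8+4 volume could return different data and the script reports corruption.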

Comment 2 Anand Avati 2015-07-03 17:13:57 UTC
REVIEW: http://review.gluster.org/11531 (cluster/ec: Don't read from bad subvols) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 3 Anand Avati 2015-07-06 09:51:51 UTC
REVIEW: http://review.gluster.org/11531 (cluster/ec: Don't read from bad subvols) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 4 Anand Avati 2015-07-08 12:36:47 UTC
REVIEW: http://review.gluster.org/11580 (cluster/ec: Don't read from bricks that are healing) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 5 Anand Avati 2015-07-12 19:37:28 UTC
REVIEW: http://review.gluster.org/11640 (cluster/ec: Prevent data corruptions) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 6 Anand Avati 2015-07-14 07:24:13 UTC
COMMIT: http://review.gluster.org/11640 committed in master by Xavier Hernandez (xhernandez@datalab.es) 
------
commit 34e65c4b3aac3cbe80ec336c367b78b01376a7a3
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Mon Jul 13 00:53:20 2015 +0530

    cluster/ec: Prevent data corruptions
    
    - On lock reuse preserve 'healing' bits
    - Don't set ctx->size outside locks in healing code
    - Allow xattrop internal fops also on the fop->mask.
    
    Change-Id: I6b76da5d7ebe367d8f3552cbf9fd18e556f2a171
    BUG: 1232678
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/11640
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>

Comment 7 Nagaprasad Sathyanarayana 2015-10-25 14:45:46 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed; hence this mainline BZ is being closed as well.

Comment 8 Niels de Vos 2016-06-16 13:13:39 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

