Bug 1359681 - [disperse] Data gain while brick is down and rename a file
Summary: [disperse] Data gain while brick is down and rename a file
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Duplicates: 1642638 1698861 (view as bug list)
Depends On:
Blocks: 1765114 1786713
 
Reported: 2016-07-25 09:56 UTC by Ashish Pandey
Modified: 2020-08-06 07:48 UTC (History)
8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-03-12 13:01:09 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:
aspandey: needinfo-



Description Ashish Pandey 2016-07-25 09:56:50 UTC
Description of problem:

Data gain occurs when a file is renamed while a brick is down.
After the brick is brought back up, shd heals the file, which recreates the old file on all the bricks. The old file, with its original size, is also visible on the mount point.


Version-Release number of selected component (if applicable):
[root@apandey glu]# gluster --version
glusterfs 3.9dev built on Jul 25 2016 15:09:43
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.


How reproducible:
100%

Steps to Reproduce:
1. Create a (4+2) disperse volume and mount it.
2. Create a file, file.txt, on the mount point.
3. Kill any one brick and rename file.txt to newfile.txt.
4. Bring the brick back up by force-starting the volume.
5. List the files on the mount point: files with both the old and the new name are visible.
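The steps above can be sketched as a shell script. This is a hedged reproduction sketch, not a verified test: the volume name, host name, brick paths, mount point, and the way the brick process is killed are all assumptions; adapt them to your environment. It requires a running GlusterFS cluster and root privileges.

```shell
#!/bin/bash
# Reproduction sketch for the rename-while-brick-down data gain.
# VOL, MNT, server1, and /bricks/b{1..6} are hypothetical names.

VOL=ec-test
MNT=/mnt/ec-test

# 1. Create a 4+2 disperse volume and mount it.
gluster volume create "$VOL" disperse 6 redundancy 2 \
    server1:/bricks/b{1..6} force
gluster volume start "$VOL"
mount -t glusterfs server1:/"$VOL" "$MNT"

# 2. Create a file on the mount point.
echo "test data" > "$MNT/file.txt"

# 3. Kill the glusterfsd process serving one brick (assumes the brick
#    path appears in the process command line), then rename the file.
pkill -9 -f "bricks/b1"
mv "$MNT/file.txt" "$MNT/newfile.txt"

# 4. Bring the brick back up by force-starting the volume.
gluster volume start "$VOL" force

# 5. After self-heal runs, both names appear on the mount (the bug).
ls -l "$MNT"
```

Per the report, step 5 should list only newfile.txt, but on affected versions file.txt reappears with its original size once shd heals the brick.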

Actual results:
A file with the old name exists on the mount point alongside the file with the new name.

Expected results:

Only the file with the new name should exist on the mount point. Heal should also complete successfully on the killed brick: the new file's fragment should be created and the old file removed.
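One way to check whether the heal behaved as expected (volume name and brick path are hypothetical, matching the sketch above):

```shell
# Show entries still pending heal; an empty list means heal completed.
gluster volume heal ec-test info

# On the previously killed brick, only the renamed file's fragment
# should remain; a leftover file.txt fragment indicates the bug.
ls /bricks/b1
```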

Additional info:

Comment 1 Jeff Byers 2018-10-23 23:06:35 UTC
Note that this problem happens in glusterfs 3.12.14 as well.

Comment 2 Xavi Hernandez 2018-10-31 12:49:30 UTC
*** Bug 1642638 has been marked as a duplicate of this bug. ***

Comment 3 Amar Tumballi 2019-07-02 04:21:34 UTC
Is this still an issue? What should we be doing to fix it?

Comment 4 Ashish Pandey 2019-07-02 04:37:46 UTC
*** Bug 1698861 has been marked as a duplicate of this bug. ***

Comment 5 Ashish Pandey 2019-07-02 04:39:04 UTC
Yes, this has not been fixed yet and a design discussion is ongoing. A common approach will be used for this as well as for gfid split-brain in AFR.
The solution could be complex, and we are working through all the cases before finalizing the design and implementation.

There is one more bug raised by Nithya, https://bugzilla.redhat.com/show_bug.cgi?id=1698861.
I am closing the other bug as a duplicate of this one.

Comment 6 Yaniv Kaul 2019-11-15 15:40:30 UTC
Status?

Comment 8 Ashish Pandey 2019-11-18 11:45:38 UTC
*** Bug 1757307 has been marked as a duplicate of this bug. ***

Comment 9 Worker Ant 2020-03-12 13:01:09 UTC
This bug is moved to https://github.com/gluster/glusterfs/issues/986, and will be tracked there from now on. Visit GitHub issues URL for further details

