Bug 1255604 - Not able to recover the corrupted file on Replica volume
Summary: Not able to recover the corrupted file on Replica volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: bitrot
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1238171 1238188 1266014 1266015
Blocks:
 
Reported: 2015-08-21 06:22 UTC by Raghavendra Bhat
Modified: 2015-12-01 16:45 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.4
Clone Of: 1238188
Environment:
Last Closed: 2015-09-09 09:40:09 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2015-08-21 06:22:15 UTC
+++ This bug was initially created as a clone of Bug #1238188 +++

+++ This bug was initially created as a clone of Bug #1238171 +++

Description of problem:
=======================
Not able to recover the corrupted file on Replica volume 


Version-Release number of selected component (if applicable):
==========

How reproducible:


Steps to Reproduce:
==========================
1. Create a 1x2 replica volume and enable bitrot. Once the file is signed, modify the file directly on the brick on one of the nodes.
2. Once the scrubber marks the file as bad, try to recover the file with the following steps (a consolidated command sketch follows this list):

3. Get the GFID of the corrupted file by running getfattr -d -m . -e hex <filename>
4. Delete the corrupted file directly from the back-end (brick).
5. Go to /brick/.glusterfs and delete the GFID hard link.
6. Access the corrupted file from the FUSE mount.
7. Run "gluster volume heal <volname>" to recover the deleted corrupted file.
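
The steps above, consolidated into a rough command sketch. The volume name (repvol), brick paths (node1:/bricks/b1, node2:/bricks/b2), mount point (/mnt/repvol) and file name (file1) are hypothetical placeholders, not taken from this report:

    # 1. Create a 1x2 replica volume, start it and enable bitrot
    gluster volume create repvol replica 2 node1:/bricks/b1 node2:/bricks/b2
    gluster volume start repvol
    gluster volume bitrot repvol enable

    # Create a file from a FUSE mount; once it is signed, corrupt it
    # directly on one brick, behind Gluster's back
    mount -t glusterfs node1:/repvol /mnt/repvol
    echo "data" > /mnt/repvol/file1
    echo "garbage" >> /bricks/b1/file1      # on node1

    # 2-3. After the scrubber marks the file as bad, note its GFID
    getfattr -d -m . -e hex /bricks/b1/file1

    # 4-5. Delete the file and its GFID hard link from the back-end;
    #      a GFID aabbccdd-... lives under .glusterfs/aa/bb/
    rm /bricks/b1/file1
    rm /bricks/b1/.glusterfs/aa/bb/aabbccdd-...

    # 6-7. Look the file up from the FUSE mount and heal it from the good copy
    stat /mnt/repvol/file1
    gluster volume heal repvol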

Actual results:
Self-heal fails.

Expected results:
The user should be able to recover the bad file.

--- Additional comment from Raghavendra Bhat on 2015-07-02 04:55:39 EDT ---

Here it seems the entry has been deleted from the backend, but the in-memory inode is still present in the inode table. Upon deletion of the entry, the next lookup fails and the dentry is unlinked; the inode associated with it, however, remains in the inode table, and that inode has the object marked as bad in its context. So any self-heal operation is denied by bit-rot-stub, since it does not allow read/write operations on a bad object.
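
As the analysis above notes, the bad-object flag for the removed file lives in the in-memory inode table of the brick process. Purely as an illustration of that point (a workaround inferred from the analysis, not something stated in this bug): before the fix, restarting the affected brick process should drop the stale inode and allow the heal to proceed. Volume and brick names are hypothetical:

    # Find the PID of the brick that held the deleted bad file
    gluster volume status repvol

    # Kill that brick process and bring it back; its inode table is rebuilt
    kill <brick-pid>
    gluster volume start repvol force

    # Retrigger the heal from the good copy
    gluster volume heal repvol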

--- Additional comment from Raghavendra Bhat on 2015-08-20 06:43:46 EDT ---

http://review.gluster.org/#/c/11489/5 has been submitted for review.

--- Additional comment from Anand Avati on 2015-08-20 06:59:32 EDT ---

REVIEW: http://review.gluster.org/11489 (protocol/server: forget the inodes which got ENOENT in lookup) posted (#6) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Raghavendra Bhat on 2015-08-21 02:21:42 EDT ---

http://review.gluster.org/11489 has been merged. Moving it to MODIFIED.

Comment 1 Anand Avati 2015-08-21 06:27:47 UTC
REVIEW: http://review.gluster.org/11973 (protocol/server: forget the inodes which got ENOENT in lookup) posted (#1) for review on release-3.7 by Raghavendra Bhat (raghavendra)

Comment 2 Anand Avati 2015-08-21 11:48:59 UTC
COMMIT: http://review.gluster.org/11973 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit cb879d6adbb9194b488f2ad7a97cf7fc7f5a5ef5
Author: Raghavendra Bhat <raghavendra>
Date:   Wed Jul 1 15:56:58 2015 +0530

    protocol/server: forget the inodes which got ENOENT in lookup
    
                     Backport of http://review.gluster.org/11489
    
    If a looked-up object is removed from the backend, then a revalidate
    lookup on that object receives an ENOENT error. The protocol/server
    xlator handles it by removing the dentry on which ENOENT was received.
    But the inode associated with it still remains in the inode table, so
    a nameless lookup on the gfid of that object still succeeds even
    though the object is no longer present.

    To handle this issue, upon getting ENOENT on a looked-up entry in a
    revalidate lookup, protocol/server should forget the inode as well.

    Though removing files directly from the backend is not allowed, in the
    case of objects corrupted due to bitrot and marked as bad by the
    scrubber, the object is removed directly from the backend on replicate
    volumes so that it can be healed from the good copy. For this to work,
    the inode of the bad object removed from the backend must be forgotten;
    otherwise the inode, which knows that the object it represents is bad,
    does not allow the read/write operations that happen as part of
    self-heal.
    
    Change-Id: I268eeaf37969458687425187be6622347a6cc1f1
    BUG: 1255604
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/11973
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>

Comment 3 Kaushal 2015-09-09 09:40:09 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

