Bug 1313131

Summary: [New] - quarantine folder becomes empty and bitrot status does not list any files which are corrupted
Product: [Community] GlusterFS
Reporter: Venky Shankar <vshankar>
Component: bitrot
Assignee: bugs <bugs>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact: bugs <bugs>
Priority: unspecified
Version: 3.7.8
CC: bugs, byarlaga, khiremat, knarra, rhs-bugs, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.9
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1308961
Environment:
Last Closed: 2016-04-19 07:21:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1306907, 1308961, 1313923
Bug Blocks: 1299184, 1309567

Comment 1 Vijay Bellur 2016-03-01 03:22:40 UTC
REVIEW: http://review.gluster.org/13552 (features/bitrot: do not remove the quarantine handle in forget) posted (#1) for review on release-3.7 by Venky Shankar (vshankar)

Comment 2 Vijay Bellur 2016-03-02 11:20:20 UTC
REVIEW: http://review.gluster.org/13552 (features/bitrot: do not remove the quarantine handle in forget) posted (#2) for review on release-3.7 by Venky Shankar (vshankar)

Comment 3 Vijay Bellur 2016-03-07 13:07:58 UTC
COMMIT: http://review.gluster.org/13552 committed in release-3.7 by Venky Shankar (vshankar) 
------
commit fea3a6d1258a700043cc37ec35c8ffdbd1aefa41
Author: Raghavendra Bhat <raghavendra>
Date:   Tue Feb 16 20:22:36 2016 -0500

    features/bitrot: do not remove the quarantine handle in forget
    
    If an object is marked as bad, then an entry corresponding to the
    bad object is created in the .glusterfs/quarantine directory to help
    the scrub status command. The entry name is the gfid of the corrupted
    object. The quarantine handle is removed in the below 2 cases.
    
    1) When protocol/server receives a -ve lookup on an entry whose inode
       is present in the inode table (this can happen when the corrupted object
       is deleted directly from the backend for recovery purposes), it sends a
       forget on the inode and bit-rot-stub removes the quarantine handle
       upon getting the forget.
       Refer to the below commit
       f853ed9c61bf65cb39f859470a8ffe8973818868:
       http://review.gluster.org/12743
    
    2) When bit-rot-stub itself realizes that lookup on a corrupted object
       has failed with ENOENT.
    
    But with step 1, there is a problem when bit-rot-stub receives a forget
    because the lru limit of the inode table is exceeded. In such cases, though
    the corrupted object is not deleted (either from the mount point or from the
    backend), the handle in the quarantine directory is removed and that object
    is no longer shown in the bad objects list in the scrub status command.
    
    So it is better to follow only the 2nd step (i.e. bit-rot-stub removing the
    handle from the quarantine directory on -ve lookups). The handle also has to
    be removed when a corrupted object is unlinked from the mount point itself.
    
    Change-Id: Ibc3bbaf4bc8a5f8986085e87b729ab912cbf8cf9
    BUG: 1313131
    Original author: Raghavendra Bhat <raghavendra>
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/13472
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Venky Shankar <vshankar>
    (cherry picked from commit 2102010edab355ac9882eea41a46edaca8b9d02c)
    Reviewed-on: http://review.gluster.org/13552
    Tested-by: Venky Shankar <vshankar>
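
The handle-removal policy the patch switches to can be summarised with a small stand-alone model. This is an illustrative sketch only, written in C since that is the language of the bit-rot-stub xlator; the event names and helper functions below are invented for this example and are not the actual GlusterFS symbols.

#include <stdio.h>
#include <stdbool.h>

/* Events that can reach bit-rot-stub for a bad (corrupted) object. */
enum event {
        EV_FORGET_LRU,      /* forget caused by inode-table lru pruning    */
        EV_LOOKUP_ENOENT,   /* -ve lookup: object removed from the backend */
        EV_UNLINK_MOUNT     /* object unlinked through the mount point     */
};

/* Pre-patch policy: any forget dropped the quarantine handle, so an
 * lru-driven forget emptied .glusterfs/quarantine even though the
 * corrupted object still existed. */
static bool
remove_handle_old (int ev)
{
        return (ev == EV_FORGET_LRU || ev == EV_LOOKUP_ENOENT);
}

/* Post-patch policy: keep the handle across forgets; remove it only
 * when the object is really gone (-ve lookup with ENOENT) or it is
 * explicitly unlinked from the mount point. */
static bool
remove_handle_new (int ev)
{
        return (ev == EV_LOOKUP_ENOENT || ev == EV_UNLINK_MOUNT);
}

int
main (void)
{
        const char *names[] = {
                "forget (lru limit hit)",
                "lookup fails with ENOENT",
                "unlink from mount point"
        };

        for (int ev = 0; ev < 3; ev++)
                printf ("%-26s  before fix: %-6s  after fix: %s\n",
                        names[ev],
                        remove_handle_old (ev) ? "remove" : "keep",
                        remove_handle_new (ev) ? "remove" : "keep");

        return 0;
}

With the old policy the first row prints "remove", which is the symptom reported here: an lru-triggered forget empties .glusterfs/quarantine while the corrupted file is still on the brick, so scrub status lists no bad files.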

Comment 4 Kaushal 2016-04-19 07:21:27 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user