+++ This bug was initially created as a clone of Bug #1238171 +++

Description of problem:
=======================
Not able to recover a corrupted file on a replica volume.

Version-Release number of selected component (if applicable):
==========

How reproducible:

Steps to Reproduce:
==========================
1. Create a 1x2 volume and enable bitrot. Once a file is signed, modify the file directly on the brick of any node.
2. Once the scrubber marks the file as bad, try to recover the file with the following steps.
3. Get the gfid of the corrupted file by running getfattr -d -m . -e hex <filename>.
4. Delete the corrupted file directly from the back-end.
5. Go to <brick>/.glusterfs and delete the gfid file.
6. From the FUSE mount, access the corrupted file.
7. Run gluster volume heal <volname> to recreate the deleted corrupted file.

Actual results:
Self-heal is failing.

Expected results:
The user should be able to recover the bad file.
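The recovery attempt above can be sketched as a command sequence. This is only a sketch: the brick path `/bricks/brick1`, the volume name `testvol`, the mount point `/mnt/testvol`, and the file name `badfile` are hypothetical, and the commands have to be run on a live Gluster node against a real volume:

```shell
# Hypothetical names; adjust to the actual volume, brick and mount point.
BRICK=/bricks/brick1
FILE=badfile

# Step 3: read the gfid of the corrupted file from its brick path.
# Look for the trusted.gfid attribute in the output.
getfattr -d -m . -e hex "$BRICK/$FILE"

# Step 4: remove the corrupted file directly from the brick back-end.
rm -f "$BRICK/$FILE"

# Step 5: remove the gfid hard link kept under .glusterfs.
# A gfid such as aabbccdd-... lives at $BRICK/.glusterfs/aa/bb/aabbccdd-...
rm -f "$BRICK/.glusterfs/aa/bb/<gfid>"

# Step 6: trigger a fresh lookup from the FUSE mount.
stat "/mnt/testvol/$FILE"

# Step 7: ask self-heal to recreate the file from the good replica.
gluster volume heal testvol
```

As the bug describes, the heal in the last step fails, because the stale in-memory inode still carries the bad-object flag.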
Here, the entry is deleted from the backend, but the in-memory inode is still present in the inode table. Upon deletion of the entry, the next lookup fails with ENOENT and unlinks the dentry. However, the inode associated with it remains in the inode table, and that inode has the object marked as bad in its context. So any kind of self-heal operation is denied by bit-rot-stub, as it does not allow read/write operations on a bad object.
http://review.gluster.org/#/c/11489/5 has been submitted for review.
REVIEW: http://review.gluster.org/11489 (protocol/server: forget the inodes which got ENOENT in lookup) posted (#6) for review on master by Raghavendra Bhat (raghavendra)
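In outline, the patch makes the server-side lookup callback drop such inodes from the inode table when lookup returns ENOENT, so the stale "bad object" context dies with the inode. The following is pseudocode only; the actual GlusterFS internals (callback signature, `inode_unlink`, `inode_forget`) may differ from what is shown:

```c
/* Pseudocode sketch of the idea behind the patch, not the actual code. */
int
server_lookup_cbk (/* ..., */ int op_ret, int op_errno, inode_t *inode /* , ... */)
{
        if (op_ret < 0 && op_errno == ENOENT) {
                /* The entry is gone on disk: unlink the dentry and forget
                 * the stale in-memory inode, so the bad-object flag stored
                 * in its context is discarded with it.  The next lookup
                 * creates a fresh inode, and self-heal is no longer
                 * blocked by bit-rot-stub. */
                inode_unlink (inode, parent, name);
                inode_forget (inode, 0);
        }
        /* ... continue normal reply path ... */
}
```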
http://review.gluster.org/11489 has been merged. Moving it to MODIFIED.
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed. Hence, closing this mainline BZ as well.
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user