+++ This bug was initially created as a clone of Bug #1285241 +++
+++ This bug was initially created as a clone of Bug #1285238 +++

Description of problem:
When a user deletes all the files from the mount point, the scrub status output still shows the corrupted objects list and it does not get cleared. Because of this, the next time the scrubber runs and finds an object as bad, it simply appends the newly corrupted file to the existing list, and wrong information is displayed in the output.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and enable bitrot on the volume.
2. Fuse mount the volume and create some files.
3. Corrupt some files from the backend and wait for the scrubber to mark those files as bad.
4. Run the "gluster volume bitrot <vol_name> scrub status" command; the gfids of all the files marked as bad are listed.
5. Delete all the files from the mount point and create new files.
6. Corrupt a newly created file and wait for the scrubber to run (a rough command sketch is given below, after the expected results).

Actual results:
1) Even after the files are deleted from the mount point, the scrub status output still shows their gfids.
2) The next time the scrubber runs, the list of corrupted files grows to the old count plus the new count.

Expected results:
1) Once the files are deleted from the mount point, scrub status should not show the non-existent gfids.
2) If the newly created files get corrupted, the corrupted object count should not be appended to the old object count.
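For reference, a minimal reproduction sketch along the lines of the steps above. This is illustrative only: the server name server1, the brick path /bricks/brick0, the mount point /mnt/vol_dis_rep and the file names are assumptions; the volume name vol_dis_rep is taken from the output below.

# gluster volume bitrot vol_dis_rep enable
# mount -t glusterfs server1:/vol_dis_rep /mnt/vol_dis_rep
# dd if=/dev/urandom of=/mnt/vol_dis_rep/file1 bs=1M count=10

(corrupt the file directly on the brick, bypassing the mount)
# echo "garbage" >> /bricks/brick0/file1

(once the file has been signed by bitd and the scrubber has run, file1's gfid appears in the corrupted objects list)
# gluster volume bitrot vol_dis_rep scrub status

(delete everything from the mount point, then create and corrupt a new file)
# rm -f /mnt/vol_dis_rep/file1
# dd if=/dev/urandom of=/mnt/vol_dis_rep/file2 bs=1M count=10
# echo "garbage" >> /bricks/brick0/file2

(after the next scrub, the stale gfid of file1 is still listed alongside file2's, which is the bug)
# gluster volume bitrot vol_dis_rep scrub status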
Additional info:

scrub status output before deleting the files:
=================================================
[root@hostname b3]# gluster vol bitrot vol_dis_rep scrub status

Volume name : vol_dis_rep
State of scrub: Active
Scrub impact: lazy
Scrub frequency: hourly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name: localhost
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f56046c2-1bf1-47f1-b3b4-40c40b116d3b
2883c58a-7b81-47ab-a83c-408fe23e9767
b3bfe63e-44fd-46e9-8e01-cf84af2f4f4e
5ddaf82c-1238-4af7-980a-332da1c3e47a
f4151cef-285a-4d76-851e-ffeab7ca78cd
c805d027-2a46-4c97-ad13-9ae919164fe6
=========================================================
Node name: 10.70.36.62
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f47c1450-581b-42d5-ad9b-1955c9d4e7a0
50654920-8894-4a52-8a61-0a2862f012d9
3aa197cf-2c3a-4104-b0ac-0f36b7d86874
4ba25e47-4518-4497-b309-c8dfde6b9a43
bbf6d5b5-d66d-470e-a4ad-c163a4e23dd0
0453fa23-3143-45c5-82c9-803bbc2760ea
=========================================================

scrub status output after deleting the files:
===================================================
[root@hostname b3]# gluster vol bitrot vol_dis_rep scrub status

Volume name : vol_dis_rep
State of scrub: Active
Scrub impact: lazy
Scrub frequency: hourly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name: localhost
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f56046c2-1bf1-47f1-b3b4-40c40b116d3b
2883c58a-7b81-47ab-a83c-408fe23e9767
b3bfe63e-44fd-46e9-8e01-cf84af2f4f4e
5ddaf82c-1238-4af7-980a-332da1c3e47a
f4151cef-285a-4d76-851e-ffeab7ca78cd
c805d027-2a46-4c97-ad13-9ae919164fe6
=========================================================
Node name: hostname2
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f47c1450-581b-42d5-ad9b-1955c9d4e7a0
50654920-8894-4a52-8a61-0a2862f012d9
3aa197cf-2c3a-4104-b0ac-0f36b7d86874
4ba25e47-4518-4497-b309-c8dfde6b9a43
bbf6d5b5-d66d-470e-a4ad-c163a4e23dd0
0453fa23-3143-45c5-82c9-803bbc2760ea
=========================================================

scrub status output after deleting old files and creating new files from the mount point:
==========================================================
[root@hostname b3]# gluster vol bitrot vol_dis_rep scrub status

Volume name : vol_dis_rep
State of scrub: Active
Scrub impact: lazy
Scrub frequency: hourly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name: localhost
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 7
Corrupted object's:
f56046c2-1bf1-47f1-b3b4-40c40b116d3b
2883c58a-7b81-47ab-a83c-408fe23e9767
b3bfe63e-44fd-46e9-8e01-cf84af2f4f4e
5ddaf82c-1238-4af7-980a-332da1c3e47a
f4151cef-285a-4d76-851e-ffeab7ca78cd
c805d027-2a46-4c97-ad13-9ae919164fe6
e8561c6b-f881-499b-808b-7fa2bce190f7
=========================================================
Node name: hostname2
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f47c1450-581b-42d5-ad9b-1955c9d4e7a0
50654920-8894-4a52-8a61-0a2862f012d9
3aa197cf-2c3a-4104-b0ac-0f36b7d86874
4ba25e47-4518-4497-b309-c8dfde6b9a43
bbf6d5b5-d66d-470e-a4ad-c163a4e23dd0
0453fa23-3143-45c5-82c9-803bbc2760ea
=========================================================

--- Additional comment from Vijay Bellur on 2015-11-25 05:01:36 EST ---

REVIEW: http://review.gluster.org/12743 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Vijay Bellur on 2015-12-14 05:19:59 EST ---

REVIEW: http://review.gluster.org/12743 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#2) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Vijay Bellur on 2015-12-16 10:58:09 EST ---

COMMIT: http://review.gluster.org/12743 committed in master by Venky Shankar (vshankar)
------
commit f853ed9c61bf65cb39f859470a8ffe8973818868
Author: Raghavendra Bhat <raghavendra>
Date: Wed Nov 25 15:25:26 2015 +0530

features/bit-rot-stub: delete the link for bad object in quarantine directory

When the bad object is deleted (as of now manually from the backend itself) along with its gfid handle, the entry for the bad object in the quarantine directory is left as it is (it can also be removed manually, though). But the next lookup of the object, upon not finding it in the backend, sends a forget on the in-memory inode. If the stale link for the gfid still exists in the quarantine directory, bit-rot-stub will unlink the entry in its forget, or in the next failed lookup on that object with errno being ENOENT.

Change-Id: If84292d3e44707dfa11fa29023b3d9f691b8f0f3
BUG: 1285241
Signed-off-by: Raghavendra Bhat <raghavendra>
Reviewed-on: http://review.gluster.org/12743
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Venky Shankar <vshankar>

--- Additional comment from Vijay Bellur on 2015-12-20 22:52:14 EST ---

REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#1) for review on release-3.7 by Venky Shankar (vshankar)
REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#2) for review on release-3.7 by Venky Shankar (vshankar)
REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#3) for review on release-3.7 by Venky Shankar (vshankar)
REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#4) for review on release-3.7 by Venky Shankar (vshankar)
REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#5) for review on release-3.7 by Venky Shankar (vshankar)
REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#6) for review on release-3.7 by Venky Shankar (vshankar)
REVIEW: http://review.gluster.org/13032 (features/bit-rot-stub: delete the link for bad object in quarantine directory) posted (#7) for review on release-3.7 by Venky Shankar (vshankar)
COMMIT: http://review.gluster.org/13032 committed in release-3.7 by Venky Shankar (vshankar)
------
commit 0f251fd517d2ed67330e11d56e9e64cca8390b35
Author: Raghavendra Bhat <raghavendra>
Date: Wed Nov 25 15:25:26 2015 +0530

features/bit-rot-stub: delete the link for bad object in quarantine directory

When the bad object is deleted (as of now manually from the backend itself) along with its gfid handle, the entry for the bad object in the quarantine directory is left as it is (it can also be removed manually, though). But the next lookup of the object, upon not finding it in the backend, sends a forget on the in-memory inode. If the stale link for the gfid still exists in the quarantine directory, bit-rot-stub will unlink the entry in its forget, or in the next failed lookup on that object with errno being ENOENT.

Change-Id: If84292d3e44707dfa11fa29023b3d9f691b8f0f3
BUG: 1293584
Signed-off-by: Raghavendra Bhat <raghavendra>
Reviewed-on: http://review.gluster.org/12743
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Venky Shankar <vshankar>
(cherry picked from commit f853ed9c61bf65cb39f859470a8ffe8973818868)
Reviewed-on: http://review.gluster.org/13032
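As the commit message above notes, before this fix the stale entry for a deleted bad object could also be cleared by hand on each brick. A rough sketch of that manual workaround, assuming the quarantine directory lives under <brick-path>/.glusterfs/quarantine (the brick path /bricks/brick0 is illustrative, and the exact location may vary by version; with the fix shipped in glusterfs-3.7.7 this should no longer be necessary):

(list the stale entries on the brick; the names are the gfids reported by scrub status)
# ls /bricks/brick0/.glusterfs/quarantine/

(remove the entry for an object that no longer exists on the volume)
# rm /bricks/brick0/.glusterfs/quarantine/f56046c2-1bf1-47f1-b3b4-40c40b116d3b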
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user