Description of problem:
When a user deletes all the files from the mount point, the scrub status output still shows the corrupted-objects list; it does not get cleared. As a result, the next time the scrubber runs and finds an object as bad, it appends the newly corrupted file to the existing list, and wrong information is displayed in the output.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-7.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and enable bitrot on the volume.
2. Fuse mount the volume and create some files.
3. Corrupt some files from the backend and wait for the scrubber to mark those files as bad.
4. Run `gluster volume bitrot <vol_name> scrub status`; the gfids of the corrupted files are listed in the output.
5. Delete all the files from the mount point and create new files.
6. Corrupt a newly created file and wait for the scrubber to run.

Actual results:
1) Even after the files are deleted from the mount point, the scrub status output still shows their gfids.
2) The next time the scrubber runs, the list of corrupted files grows to old count + new count.

Expected results:
1) Once the files are deleted from the mount point, scrub status should not show the non-existent gfids.
2) If a newly created file gets corrupted, the corrupted-objects count should not be appended to the old count.
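Step 3 above corrupts files directly on the brick, behind the mount point's back, so the stored checksum no longer matches the signature recorded by the bitrot daemon. A minimal sketch of that effect, using a temporary file as a stand-in for a brick path (the path and data here are illustrative, not from a real brick):

```python
import hashlib
import tempfile
import os

# Stand-in for a file on the brick backend, e.g. /bricks/brick1/<file>
# on a real setup (hypothetical path).
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "wb") as f:
    f.write(b"original data")
before = hashlib.sha256(open(path, "rb").read()).hexdigest()

# Corrupt the file from the backend, as in step 3: append garbage
# without going through the mount point.
with open(path, "ab") as f:
    f.write(b"garbage")
after = hashlib.sha256(open(path, "rb").read()).hexdigest()

# The checksum no longer matches what was signed, so the next scrub
# run marks the object as bad.
print(before != after)
os.remove(path)
```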
Additional info:

scrub status output before deleting the files:
=========================================================
[root@rhs-client2 b3]# gluster vol bitrot vol_dis_rep scrub status

Volume name : vol_dis_rep
State of scrub: Active
Scrub impact: lazy
Scrub frequency: hourly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name: localhost
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f56046c2-1bf1-47f1-b3b4-40c40b116d3b
2883c58a-7b81-47ab-a83c-408fe23e9767
b3bfe63e-44fd-46e9-8e01-cf84af2f4f4e
5ddaf82c-1238-4af7-980a-332da1c3e47a
f4151cef-285a-4d76-851e-ffeab7ca78cd
c805d027-2a46-4c97-ad13-9ae919164fe6
=========================================================
Node name: 10.70.36.62
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f47c1450-581b-42d5-ad9b-1955c9d4e7a0
50654920-8894-4a52-8a61-0a2862f012d9
3aa197cf-2c3a-4104-b0ac-0f36b7d86874
4ba25e47-4518-4497-b309-c8dfde6b9a43
bbf6d5b5-d66d-470e-a4ad-c163a4e23dd0
0453fa23-3143-45c5-82c9-803bbc2760ea
=========================================================

scrub status output after deleting the files:
=========================================================
[root@rhs-client2 b3]# gluster vol bitrot vol_dis_rep scrub status

Volume name : vol_dis_rep
State of scrub: Active
Scrub impact: lazy
Scrub frequency: hourly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name: localhost
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f56046c2-1bf1-47f1-b3b4-40c40b116d3b
2883c58a-7b81-47ab-a83c-408fe23e9767
b3bfe63e-44fd-46e9-8e01-cf84af2f4f4e
5ddaf82c-1238-4af7-980a-332da1c3e47a
f4151cef-285a-4d76-851e-ffeab7ca78cd
c805d027-2a46-4c97-ad13-9ae919164fe6
=========================================================
Node name: 10.70.36.62
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f47c1450-581b-42d5-ad9b-1955c9d4e7a0
50654920-8894-4a52-8a61-0a2862f012d9
3aa197cf-2c3a-4104-b0ac-0f36b7d86874
4ba25e47-4518-4497-b309-c8dfde6b9a43
bbf6d5b5-d66d-470e-a4ad-c163a4e23dd0
0453fa23-3143-45c5-82c9-803bbc2760ea
=========================================================

scrub status output after deleting old files and creating new files from the mount point:
=========================================================
[root@rhs-client2 b3]# gluster vol bitrot vol_dis_rep scrub status

Volume name : vol_dis_rep
State of scrub: Active
Scrub impact: lazy
Scrub frequency: hourly
Bitrot error log location: /var/log/glusterfs/bitd.log
Scrubber error log location: /var/log/glusterfs/scrub.log
=========================================================
Node name: localhost
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 7
Corrupted object's:
f56046c2-1bf1-47f1-b3b4-40c40b116d3b
2883c58a-7b81-47ab-a83c-408fe23e9767
b3bfe63e-44fd-46e9-8e01-cf84af2f4f4e
5ddaf82c-1238-4af7-980a-332da1c3e47a
f4151cef-285a-4d76-851e-ffeab7ca78cd
c805d027-2a46-4c97-ad13-9ae919164fe6
e8561c6b-f881-499b-808b-7fa2bce190f7
=========================================================
Node name: 10.70.36.62
Number of Scrubbed files: 0
Number of Unsigned files: 0
Last completed scrub time: 0
Duration of last scrub: 0
Error count: 6
Corrupted object's:
f47c1450-581b-42d5-ad9b-1955c9d4e7a0
50654920-8894-4a52-8a61-0a2862f012d9
3aa197cf-2c3a-4104-b0ac-0f36b7d86874
4ba25e47-4518-4497-b309-c8dfde6b9a43
bbf6d5b5-d66d-470e-a4ad-c163a4e23dd0
0453fa23-3143-45c5-82c9-803bbc2760ea
=========================================================
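The outputs above show the bookkeeping problem: on localhost the error count grows from 6 to 7 because the stale gfids of deleted files are kept and the new gfid is simply appended. A simplified model of the buggy behaviour versus the expected one (this is not the actual glusterfs code; function names and the `existing_objects` parameter are illustrative):

```python
def scrub_status_buggy(previous, newly_bad):
    # Buggy behaviour: stale entries are never cleared, so the
    # corrupted-objects list only ever grows (old count + new count).
    return previous + newly_bad

def scrub_status_expected(previous, newly_bad, existing_objects):
    # Expected behaviour: drop gfids whose objects no longer exist
    # on the volume, then record the newly corrupted ones.
    kept = [gfid for gfid in previous if gfid in existing_objects]
    return kept + newly_bad

# Bad files from the first scrub run, later deleted from the mount point,
# and one newly corrupted file (gfid prefixes from the report above).
old = ["f56046c2", "2883c58a"]
new = ["e8561c6b"]

print(scrub_status_buggy(old, new))        # stale + new entries
print(scrub_status_expected(old, new, existing_objects={"e8561c6b"}))
```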
Sent the patch upstream for review: http://review.gluster.org/#/c/12743/
Verified with build glusterfs-3.7.5-12.el7rhgs.x86_64; works as expected. The corrupted-objects list is cleared when all the files in the volume are deleted, and the count is no longer incremented as old + new. Once the bad files are recovered manually, scrub status displays the error count and corrupted objects as zero.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html