Description of problem: Currently the scrub-frequency value for the scrubber is one of {hourly|daily|weekly|biweekly|monthly}, so the scrubber has to wait for the configured frequency interval before starting a scrub. This patch implements on-demand scrubbing: if the user issues the on-demand scrub command, the scrubber does not wait for the scrub frequency and starts crawling the filesystem immediately.

Command for on-demand scrubbing:
# gluster volume bitrot <VOLNAME> scrub ondemand

NOTE: For testing purposes, a scrub-frequency of 'minute' [1] is supported but is not exposed to consumers. Having an on-demand scrub option is cleaner.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1351537
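For illustration, a minimal usage sketch (the volume name 'testvol' is a placeholder; command output is omitted):

# gluster volume bitrot testvol enable           <- enable bitrot detection; starts the bitd and scrubber daemons
# gluster volume bitrot testvol scrub ondemand   <- trigger a scrub run immediately, ignoring scrub-frequency
# gluster volume bitrot testvol scrub status     <- check scrub progress and statistics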
Upstream mainline patch http://review.gluster.org/15111 posted for review.
Downstream Patch: https://code.engineering.redhat.com/gerrit/84665/
The downstream patch is merged now.
Tested and verified this on the build 3.8.4-2.

On a bitrot-enabled volume, it is possible to trigger the scrub process on demand, and the CLI reflects the same. When the scrub ondemand command is executed, it immediately kicks off a scrubber run. If a scrub process is already in the middle of its run, the command fails with an appropriate error, BUT not all the time: at times it resets the scrub values and restarts the run (for which BZ 1388298 is raised).

Moving this BZ to verified in 3.2.

[root@dhcp46-242 mnt]# gluster peer status
Number of Peers: 3

Hostname: dhcp46-239.lab.eng.blr.redhat.com
Uuid: ed362eb3-421c-4a25-ad0e-82ef157ea328
State: Peer in Cluster (Connected)

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-242 mnt]#
[root@dhcp46-242 mnt]# rpm -qa | grep gluster
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-events-3.8.4-2.el7rhgs.x86_64
glusterfs-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-rdma-3.8.4-2.el7rhgs.x86_64
[root@dhcp46-242 mnt]#
[root@dhcp46-242 mnt]# gluster v bitrot
Usage: volume bitrot <VOLNAME> {enable|disable} |
       volume bitrot <volname> scrub-throttle {lazy|normal|aggressive} |
       volume bitrot <volname> scrub-frequency {hourly|daily|weekly|biweekly|monthly} |
       volume bitrot <volname> scrub {pause|resume|status|ondemand}
[root@dhcp46-242 mnt]#
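A sketch of the verification flow described above (the volume name 'ozone' is a placeholder; exact output varies by version and is omitted):

# gluster volume bitrot ozone scrub ondemand   <- kicks off a scrub run immediately
# gluster volume bitrot ozone scrub status     <- shows scrub statistics for the run
# gluster volume bitrot ozone scrub ondemand   <- re-running while a scrub is still in progress should fail with an error; BZ 1388298 tracks the cases where it instead resets the scrub values and restarts the run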
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html