Bug 1359588 - [Bitrot - RFE]: On demand scrubbing option to scrub
Summary: [Bitrot - RFE]: On demand scrubbing option to scrub
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: bitrot
Version: rhgs-3.1
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Kotresh HR
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1351503 1366195
 
Reported: 2016-07-25 05:30 UTC by Kotresh HR
Modified: 2017-03-23 05:39 UTC (History)
5 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1366195
Environment:
Last Closed: 2017-03-23 05:39:58 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC
Red Hat Bugzilla 1365755 None None None Never

Internal Links: 1365755

Description Kotresh HR 2016-07-25 05:30:55 UTC
Description of problem:

Currently, the scrub-frequency value for the scrubber is one of {hourly|daily|weekly|biweekly|monthly}, so the scrubber has to wait for that interval before starting a scrub. This patch implements on-demand scrubbing: when the user triggers an on-demand scrub, the scrubber does not wait for the scrub frequency and starts crawling the filesystem immediately. The command for on-demand scrubbing is:

# gluster volume bitrot <VOLNAME> scrub ondemand

NOTE: For testing purposes, a scrub-frequency of 'minute' [1] is supported but is not exposed to consumers. Having an on-demand scrub option is cleaner.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1351537
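A typical end-to-end invocation might look like the sketch below (the volume name `testvol` is a placeholder; the `scrub status` call is included only to confirm that the trigger took effect, and is not part of this change):

```shell
# Enable bitrot detection on the volume; this starts the bitd and scrubber daemons.
gluster volume bitrot testvol enable

# Trigger a scrub immediately instead of waiting for scrub-frequency to elapse.
gluster volume bitrot testvol scrub ondemand

# Inspect scrubber state and per-node scrub statistics.
gluster volume bitrot testvol scrub status
```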

Comment 2 Atin Mukherjee 2016-08-09 05:06:38 UTC
Upstream mainline patch http://review.gluster.org/15111 posted for review.

Comment 4 Kotresh HR 2016-09-15 14:54:47 UTC
Downstream Patch:
https://code.engineering.redhat.com/gerrit/84665/

Comment 5 Atin Mukherjee 2016-09-16 05:12:13 UTC
The downstream patch is merged now.

Comment 8 Sweta Anandpara 2016-10-25 05:50:56 UTC
Tested and verified this on the build 3.8.4-2

On a bitrot-enabled volume, it is possible to trigger the scrub process on demand, and the CLI reflects the same. When the scrub ondemand command is executed, it immediately triggers a scrub run. If a scrub process is already in the middle of its run, the command fails with an appropriate error, BUT not all the time. At times it resets the scrub values and restarts the run (for which BZ 1388298 is raised).

Moving this BZ to verified in 3.2

[root@dhcp46-242 mnt]# gluster peer status
Number of Peers: 3

Hostname: dhcp46-239.lab.eng.blr.redhat.com
Uuid: ed362eb3-421c-4a25-ad0e-82ef157ea328
State: Peer in Cluster (Connected)

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-242 mnt]# 
[root@dhcp46-242 mnt]# 
[root@dhcp46-242 mnt]# rpm -qa | grep gluster
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-events-3.8.4-2.el7rhgs.x86_64
glusterfs-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-rdma-3.8.4-2.el7rhgs.x86_64
[root@dhcp46-242 mnt]# 
[root@dhcp46-242 mnt]# 
[root@dhcp46-242 mnt]# gluster v bitrot 
Usage: volume bitrot <VOLNAME> {enable|disable} |
volume bitrot <volname> scrub-throttle {lazy|normal|aggressive} |
volume bitrot <volname> scrub-frequency {hourly|daily|weekly|biweekly|monthly} |
volume bitrot <volname> scrub {pause|resume|status|ondemand}
[root@dhcp46-242 mnt]#
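The intermittent failure noted above can be reproduced with a sequence like the following sketch (volume name `testvol` is a placeholder; the exact error text is not recorded in this BZ):

```shell
# Kick off an on-demand scrub, then immediately trigger another
# while the first crawl is still running.
gluster volume bitrot testvol scrub ondemand
gluster volume bitrot testvol scrub ondemand

# Expected: the second invocation fails with an "in progress" error.
# Observed at times: it instead resets the scrub statistics and
# restarts the crawl (tracked separately in BZ 1388298).
```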

Comment 10 errata-xmlrpc 2017-03-23 05:39:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

