Bug 1517463 - [bitrot] scrub ondemand reports its start as success without additional detail
Summary: [bitrot] scrub ondemand reports its start as success without additional detail
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: bitrot
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Sunny Kumar
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1503137 1539166
 
Reported: 2017-11-25 16:57 UTC by Martin Bukatovic
Modified: 2018-12-13 06:44 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.12.2-3
Doc Type: Enhancement
Doc Text:
When scrub-throttle or scrub-frequency is set, the output for all successful BitRot operations (enable, disable, or the scrub options) appears as: "volume bitrot: scrub-frequency is set to <FREQUENCY> successfully for volume <VOLUME_NAME>"
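For illustration, assuming a hypothetical volume named testvol, the post-fix message described above would look roughly like this:

```
# gluster volume bitrot testvol scrub-frequency daily
volume bitrot: scrub-frequency is set to daily successfully for volume testvol
```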
Clone Of:
Clones: 1539166
Environment:
Last Closed: 2018-09-04 06:39:49 UTC
Embargoed:


Attachments: none


Links
Red Hat Bugzilla 1516484 (CLOSED): [bitrot] scrub doesn't catch file manually changed on one of bricks for disperse or arbiter volumes (last updated 2021-02-22 00:41:40 UTC)
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:41:33 UTC)

Internal Links: 1516484

Description Martin Bukatovic 2017-11-25 16:57:24 UTC
Description of problem
======================

When you start the ondemand scrub process, the gluster CLI tool reports a bare
"volume bitrot: success". This output could be improved to point out
that the scrub operation has been successfully started.

Version-Release
===============

glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
python-gluster-3.8.4-52.el7rhgs.noarch
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
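The package list above can be gathered with, for example:

```
# rpm -qa | grep -i gluster | sort
```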

How reproducible
================

100 %

Steps to Reproduce
==================

1. Create a GlusterFS volume
2. Enable scrub on the volume
3. Load some data onto it
4. Run an ondemand scrub (see the example session after this list)
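
A minimal reproduction session, assuming a hypothetical single-brick volume
named testvol; the hostname, brick path, and mount point below are illustrative:

```
# gluster volume create testvol server1:/bricks/testvol/brick force
# gluster volume start testvol
# gluster volume bitrot testvol enable
# mount -t glusterfs server1:/testvol /mnt/testvol
# cp -a /etc/services /mnt/testvol/
# gluster volume bitrot testvol scrub ondemand
volume bitrot: success
```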

Actual results
==============

As you can see here:

```
# gluster volume bitrot volume_beta_arbiter_2_plus_1x2 scrub ondemand
volume bitrot: success
```

the output is very terse and could be misinterpreted as the status of the scrub
operation itself. It does not directly mention that the scrub operation has been started.

Expected results
================

The output of the `gluster volume bitrot VOLNAME scrub ondemand` command should state
that the scrub ondemand operation has been started.
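
For example, a message along the following lines would be unambiguous (the
wording here is purely illustrative, not the actual fixed output):

```
## illustrative output only; not the actual post-fix wording
# gluster volume bitrot volume_beta_arbiter_2_plus_1x2 scrub ondemand
volume bitrot: success (scrub ondemand started for volume volume_beta_arbiter_2_plus_1x2)
```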

See, for example, what the output of btrfs scrub start looks like:

```
# btrfs scrub start /mnt/btrfs/
scrub started on /mnt/btrfs/, fsid 6264de6e-d2c3-494a-aae0-2e3ec6f14c87 (pid=29794)
```

Comment 2 Martin Bukatovic 2017-11-26 10:26:32 UTC
Or compare that with output of heal command:

```
# gluster volume heal volume_alpha_distrep_6x2 full
Launching heal operation to perform full self heal on volume volume_alpha_distrep_6x2 has been successful 
Use heal info commands to check status
# 
```

Comment 3 Sunny Kumar 2018-01-26 20:00:14 UTC
Upstream patch: https://review.gluster.org/19344

Comment 10 errata-xmlrpc 2018-09-04 06:39:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

