Bug 1517463 - [bitrot] scrub ondemand reports its start as success without additional detail [NEEDINFO]
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: bitrot
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Sunny Kumar
QA Contact: Sweta Anandpara
Depends On:
Blocks: 1503137 1539166
 
Reported: 2017-11-25 11:57 EST by Martin Bukatovic
Modified: 2018-09-04 02:41 EDT
CC List: 9 users

See Also:
Fixed In Version: glusterfs-3.12.2-3
Doc Type: Enhancement
Doc Text:
When scrub-throttle or scrub-frequency is set, the output for all successful BitRot operations (enable, disable, or scrub options) appears as: volume bitrot: scrub-frequency is set to <FREQUENCY> successfully for volume <VOLUME_NAME>
Story Points: ---
Clone Of:
Clones: 1539166
Environment:
Last Closed: 2018-09-04 02:39:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
srmukher: needinfo? (sunkumar)


Attachments


External Trackers
Tracker: Red Hat Product Errata | ID: RHSA-2018:2607 | Priority: None | Status: None | Summary: None | Last Updated: 2018-09-04 02:41 EDT

Description Martin Bukatovic 2017-11-25 11:57:24 EST
Description of problem
======================

When you start an ondemand scrub, the gluster CLI tool reports a bare
"volume bitrot: success". This output could be improved to point out
that the scrub operation has been successfully started.

Version-Release
===============

glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
python-gluster-3.8.4-52.el7rhgs.noarch
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64

How reproducible
================

100 %

Steps to Reproduce
==================

1. Create GlusterFS volume
2. Enable scrub on the volume
3. Load some data there
4. Run ondemand scrub (see the command sketch below)
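
A minimal console sketch of these steps, assuming hypothetical host names (host1-host3), a hypothetical brick path /bricks/b1, and a hypothetical volume name volume_beta; any volume type with bitrot enabled should reproduce the issue:

```
# gluster volume create volume_beta replica 3 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
# gluster volume start volume_beta
# gluster volume bitrot volume_beta enable
# mkdir -p /mnt/volume_beta && mount -t glusterfs host1:/volume_beta /mnt/volume_beta
# cp -a /usr/share/doc /mnt/volume_beta/
# gluster volume bitrot volume_beta scrub ondemand
volume bitrot: success
```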

Actual results
==============

As you can see here:

```
# gluster volume bitrot volume_beta_arbiter_2_plus_1x2 scrub ondemand
volume bitrot: success
```

the output is very short and could be misinterpreted as the status of the scrub
operation itself. It doesn't directly mention that a scrub operation has been started.

Expected results
================

The output of the `gluster volume bitrot VOLNAME scrub ondemand` command should state
that the ondemand scrub operation has been started.

See, for example, what the output of `btrfs scrub start` looks like:

```
# btrfs scrub start /mnt/btrfs/
scrub started on /mnt/btrfs/, fsid 6264de6e-d2c3-494a-aae0-2e3ec6f14c87 (pid=29794)
```
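
A message along these lines from the gluster command would remove the ambiguity (illustrative wording only, not actual glusterfs output; the volume name is hypothetical):

```
# gluster volume bitrot volume_beta scrub ondemand        # hypothetical volume name
volume bitrot: scrub ondemand started successfully for volume volume_beta
```

The confirmation line above is only a sketch of the kind of message being requested; the exact wording shipped with the fix may differ.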
Comment 2 Martin Bukatovic 2017-11-26 05:26:32 EST
Or compare that with the output of the heal command:

```
# gluster volume heal volume_alpha_distrep_6x2 full
Launching heal operation to perform full self heal on volume volume_alpha_distrep_6x2 has been successful 
Use heal info commands to check status
# 
```
Comment 3 Sunny Kumar 2018-01-26 15:00:14 EST
Upstream patch: https://review.gluster.org/19344
Comment 10 errata-xmlrpc 2018-09-04 02:39:49 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
