Bug 1517463

Summary: [bitrot] scrub ondemand reports its start as success without additional detail
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Martin Bukatovic <mbukatov>
Component: bitrot
Assignee: Sunny Kumar <sunkumar>
Status: CLOSED ERRATA
QA Contact: Sweta Anandpara <sanandpa>
Severity: low
Priority: low
Docs Contact:
Version: rhgs-3.3
CC: amukherj, khiremat, mkudlej, nchilaka, rhinduja, rhs-bugs, srmukher, storage-qa-internal, sunkumar
Target Milestone: ---
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.12.2-3
Doc Type: Enhancement
Doc Text:
When scrub-throttle or scrub-frequency is set, the output of every successful bitrot operation (enable, disable, or the scrub options) appears as: volume bitrot: scrub-frequency is set to <FREQUENCY> successfully for volume <VOLUME_NAME>
Story Points: ---
Clone Of:
Cloned As: 1539166
Environment:
Last Closed: 2018-09-04 06:39:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1503137, 1539166

Description Martin Bukatovic 2017-11-25 16:57:24 UTC
Description of problem
======================

When you start the on-demand scrub process, the gluster CLI tool reports a
bare "volume bitrot: success". This output could be improved to point out
that the scrub operation has been successfully started.

Version-Release
===============

glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
python-gluster-3.8.4-52.el7rhgs.noarch
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
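
For reference, a package list like the one above is typically collected with
something like:

```
# rpm -qa | grep gluster
```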

How reproducible
================

100 %

Steps to Reproduce
==================

1. Create a GlusterFS volume
2. Enable scrub on the volume
3. Load some data onto the volume
4. Run an on-demand scrub (see the shell sketch below)
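
A minimal shell sketch of these steps; the volume name, brick paths, and
mount point are illustrative:

```
# 1. Create and start a volume (brick paths are examples)
gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b2
gluster volume start testvol

# 2. Enabling bitrot detection on the volume also enables the scrubber
gluster volume bitrot testvol enable

# 3. Write some data through a FUSE mount
mount -t glusterfs server1:/testvol /mnt/testvol
dd if=/dev/urandom of=/mnt/testvol/data.bin bs=1M count=10

# 4. Trigger the on-demand scrub; this prints the terse message in question
gluster volume bitrot testvol scrub ondemand
```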

Actual results
==============

As you can see here:

```
# gluster volume bitrot volume_beta_arbiter_2_plus_1x2 scrub ondemand
volume bitrot: success
```

the output is very short and could be misinterpreted as the final status of the
scrub operation. It does not directly mention that the scrub operation has only
been started.

Expected results
================

The output of the `gluster volume bitrot VOLNAME scrub ondemand` command should
state that the on-demand scrub operation has been started.

See, for example, what the output of `btrfs scrub start` looks like:

```
# btrfs scrub start /mnt/btrfs/
scrub started on /mnt/btrfs/, fsid 6264de6e-d2c3-494a-aae0-2e3ec6f14c87 (pid=29794)
```
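
A message in the same spirit for the gluster CLI might look like the following;
the wording here is purely illustrative, not the exact text shipped with the fix:

```
# gluster volume bitrot VOLNAME scrub ondemand
volume bitrot: scrubbing has been started successfully for volume VOLNAME
Use 'gluster volume bitrot VOLNAME scrub status' to check progress
```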

Comment 2 Martin Bukatovic 2017-11-26 10:26:32 UTC
Or compare it with the output of the heal command:

```
# gluster volume heal volume_alpha_distrep_6x2 full
Launching heal operation to perform full self heal on volume volume_alpha_distrep_6x2 has been successful 
Use heal info commands to check status
# 
```

Comment 3 Sunny Kumar 2018-01-26 20:00:14 UTC
Upstream patch: https://review.gluster.org/19344

Comment 10 errata-xmlrpc 2018-09-04 06:39:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607