Description of problem
======================
When you start an ondemand scrub, the gluster CLI tool reports a bare
"volume bitrot: success". This output could be improved to point out that
the scrub operation has been successfully started.

Version-Release
===============
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
python-gluster-3.8.4-52.el7rhgs.noarch
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64

How reproducible
================
100 %

Steps to Reproduce
==================
1. Create a GlusterFS volume
2. Enable scrub on the volume
3. Load some data there
4. Run ondemand scrub (a reproduction sketch follows the Expected results
   section below)

Actual results
==============
As you can see here:

```
# gluster volume bitrot volume_beta_arbiter_2_plus_1x2 scrub ondemand
volume bitrot: success
```

the output is very terse and could be misinterpreted as a status report for
the scrub operation. It doesn't directly mention that a scrub run has been
started.

Expected results
================
The output of the `gluster volume bitrot VOLNAME scrub ondemand` command
should state that the ondemand scrub operation has been started. See, for
example, what `btrfs scrub start` reports:

```
# btrfs scrub start /mnt/btrfs/
scrub started on /mnt/btrfs/, fsid 6264de6e-d2c3-494a-aae0-2e3ec6f14c87 (pid=29794)
```
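For reference, the steps above can be run with commands along these lines.
This is a minimal sketch: the volume name, hosts, brick paths, and the plain
replica 3 layout are assumptions for illustration, not taken from the report
above.

```
# Assumed hosts and brick paths; any supported volume layout will do.
gluster volume create testvol replica 3 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start testvol

# Enable bitrot detection; this also starts the scrubber daemon.
gluster volume bitrot testvol enable

# Load some data through a FUSE mount.
mount -t glusterfs server1:/testvol /mnt/testvol
cp -r /usr/share/doc /mnt/testvol/

# Trigger the ondemand scrub; this prints only "volume bitrot: success".
gluster volume bitrot testvol scrub ondemand
```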
Or compare it with the output of the heal command:

```
# gluster volume heal volume_alpha_distrep_6x2 full
Launching heal operation to perform full self heal on volume volume_alpha_distrep_6x2 has been successful
Use heal info commands to check status
#
```
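A message in the same spirit would remove the ambiguity. The following is
purely an illustrative mockup of what the command could print, not the
wording actually chosen by the fix; the volume name is reused from the
example above:

```
# gluster volume bitrot volume_beta_arbiter_2_plus_1x2 scrub ondemand
volume bitrot: success
Scrubbing has been started on volume volume_beta_arbiter_2_plus_1x2
Use 'gluster volume bitrot VOLNAME scrub status' to check progress
```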
Upstream patch: https://review.gluster.org/19344
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2607