Bug 1387864

Summary: [Eventing]: 'gluster vol bitrot <volname> scrub ondemand' does not produce an event
Product: [Community] GlusterFS
Component: bitrot
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Reporter: Kotresh HR <khiremat>
Assignee: Kotresh HR <khiremat>
CC: amukherj, bugs, sanandpa, storage-qa-internal, vbellur
Fixed In Version: glusterfs-3.10.0
Clone Of: 1384311
Last Closed: 2017-03-06 17:30:52 UTC
Type: Bug
Bug Depends On: 1384311
Bug Blocks: 1387964

Description Kotresh HR 2016-10-22 18:28:10 UTC
+++ This bug was initially created as a clone of Bug #1384311 +++

Description of problem:
========================
Pausing or resuming the bitrot scrubber generates a BITROT_SCRUB_OPTION event, since the scrub option has changed. Similarly, changing the scrub frequency to hourly/daily/weekly generates a BITROT_SCRUB_FREQ event. Triggering the scrubber on demand likewise changes the system state, so it should produce an event as well. However, no event is generated when the scrubber is triggered ondemand.
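For reference, the gluster events daemon delivers each such event to registered webhooks as a JSON body in an HTTP POST, so any small HTTP endpoint can serve as the listener in the reproduction below. A minimal sketch in Python (the port 9000 and the /listen path are arbitrary choices for illustration, not anything gluster mandates):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EventHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # glustereventsd POSTs one JSON document per event
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length))
            print(event.get("event"), event.get("message"))
            self.send_response(200)
            self.end_headers()

    # Register with: gluster-eventsapi webhook-add http://<host>:9000/listen
    HTTPServer(("", 9000), EventHandler).serve_forever()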


Version-Release number of selected component (if applicable):
============================================================


How reproducible:
=================
Always


Steps to Reproduce:
====================
1. Have a 4-node cluster with a plain distribute volume. Enable eventing and register a webhook as a listener (an illustrative command transcript follows this list).
2. Create a volume 'vol1' and start it. The corresponding events are seen, as expected.
3. Enable bitrot, change the scrub frequency, and try to pause/resume. Watch for the related events BITROT_ENABLE, BITROT_SCRUB_FREQ, BITROT_SCRUB_OPTION.
4. Trigger the scrubber on demand, and monitor whether the event BITROT_SCRUB_OPTION is seen.
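
An illustrative command transcript for the steps above (volume name, brick paths, and webhook URL are examples only):

    # gluster-eventsapi webhook-add http://<host>:9000/listen
    # gluster volume create vol1 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1
    # gluster volume start vol1
    # gluster volume bitrot vol1 enable
    # gluster volume bitrot vol1 scrub-frequency hourly
    # gluster volume bitrot vol1 scrub pause
    # gluster volume bitrot vol1 scrub resume
    # gluster volume bitrot vol1 scrub ondemand    <-- no event observed for this command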

Actual results:
==============
All expected events are seen for steps 1 through 3. No event is seen at step 4.

Expected results:
================
An event is expected at step 4.

Comment 1 Worker Ant 2016-10-22 18:30:42 UTC
REVIEW: http://review.gluster.org/15700 (bitrot/cli: Add ondemand scrub event) posted (#1) for review on master by Kotresh HR (khiremat)

Comment 2 Worker Ant 2016-10-23 06:39:46 UTC
COMMIT: http://review.gluster.org/15700 committed in master by Atin Mukherjee (amukherj) 
------
commit 255cc64375abe2925c7da1e13e45018dad4462df
Author: Kotresh HR <khiremat>
Date:   Sat Oct 22 23:50:02 2016 +0530

    bitrot/cli: Add ondemand scrub event
    
    Following Bitrot Events are added
    
    BITROT_SCRUB_ONDEMAND
    {
         "nodeid": NODEID,
         "ts": TIMESTAMP,
         "event": EVENT_TYPE,
         "message": {
            "name": VOLUME_NAME,
         }
    }
    
    Change-Id: I85e668e254e6f29c447ddb4ad2ce2fc04f98bf3c
    BUG: 1387864
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/15700
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
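
With this fix applied, a webhook registered as above should receive a payload of the shape shown in the commit message when 'gluster volume bitrot <volname> scrub ondemand' is run. An illustrative instance (all field values invented for illustration):

    {
        "nodeid": "6eff1190-4a22-4a16-9652-dd1ea5b0b3a0",
        "ts": 1477160402,
        "event": "BITROT_SCRUB_ONDEMAND",
        "message": {
            "name": "vol1"
        }
    }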

Comment 3 Shyamsundar 2017-03-06 17:30:52 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/