Bug 1387152 - [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster
Summary: [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: eventsapi
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Aravinda VK
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1387207 1388070
Blocks: 1351528
 
Reported: 2016-10-20 09:19 UTC by Sweta Anandpara
Modified: 2017-03-23 06:13 UTC (History)
2 users

Fixed In Version: glusterfs-3.8.4-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1387207 (view as bug list)
Environment:
Last Closed: 2017-03-23 06:13:02 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Sweta Anandpara 2016-10-20 09:19:51 UTC
Description of problem:
=======================
When no operation is performed on the gluster system, with just eventing enabled and a webhook listening on the other side, we repeatedly see VOLUME_SET events from every node of the cluster. On a 4-node cluster, we see 4 such events:
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'0dea52e0-8c32-4616-8ef8-16db16120eaa'}


These events appear to be harmless and inconsequential, and I have not been able to piece together a sequence of steps that triggers them. Hence raising a low-severity BZ for now; I will update the BZ when I am able to gather more information.


Version-Release number of selected component (if applicable):
============================================================
3.8.4-2


How reproducible:
===================
Seen it 3 times


Additional info:
==================

[root@dhcp46-242 ~]# gluster peer status
Number of Peers: 3

Hostname: dhcp46-239.lab.eng.blr.redhat.com
Uuid: ed362eb3-421c-4a25-ad0e-82ef157ea328
State: Peer in Cluster (Connected)

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-242 ~]# 
[root@dhcp46-242 ~]# 
[root@dhcp46-242 ~]# rpm -qa | grep gluster
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-events-3.8.4-2.el7rhgs.x86_64
glusterfs-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-rdma-3.8.4-2.el7rhgs.x86_64
[root@dhcp46-242 ~]# 


[root@dhcp46-159 ~]# python webhook.py 
 * Running on http://10.70.46.159:9000/ (Press CTRL+C to quit)

##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'0dea52e0-8c32-4616-8ef8-16db16120eaa'}
10.70.46.218 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'1e8967ae-51b2-4c27-907e-a22a83107fd0'}
10.70.46.242 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'72c4f894-61f7-433e-a546-4ad2d7f0a176'}
10.70.46.240 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'ed362eb3-421c-4a25-ad0e-82ef157ea328'}
10.70.46.239 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
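The webhook.py used above is not attached to this bug, but its observable behavior (accepting glustereventsd's JSON POSTs on /listen and printing each payload under a `#` banner, as in the log above) can be approximated with the stdlib-only sketch below. The module layout, port 9000, and banner width are assumptions inferred from the log output, not the actual test script.

```python
# Minimal stand-in for the webhook.py referenced in this bug.
# Assumption: glustereventsd POSTs one JSON object per event to /listen,
# with the fields (event, ts, nodeid, message) seen in the log above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SEPARATOR = "#" * 110  # banner width is a guess from the pasted output


def format_event(payload):
    """Return the banner-plus-payload block printed for each event."""
    return "%s\n%s" % (SEPARATOR, payload)


class Listener(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(format_event(payload))
        self.send_response(200)  # matches the "200 -" lines in the log
        self.end_headers()


def serve(port=9000):
    # glustereventsd would be pointed at this endpoint with:
    #   gluster-eventsapi webhook-add http://<host>:9000/listen
    HTTPServer(("0.0.0.0", port), Listener).serve_forever()
```

The request lines ("POST /listen HTTP/1.1" 200 -) are logged automatically by BaseHTTPRequestHandler, which matches the interleaved output seen above.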

Comment 2 Aravinda VK 2016-10-20 09:46:45 UTC
Looks like `gluster volume set help` was run, given the payload ({u'name': u'help', ...}).

I will handle the help case explicitly and send a patch so that no event is emitted when help is run.
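A sketch of that guard, expressed in Python purely for illustration: the actual fix lands in glusterd's C code (http://review.gluster.org/15685), and the exact set of help keywords checked is an assumption. In `gluster volume set help`, the word "help" occupies the volume-name position, which is why it shows up as u'name': u'help' in the event payload.

```python
# Hypothetical illustration of the fix: suppress the VOLUME_SET event
# when the "volume name" argument is actually a help request.
HELP_KEYS = ("help", "help-xml")  # assumed spellings of the help requests


def should_emit_volume_set_event(volname):
    """Return False when `gluster volume set help` (or help-xml) is run."""
    return volname not in HELP_KEYS
```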

Comment 3 Aravinda VK 2016-10-20 11:03:58 UTC
Patch posted in upstream
http://review.gluster.org/15685

Comment 5 Sweta Anandpara 2016-10-24 11:35:54 UTC
(update  Sweta Anandpara 2016-10-21 04:47:16 EDT on BZ 1387207)

Kindly note that I had not run any 'gluster volume set help' command. When I explicitly run that command now on my cluster, I do see a similar event being generated. However, the events seen until now, for which this bug was raised, were not user-driven; something must be triggering them internally.
The above note is FYI.

Nevertheless, the patch will address the outcome. We are good there.

Comment 7 Atin Mukherjee 2016-11-08 12:11:22 UTC
upstream mainline : http://review.gluster.org/15685
downstream patch : https://code.engineering.redhat.com/gerrit/89418

The 3.9 patch http://review.gluster.org/#/c/15714/ is posted, but it may only be merged after the 3.9.0 release, as the merge window is currently closed.

Comment 9 Sweta Anandpara 2016-11-16 06:03:01 UTC
Tested and verified this on build 3.8.4-5.

I have not seen any random 'VOLUME_SET' events during my 2 days of testing. Even when I explicitly execute 'gluster volume set help', no event is generated.

Moving this BZ to verified in 3.2. Detailed logs are pasted below:

[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-api-3.8.4-5.el7rhgs.x86_64
python-gluster-3.8.4-5.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-5.el7rhgs.x86_64
glusterfs-server-3.8.4-5.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-devel-3.8.4-5.el7rhgs.x86_64
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-libs-3.8.4-5.el7rhgs.x86_64
glusterfs-fuse-3.8.4-5.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-5.el7rhgs.x86_64
glusterfs-rdma-3.8.4-5.el7rhgs.x86_64
glusterfs-3.8.4-5.el7rhgs.x86_64
glusterfs-cli-3.8.4-5.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-5.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-4.el7rhgs.x86_64
glusterfs-events-3.8.4-5.el7rhgs.x86_64
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.242
Uuid: 1e8967ae-51b2-4c27-907e-a22a83107fd0
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-239 ~]# gluster v info
 
Volume Name: ozone
Type: Disperse
Volume ID: 376cdde0-194f-460a-b273-3904a704a7dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/bricks/brick0/ozone0
Brick2: 10.70.46.240:/bricks/brick0/ozone2
Brick3: 10.70.46.242:/bricks/brick0/ozone2
Brick4: 10.70.46.239:/bricks/brick1/ozone3
Brick5: 10.70.46.240:/bricks/brick1/ozone4
Brick6: 10.70.46.242:/bricks/brick1/ozone5
Options Reconfigured:
features.expiry-time: 5
features.scrub-freq: daily
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
 
Volume Name: rep3
Type: Distributed-Replicate
Volume ID: f29c2892-ac61-4b26-a146-8fa34038eb59
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.46.240:/bricks/brick0/rep30
Brick2: 10.70.46.242:/bricks/brick0/rep31
Brick3: 10.70.46.218:/bricks/brick0/rep32
Brick4: 10.70.46.240:/bricks/brick1/rep33
Brick5: 10.70.46.242:/bricks/brick1/rep34
Brick6: 10.70.46.218:/bricks/brick1/rep35
Brick7: 10.70.46.240:/bricks/brick2/rep36
Brick8: 10.70.46.242:/bricks/brick2/rep37
Brick9: 10.70.46.218:/bricks/brick2/rep38
Options Reconfigured:
features.expiry-time: 5
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
[root@dhcp46-239 ~]#

Comment 11 errata-xmlrpc 2017-03-23 06:13:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

