Description of problem:
=======================
When no operation is performed on the gluster system, with only eventing enabled and a webhook listening on the other side, we repeatedly see VOLUME_SET events from every node of the cluster. In a 4-node cluster, we see 4 such events:

{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'0dea52e0-8c32-4616-8ef8-16db16120eaa'}

These appear to be harmless and inconsequential, and I have not yet been able to piece together a sequence of steps that triggers them. Hence raising a low-severity BZ for now; I will update the BZ when I am able to gather more information.

Version-Release number of selected component (if applicable):
=============================================================
3.8.4-2

How reproducible:
=================
Seen 3 times

Additional info:
================
[root@dhcp46-242 ~]# gluster peer status
Number of Peers: 3

Hostname: dhcp46-239.lab.eng.blr.redhat.com
Uuid: ed362eb3-421c-4a25-ad0e-82ef157ea328
State: Peer in Cluster (Connected)

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-242 ~]#
[root@dhcp46-242 ~]# rpm -qa | grep gluster
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-events-3.8.4-2.el7rhgs.x86_64
glusterfs-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-rdma-3.8.4-2.el7rhgs.x86_64
[root@dhcp46-242 ~]#

[root@dhcp46-159 ~]# python webhook.py
 * Running on http://10.70.46.159:9000/ (Press CTRL+C to quit)
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'0dea52e0-8c32-4616-8ef8-16db16120eaa'}
10.70.46.218 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'1e8967ae-51b2-4c27-907e-a22a83107fd0'}
10.70.46.242 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'72c4f894-61f7-433e-a546-4ad2d7f0a176'}
10.70.46.240 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
##############################################################################################################
{u'message': {u'name': u'help', u'options': [[u'']]}, u'event': u'VOLUME_SET', u'ts': 1476936891, u'nodeid': u'ed362eb3-421c-4a25-ad0e-82ef157ea328'}
10.70.46.239 - - [15/Oct/2016 05:59:22] "POST /listen HTTP/1.1" 200 -
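The webhook.py used above is not attached to this BZ; for anyone wanting to reproduce the listener side, a minimal stand-in can be sketched with only the Python standard library. The only assumption is that glustereventsd POSTs each event as a JSON body to the registered webhook URL, which the access log above is consistent with; the /listen path and handler names are just illustrative.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # events collected for later inspection

class ListenHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # glustereventsd POSTs a JSON body to the registered webhook URL
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        received.append(event)
        print(event)  # e.g. {'event': 'VOLUME_SET', 'ts': ..., 'nodeid': ..., 'message': {...}}
        self.send_response(200)  # 200 tells glustereventsd the delivery succeeded
        self.end_headers()

    def log_message(self, fmt, *args):
        # mimic the access-log lines seen in the output above
        print("%s - - [%s] %s" % (self.client_address[0],
                                  self.log_date_time_string(), fmt % args))

def serve(port=9000):
    # register with: gluster-eventsapi webhook-add http://<ip>:9000/listen
    HTTPServer(("", port), ListenHandler).serve_forever()
```

Registering the URL with `gluster-eventsapi webhook-add` and running `serve()` should then show one log block per event, per node, as in the output above.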
Looks like `gluster volume set help` is being run somewhere ({u'name': u'help', ...}). I will handle the help case explicitly and send a patch so that no event is emitted when help is run.
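For illustration only, since the actual fix lives in the glusterd C sources: the guard being proposed amounts to something like the following hypothetical helper. The function name and the empty-options check are assumptions made for this sketch, not the real code; the [[u'']] options shape is taken from the spurious events above.

```python
def should_emit_volume_set_event(volname, options):
    """Decide whether a VOLUME_SET event should be sent to webhooks.

    Hypothetical restatement of the guard: when the CLI runs
    `gluster volume set help`, the "volume name" parsed from the command
    is the literal string "help" and no real option is set, so no event
    should be generated.
    """
    if volname == "help":
        return False
    # an empty/blank option list, as in the spurious events ([[u'']]),
    # carries no information worth notifying webhooks about
    if not any(opt and any(opt) for opt in options):
        return False
    return True
```

With a guard like this, the help invocation (and the blank-options payload seen in the logs) is filtered out while a genuine `volume set` still produces an event.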
Patch posted upstream: http://review.gluster.org/15685
(update Sweta Anandpara 2016-10-21 04:47:16 EDT on BZ 1387207) Kindly note that I had not run any 'gluster volume set help' command. When I explicitly run that command now on my cluster, I do see a similar event being generated. However, the event I had been seeing until now, for which this bug was raised, was not user-driven; it has to be something that is triggered internally. The above note is FYI. Nevertheless, the patch will address the outcome, so we are good there.
Upstream mainline : http://review.gluster.org/15685
Downstream patch  : https://code.engineering.redhat.com/gerrit/89418

The 3.9 patch http://review.gluster.org/#/c/15714/ is posted, but it may get merged after the 3.9.0 release as the merge window is blocked.
Tested and verified this on build 3.8.4-5. I have not seen any random 'VOLUME_SET' events during my 2 days of testing. Even when I explicitly execute the command 'gluster volume set help', I no longer see an event being generated. Moving this BZ to verified in 3.2. Detailed logs are pasted below:

[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-api-3.8.4-5.el7rhgs.x86_64
python-gluster-3.8.4-5.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-5.el7rhgs.x86_64
glusterfs-server-3.8.4-5.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-devel-3.8.4-5.el7rhgs.x86_64
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-libs-3.8.4-5.el7rhgs.x86_64
glusterfs-fuse-3.8.4-5.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-5.el7rhgs.x86_64
glusterfs-rdma-3.8.4-5.el7rhgs.x86_64
glusterfs-3.8.4-5.el7rhgs.x86_64
glusterfs-cli-3.8.4-5.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-5.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-4.el7rhgs.x86_64
glusterfs-events-3.8.4-5.el7rhgs.x86_64
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.242
Uuid: 1e8967ae-51b2-4c27-907e-a22a83107fd0
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-239 ~]# gluster v info

Volume Name: ozone
Type: Disperse
Volume ID: 376cdde0-194f-460a-b273-3904a704a7dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/bricks/brick0/ozone0
Brick2: 10.70.46.240:/bricks/brick0/ozone2
Brick3: 10.70.46.242:/bricks/brick0/ozone2
Brick4: 10.70.46.239:/bricks/brick1/ozone3
Brick5: 10.70.46.240:/bricks/brick1/ozone4
Brick6: 10.70.46.242:/bricks/brick1/ozone5
Options Reconfigured:
features.expiry-time: 5
features.scrub-freq: daily
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable

Volume Name: rep3
Type: Distributed-Replicate
Volume ID: f29c2892-ac61-4b26-a146-8fa34038eb59
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.46.240:/bricks/brick0/rep30
Brick2: 10.70.46.242:/bricks/brick0/rep31
Brick3: 10.70.46.218:/bricks/brick0/rep32
Brick4: 10.70.46.240:/bricks/brick1/rep33
Brick5: 10.70.46.242:/bricks/brick1/rep34
Brick6: 10.70.46.218:/bricks/brick1/rep35
Brick7: 10.70.46.240:/bricks/brick2/rep36
Brick8: 10.70.46.242:/bricks/brick2/rep37
Brick9: 10.70.46.218:/bricks/brick2/rep38
Options Reconfigured:
features.expiry-time: 5
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
[root@dhcp46-239 ~]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html