Bug 1387152
| Field | Value |
|---|---|
| Summary | [Eventing]: Random VOLUME_SET events seen when no operation is done on the gluster cluster |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | eventsapi |
| Version | rhgs-3.2 |
| Status | CLOSED ERRATA |
| Severity | low |
| Priority | unspecified |
| Reporter | Sweta Anandpara <sanandpa> |
| Assignee | Aravinda VK <avishwan> |
| QA Contact | Sweta Anandpara <sanandpa> |
| CC | amukherj, rhinduja |
| Target Milestone | --- |
| Target Release | RHGS 3.2.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-3.8.4-4 |
| Doc Type | If docs needed, set a value |
| Story Points | --- |
| Clones | 1387207 (view as bug list) |
| Last Closed | 2017-03-23 06:13:02 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | --- |
| Cloudforms Team | --- |
| Bug Depends On | 1387207, 1388070 |
| Bug Blocks | 1351528 |
Description (Sweta Anandpara, 2016-10-20 09:19:51 UTC)
Looks like `gluster volume set help` is run: {u'name': u'help', .. I will handle the help case explicitly and send a patch so that no event is sent when help is run. Patch posted upstream: http://review.gluster.org/15685
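For context, the fix guards event generation on the pseudo-option that a help request produces. The actual change is in glusterd's C code (the patch linked above); the sketch below is only a hypothetical Python rendering of the same guard, with `send_event` and `maybe_send_volume_set_event` as illustrative names, not gluster APIs.

```python
# Hypothetical sketch of the guard introduced by the fix; the real change is
# in glusterd (http://review.gluster.org/15685). Names here are illustrative.

# Assumption: keys the CLI passes through for `volume set help` variants.
HELP_KEYS = {"help", "help-xml"}

def maybe_send_volume_set_event(volname, options, send_event):
    """Publish a VOLUME_SET event unless the 'set' was really a help request."""
    # `gluster volume set help` reaches the same handler as a real volume set,
    # carrying a pseudo-option named 'help' (seen above as {u'name': u'help', ..}).
    # Skip event generation in that case.
    if any(key in HELP_KEYS for key in options):
        return
    send_event("VOLUME_SET", name=volname, options=dict(options))

# Example: a real set fires the event, a help lookup does not.
if __name__ == "__main__":
    fired = []
    emit = lambda event, **data: fired.append((event, data))
    maybe_send_volume_set_event("ozone", {"nfs.disable": "on"}, emit)
    maybe_send_volume_set_event("ozone", {"help": ""}, emit)
    assert len(fired) == 1 and fired[0][0] == "VOLUME_SET"
```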
(Update from Sweta Anandpara, 2016-10-21 04:47:16 EDT, on BZ 1387207) Kindly note that I had not run any `gluster volume set help` command. When I explicitly run that command now on my cluster, I do see a similar event being generated. However, the event I was seeing until now, for which this bug was raised, was not user-driven; it has to be something that is triggered internally. The above note is FYI. Nevertheless, the patch will address the outcome, so we are good there.

upstream mainline: http://review.gluster.org/15685
downstream patch: https://code.engineering.redhat.com/gerrit/89418

The 3.9 patch http://review.gluster.org/#/c/15714/ is posted, but it may only get merged after the 3.9.0 release, as the merge window is blocked.

Tested and verified this on build 3.8.4-5. I have not seen any random VOLUME_SET events during my two days of testing. Even when I explicitly execute `gluster volume set help`, no event is generated. Moving this BZ to verified in 3.2. Detailed logs are pasted below:

```
[root@dhcp46-239 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-api-3.8.4-5.el7rhgs.x86_64
python-gluster-3.8.4-5.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-5.el7rhgs.x86_64
glusterfs-server-3.8.4-5.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-devel-3.8.4-5.el7rhgs.x86_64
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-libs-3.8.4-5.el7rhgs.x86_64
glusterfs-fuse-3.8.4-5.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-5.el7rhgs.x86_64
glusterfs-rdma-3.8.4-5.el7rhgs.x86_64
glusterfs-3.8.4-5.el7rhgs.x86_64
glusterfs-cli-3.8.4-5.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-5.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-4.el7rhgs.x86_64
glusterfs-events-3.8.4-5.el7rhgs.x86_64

[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.242
Uuid: 1e8967ae-51b2-4c27-907e-a22a83107fd0
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)

[root@dhcp46-239 ~]# gluster v info

Volume Name: ozone
Type: Disperse
Volume ID: 376cdde0-194f-460a-b273-3904a704a7dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/bricks/brick0/ozone0
Brick2: 10.70.46.240:/bricks/brick0/ozone2
Brick3: 10.70.46.242:/bricks/brick0/ozone2
Brick4: 10.70.46.239:/bricks/brick1/ozone3
Brick5: 10.70.46.240:/bricks/brick1/ozone4
Brick6: 10.70.46.242:/bricks/brick1/ozone5
Options Reconfigured:
features.expiry-time: 5
features.scrub-freq: daily
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable

Volume Name: rep3
Type: Distributed-Replicate
Volume ID: f29c2892-ac61-4b26-a146-8fa34038eb59
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.46.240:/bricks/brick0/rep30
Brick2: 10.70.46.242:/bricks/brick0/rep31
Brick3: 10.70.46.218:/bricks/brick0/rep32
Brick4: 10.70.46.240:/bricks/brick1/rep33
Brick5: 10.70.46.242:/bricks/brick1/rep34
Brick6: 10.70.46.218:/bricks/brick1/rep35
Brick7: 10.70.46.240:/bricks/brick2/rep36
Brick8: 10.70.46.242:/bricks/brick2/rep37
Brick9: 10.70.46.218:/bricks/brick2/rep38
Options Reconfigured:
features.expiry-time: 5
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
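For reference, verification like the above relies on watching the event stream. glusterfs-events pushes each event as a JSON POST to registered webhooks; below is a minimal sketch of such a listener, assuming a webhook registered with `gluster-eventsapi webhook-add http://<listener-host>:9000/listen` and a payload carrying nodeid/ts/event/message fields (an assumption; check the docs for your version).

```python
# Minimal sketch of a webhook listener for spotting stray VOLUME_SET events.
# Assumes registration via: gluster-eventsapi webhook-add http://<host>:9000/listen
# The payload field names (nodeid, ts, event, message) are an assumption here.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Log only the event type this bug is about.
        if payload.get("event") == "VOLUME_SET":
            print("VOLUME_SET from node %s at ts %s: %s" % (
                payload.get("nodeid"), payload.get("ts"),
                payload.get("message")))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), EventHandler).serve_forever()
```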