Description of problem:
=======================
The gluster-eventsapi command, while adding a webhook, displayed the trace below:

Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 232, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')

However, the webhook-add operation itself had succeeded (as seen in the last line of the traceback). Hence the priority 'medium'.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-1

How reproducible:
=================
1:1

Steps to Reproduce:
===================
1. Had a 4-node cluster with the glusterfs-events rpm installed, and had added a webhook running on a separate client, say node_x. The day ended and I closed all the open sessions.
2. The next day, created another 4-node cluster and tried to add the same webhook, running on node_x. The 'gluster-eventsapi webhook-test' command showed no errors, but 'gluster-eventsapi webhook-add' showed a traceback. However, the command had internally succeeded; this was confirmed by executing 'gluster-eventsapi webhook-add' again, which resulted in 'Webhook already exists'.

Additional info:
================
[root@dhcp35-137 ~]# systemctl status glustereventsd
● glustereventsd.service - Gluster Events Notifier
   Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-09-28 08:37:18 UTC; 6s ago
 Main PID: 17128 (python)
   CGroup: /system.slice/glustereventsd.service
           └─17128 python /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid

Sep 28 08:37:18 dhcp35-137.lab.eng.blr.redhat.com systemd[1]: Started Gluster Events Notifier.
Sep 28 08:37:18 dhcp35-137.lab.eng.blr.redhat.com systemd[1]: Starting Gluster Events Notifier...
[root@dhcp35-137 ~]#
[root@dhcp35-137 ~]# gluster-eventsapi status
Webhooks: None

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| 10.70.35.85                       | DOWN        | DOWN                  |
| dhcp35-210.lab.eng.blr.redhat.com | UP          | UP                    |
| 10.70.35.110                      | UP          | UP                    |
| 10.70.35.13                       | UP          | UP                    |
| localhost                         | UP          | UP                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-137 ~]#
[root@dhcp35-137 ~]# gluster peer status
Number of Peers: 4

Hostname: 10.70.35.85
Uuid: 187296f3-99df-46b4-b3f7-57e7314994dd
State: Peer Rejected (Connected)

Hostname: dhcp35-210.lab.eng.blr.redhat.com
Uuid: d97a106e-4c4e-4b1c-ba02-c1ca12d594a6
State: Peer in Cluster (Connected)

Hostname: 10.70.35.110
Uuid: 94c268fe-2c74-429f-8ada-cd336c476037
State: Peer in Cluster (Connected)

Hostname: 10.70.35.13
Uuid: a756f3da-7896-4970-a77d-4829e603f773
State: Peer in Cluster (Connected)

[root@dhcp35-137 ~]# gluster peer status
Number of Peers: 3

Hostname: dhcp35-210.lab.eng.blr.redhat.com
Uuid: d97a106e-4c4e-4b1c-ba02-c1ca12d594a6
State: Peer in Cluster (Connected)

Hostname: 10.70.35.110
Uuid: 94c268fe-2c74-429f-8ada-cd336c476037
State: Peer in Cluster (Connected)

Hostname: 10.70.35.13
Uuid: a756f3da-7896-4970-a77d-4829e603f773
State: Peer in Cluster (Connected)

[root@dhcp35-137 ~]# gluster-eventsapi status
Webhooks: None

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| dhcp35-210.lab.eng.blr.redhat.com | UP          | UP                    |
| 10.70.35.110                      | UP          | UP                    |
| 10.70.35.13                       | UP          | UP                    |
| localhost                         | UP          | UP                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-137 ~]#
[root@dhcp35-137 ~]#
[root@dhcp35-137 ~]# gluster-eventsapi webhook-test http://10.70.46.159:9000/listen
+-----------------------------------+-------------+----------------+
| NODE                              | NODE STATUS | WEBHOOK STATUS |
+-----------------------------------+-------------+----------------+
| dhcp35-210.lab.eng.blr.redhat.com | UP          | OK             |
| 10.70.35.110                      | UP          | OK             |
| 10.70.35.13                       | UP          | OK             |
| localhost                         | UP          | OK             |
+-----------------------------------+-------------+----------------+
[root@dhcp35-137 ~]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 232, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp35-137 ~]#
[root@dhcp35-137 ~]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
Webhook already exists
[root@dhcp35-137 ~]#
Was able to reproduce it again on another setup.

[root@dhcp46-239 yum.repos.d]# gluster-eventsapi webhook-test http://10.70.46.159:9000/listen
+--------------+-------------+----------------+
| NODE         | NODE STATUS | WEBHOOK STATUS |
+--------------+-------------+----------------+
| 10.70.46.240 | UP          | OK             |
| 10.70.46.242 | UP          | OK             |
| 10.70.46.218 | UP          | OK             |
| localhost    | UP          | OK             |
+--------------+-------------+----------------+
[root@dhcp46-239 yum.repos.d]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 232, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp46-239 yum.repos.d]#
Every gluster-eventsapi command fails with the same trace, although reporting _Success_ as the 'Error' message.

[root@dhcp46-239 ~]# gluster-eventsapi sync
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 455, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# gluster-eventsapi config-set log_level INFO
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 405, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp46-239 ~]#
`gluster-eventsapi reload` runs `/usr/libexec/glusterfs/peer_eventsapi.py node-reload` on every node. If I run that command directly on a node it works, but it fails when invoked via `gluster system:: execute eventsapi.py node-reload`:

[root@dhcp46-239 ~]# /usr/libexec/glusterfs/peer_eventsapi.py node-reload
{"output": "", "ok": true, "nodeid": "ed362eb3-421c-4a25-ad0e-82ef157ea328"}

[root@dhcp46-239 glusterfs]# gluster system:: execute eventsapi.py node-reload
Unable to end. Error : Success

This works as expected on Fedora 24; RCA is still in progress.
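(For completeness, a quick way to confirm the direct invocation works on every peer, not just the local node, is a loop like the one below. This is only an illustrative sketch: the IPs are the peers from the logs above and it assumes passwordless root ssh between the nodes.)

# Sketch: run node-reload directly on each peer over ssh (assumes key-based root ssh)
for h in 10.70.46.240 10.70.46.242 10.70.46.218; do
    echo "== $h =="
    ssh "root@$h" /usr/libexec/glusterfs/peer_eventsapi.py node-reload
done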
Got a chance to debug this issue. The commands started working after I disabled SELinux enforcement using:

setenforce 0
load_policy

The following denial was seen using the command `ausearch -m avc --start recent`:

time->Thu Oct 27 05:20:44 2016
type=SYSCALL msg=audit(1477560044.828:10960): arch=c000003e syscall=62 success=no exit=-13 a0=6b76 a1=c a2=0 a3=7ffec82109d0 items=0 ppid=28639 pid=6508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python2.7" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1477560044.828:10960): avc: denied { signal } for pid=6508 comm="python" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:unconfined_service_t:s0 tclass=process
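(As an interim alternative to turning enforcement off cluster-wide, the denial captured above could be packaged into a local SELinux policy module with the standard audit2allow tooling. This is only a minimal sketch, assuming policycoreutils-python is installed; the module name glusterd_local is arbitrary, and the proper fix remains an updated selinux-policy build.)

# Sketch: build and load a local policy module from the recorded AVC, on each affected node
ausearch -m avc --start recent > /tmp/glusterd-avc.log
audit2allow -M glusterd_local < /tmp/glusterd-avc.log    # generates glusterd_local.te and glusterd_local.pp
semodule -i glusterd_local.pp                            # load it; SELinux can remain in enforcing mode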
Please note I am seeing an AVC with tclass=udp_socket after incorporating (and activating) the local policy. Have updated BZ 1404152 with the findings.
Tested and verified this on the glusterfs build 3.8.4-11 and the selinux-policy build selinux-policy-3.13.1-117.el7.noarch. Had a 6-node cluster with eventing enabled. I no longer see a traceback while adding/deleting a webhook, nor do I see any new avc: denied messages in the audit logs.

Moving this BZ to verified in 3.2. Please note I did a local update of the selinux-policy build from brew, as it is not yet merged to 7.3. Logs are pasted below:

[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 5

Hostname: dhcp46-242.lab.eng.blr.redhat.com
Uuid: 838465bf-1fd8-4f85-8599-dbc8367539aa
State: Peer in Cluster (Connected)

Hostname: 10.70.46.240
Uuid: 5bff39d7-cd9c-4dbb-86eb-2a7ba6dfea3d
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: c2fbc432-b7a9-4db1-9b9d-a8d82e998923
State: Peer in Cluster (Connected)

Hostname: 10.70.46.221
Uuid: 1277cf78-640e-46e8-a3d1-46e067508814
State: Peer in Cluster (Connected)

Hostname: 10.70.46.222
Uuid: 81184471-cbf7-47aa-ba41-21f32bb644b0
State: Peer in Cluster (Connected)

[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# rpm -qa | grep gluster
python-gluster-3.8.4-11.el7rhgs.noarch
glusterfs-rdma-3.8.4-11.el7rhgs.x86_64
glusterfs-events-3.8.4-11.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-11.el7rhgs.x86_64
glusterfs-server-3.8.4-11.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-3.8.4-11.el7rhgs.x86_64
glusterfs-fuse-3.8.4-11.el7rhgs.x86_64
glusterfs-cli-3.8.4-11.el7rhgs.x86_64
glusterfs-libs-3.8.4-11.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
glusterfs-api-3.8.4-11.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-11.el7rhgs.x86_64
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# rpm -qa | grep selinux-policy
selinux-policy-targeted-3.13.1-117.el7.noarch
selinux-policy-3.13.1-117.el7.noarch
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]#

[root@dhcp46-221 ~]# gluster-eventsapi status
Webhooks: None

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| dhcp46-242.lab.eng.blr.redhat.com | UP          | OK                    |
| 10.70.46.239                      | UP          | OK                    |
| 10.70.46.240                      | UP          | OK                    |
| 10.70.46.218                      | UP          | OK                    |
| 10.70.46.222                      | UP          | OK                    |
| localhost                         | UP          | OK                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp46-221 ~]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
+-----------------------------------+-------------+-------------+
| NODE                              | NODE STATUS | SYNC STATUS |
+-----------------------------------+-------------+-------------+
| dhcp46-242.lab.eng.blr.redhat.com | UP          | OK          |
| 10.70.46.239                      | UP          | OK          |
| 10.70.46.240                      | UP          | OK          |
| 10.70.46.218                      | UP          | OK          |
| 10.70.46.222                      | UP          | OK          |
| localhost                         | UP          | OK          |
+-----------------------------------+-------------+-------------+
[root@dhcp46-221 ~]#
[root@dhcp46-221 ~]#
[root@dhcp46-221 ~]# gluster-eventsapi status
Webhooks:
http://10.70.46.159:9000/listen

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| dhcp46-242.lab.eng.blr.redhat.com | UP          | OK                    |
| 10.70.46.239                      | UP          | OK                    |
| 10.70.46.240                      | UP          | OK                    |
| 10.70.46.218                      | UP          | OK                    |
| 10.70.46.222                      | UP          | OK                    |
| localhost                         | UP          | OK                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp46-221 ~]#
[root@dhcp46-221 ~]#
getenforce
Enforcing
[root@dhcp46-221 ~]#

[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# gluster-eventsapi webhook-test http://10.70.46.116:9000/listen
+-----------------------------------+-------------+----------------+
| NODE                              | NODE STATUS | WEBHOOK STATUS |
+-----------------------------------+-------------+----------------+
| dhcp46-242.lab.eng.blr.redhat.com | UP          | OK             |
| 10.70.46.240                      | UP          | OK             |
| 10.70.46.218                      | UP          | OK             |
| 10.70.46.221                      | UP          | OK             |
| 10.70.46.222                      | UP          | OK             |
| localhost                         | UP          | OK             |
+-----------------------------------+-------------+----------------+
[root@dhcp46-239 ~]#
[root@dhcp46-239 ~]# gluster-eventsapi webhook-add http://10.70.46.116:9000/listen
+-----------------------------------+-------------+-------------+
| NODE                              | NODE STATUS | SYNC STATUS |
+-----------------------------------+-------------+-------------+
| dhcp46-242.lab.eng.blr.redhat.com | UP          | OK          |
| 10.70.46.240                      | UP          | OK          |
| 10.70.46.218                      | UP          | OK          |
| 10.70.46.221                      | UP          | OK          |
| 10.70.46.222                      | UP          | OK          |
| localhost                         | UP          | OK          |
+-----------------------------------+-------------+-------------+
[root@dhcp46-239 ~]#
RHEL6 logs pasted below:

[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.35.115
Uuid: 93961da0-9902-4db8-9e64-86c537f1a561
State: Peer in Cluster (Connected)

Hostname: dhcp35-100.lab.eng.blr.redhat.com
Uuid: 22b76db6-61cd-46e5-9f02-7297f60b853d
State: Peer in Cluster (Connected)

Hostname: 10.70.35.104
Uuid: df94e02e-29de-401d-b7bb-5f3004223321
State: Peer in Cluster (Connected)

[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el6rhs.noarch
glusterfs-server-3.8.4-11.el6rhs.x86_64
vdsm-gluster-4.16.30-1.5.el6rhs.noarch
glusterfs-fuse-3.8.4-11.el6rhs.x86_64
glusterfs-libs-3.8.4-11.el6rhs.x86_64
glusterfs-api-3.8.4-11.el6rhs.x86_64
python-gluster-3.8.4-11.el6rhs.noarch
glusterfs-rdma-3.8.4-11.el6rhs.x86_64
glusterfs-3.8.4-11.el6rhs.x86_64
glusterfs-cli-3.8.4-11.el6rhs.x86_64
glusterfs-events-3.8.4-11.el6rhs.x86_64
glusterfs-client-xlators-3.8.4-11.el6rhs.x86_64
gluster-nagios-addons-0.2.8-1.el6rhs.x86_64
glusterfs-geo-replication-3.8.4-11.el6rhs.x86_64
[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]# rpm -qa | grep selinux-policy
selinux-policy-3.7.19-292.el6_8.2.noarch
selinux-policy-targeted-3.7.19-292.el6_8.2.noarch
[root@dhcp35-101 ~]#

[root@dhcp35-115 ~]# gluster-eventsapi status
Webhooks: None

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK                    |
| 10.70.35.101                      | UP          | OK                    |
| 10.70.35.104                      | UP          | OK                    |
| localhost                         | UP          | OK                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-115 ~]#
[root@dhcp35-115 ~]#
[root@dhcp35-115 ~]# gluster-eventsapi webhook-test http://10.70.35.109:9000/listen
+-----------------------------------+-------------+----------------+
| NODE                              | NODE STATUS | WEBHOOK STATUS |
+-----------------------------------+-------------+----------------+
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK             |
| 10.70.35.101                      | UP          | OK             |
| 10.70.35.104                      | UP          | OK             |
| localhost                         | UP          | OK             |
+-----------------------------------+-------------+----------------+
[root@dhcp35-115 ~]#

[root@dhcp35-101 ~]# gluster-eventsapi webhook-add http://10.70.35.109:9000/listen
+-----------------------------------+-------------+-------------+
| NODE                              | NODE STATUS | SYNC STATUS |
+-----------------------------------+-------------+-------------+
| 10.70.35.115                      | UP          | OK          |
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK          |
| 10.70.35.104                      | UP          | OK          |
| localhost                         | UP          | OK          |
+-----------------------------------+-------------+-------------+
[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]# gluster-eventsapi webhook-add http://10.70.35.119:9000/listen
+-----------------------------------+-------------+-------------+
| NODE                              | NODE STATUS | SYNC STATUS |
+-----------------------------------+-------------+-------------+
| 10.70.35.115                      | UP          | OK          |
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK          |
| 10.70.35.104                      | UP          | OK          |
| localhost                         | UP          | OK          |
+-----------------------------------+-------------+-------------+
[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]# gluster-eventsapi status
Webhooks:
http://10.70.35.109:9000/listen
http://10.70.35.119:9000/listen

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| 10.70.35.115                      | UP          | OK                    |
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK                    |
| 10.70.35.104                      | UP          | OK                    |
| localhost                         | UP          | OK                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-101 ~]#
[root@dhcp35-101 ~]#

[root@dhcp35-115 ~]# gluster-eventsapi webhook-del http://10.70.35.109:9000/listen
+-----------------------------------+-------------+-------------+
| NODE                              | NODE STATUS | SYNC STATUS |
+-----------------------------------+-------------+-------------+
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK          |
| 10.70.35.101                      | UP          | OK          |
| 10.70.35.104                      | UP          | OK          |
| localhost                         | UP          | OK          |
+-----------------------------------+-------------+-------------+
[root@dhcp35-115 ~]# gluster-eventsapi status
Webhooks:
http://10.70.35.119:9000/listen

+-----------------------------------+-------------+-----------------------+
| NODE                              | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| dhcp35-100.lab.eng.blr.redhat.com | UP          | OK                    |
| 10.70.35.101                      | UP          | OK                    |
| 10.70.35.104                      | UP          | OK                    |
| localhost                         | UP          | OK                    |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-115 ~]#
Hit the same traceback (on RHEL 6.8), but with a different AVC, with selinux-policy-3.7.19-292.el6_8.3. I do not see the tclass=process AVC for which this bug was originally logged. Have raised BZ 1419869 to track the new AVC; a workaround is also mentioned in that BZ. Moving the present BZ to verified in 3.2. Logs are pasted in comments 22 and 23.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html