Bug 1404152 - [SELinux] [Eventing]: gluster-eventsapi shows a traceback while adding a webhook
Summary: [SELinux] [Eventing]: gluster-eventsapi shows a traceback while adding a webhook
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.3
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Lukas Vrabec
QA Contact: Milos Malik
Docs Contact: Mirek Jahoda
URL:
Whiteboard:
Depends On:
Blocks: 1404562 1408128
 
Reported: 2016-12-13 08:34 UTC by Prasanth
Modified: 2017-08-01 15:17 UTC
CC List: 13 users

Fixed In Version: selinux-policy-3.13.1-117.el7
Doc Type: Bug Fix
Doc Text:
A missing SELinux rule was previously causing errors when adding a webhook using the gluster-eventsapi command. A rule allowing the "glusterd_t" domain to bind to the glusterd UDP port has been added, and adding a webhook using gluster-eventsapi now works properly.
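One way to verify that an updated selinux-policy build contains such a rule is a query along these lines (a sketch; it assumes the setools-console package, and the port type name glusterd_port_t is an assumption):

# sesearch --allow -s glusterd_t -t glusterd_port_t -c udp_socket -p name_bind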
Clone Of: 1379963
Clones: 1408128
Environment:
Last Closed: 2017-08-01 15:17:42 UTC
Target Upstream Version:
Embargoed:


Attachments
Local module (450 bytes, text/plain)
2016-12-13 12:57 UTC, Lukas Vrabec


Links
Red Hat Product Errata RHBA-2017:1861 (Priority: normal, Status: SHIPPED_LIVE): selinux-policy bug fix update, last updated 2017-08-01 17:50:24 UTC

Description Prasanth 2016-12-13 08:34:31 UTC
+++ This bug was initially created as a clone of Bug #1379963 +++

Description of problem:
=======================
The gluster-eventsapi command displayed the following traceback while adding a webhook:
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 232, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')

However, the webhook-add operation itself had succeeded; only the subsequent peer sync ("node-reload", shown in the traceback) failed. Hence the priority 'medium'.


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-1


How reproducible:
=================
1:1


Steps to Reproduce:
====================
1. Had a 4-node cluster with the glusterfs-events rpm installed, and had added a webhook running on a separate client, say node_x. At the end of the day, I closed all the open sessions.
2. The next day, created another 4-node cluster and tried to add the same webhook on node_x.
The gluster-eventsapi webhook-test command showed no errors, but gluster-eventsapi webhook-add showed a traceback. However, the command had internally succeeded; this was confirmed by executing 'gluster-eventsapi webhook-add' again, which resulted in 'Webhook already exists'.


Additional info:
==================

[root@dhcp35-137 ~]# systemctl status glustereventsd
● glustereventsd.service - Gluster Events Notifier
   Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-09-28 08:37:18 UTC; 6s ago
 Main PID: 17128 (python)
   CGroup: /system.slice/glustereventsd.service
           └─17128 python /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid

Sep 28 08:37:18 dhcp35-137.lab.eng.blr.redhat.com systemd[1]: Started Gluster Events Notifier.
Sep 28 08:37:18 dhcp35-137.lab.eng.blr.redhat.com systemd[1]: Starting Gluster Events Notifier...
[root@dhcp35-137 ~]# 
[root@dhcp35-137 ~]# gluster-eventsapi status
Webhooks: None

+-----------------------------------+-------------+-----------------------+
|                NODE               | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
|            10.70.35.85            |        DOWN |                  DOWN |
| dhcp35-210.lab.eng.blr.redhat.com |          UP |                    UP |
|            10.70.35.110           |          UP |                    UP |
|            10.70.35.13            |          UP |                    UP |
|             localhost             |          UP |                    UP |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-137 ~]# 
[root@dhcp35-137 ~]# gluster peer status
Number of Peers: 4

Hostname: 10.70.35.85
Uuid: 187296f3-99df-46b4-b3f7-57e7314994dd
State: Peer Rejected (Connected)

Hostname: dhcp35-210.lab.eng.blr.redhat.com
Uuid: d97a106e-4c4e-4b1c-ba02-c1ca12d594a6
State: Peer in Cluster (Connected)

Hostname: 10.70.35.110
Uuid: 94c268fe-2c74-429f-8ada-cd336c476037
State: Peer in Cluster (Connected)

Hostname: 10.70.35.13
Uuid: a756f3da-7896-4970-a77d-4829e603f773
State: Peer in Cluster (Connected)
[root@dhcp35-137 ~]# gluster peer status
Number of Peers: 3

Hostname: dhcp35-210.lab.eng.blr.redhat.com
Uuid: d97a106e-4c4e-4b1c-ba02-c1ca12d594a6
State: Peer in Cluster (Connected)

Hostname: 10.70.35.110
Uuid: 94c268fe-2c74-429f-8ada-cd336c476037
State: Peer in Cluster (Connected)

Hostname: 10.70.35.13
Uuid: a756f3da-7896-4970-a77d-4829e603f773
State: Peer in Cluster (Connected)
[root@dhcp35-137 ~]# gluster-eventsapi status
Webhooks: None

+-----------------------------------+-------------+-----------------------+
|                NODE               | NODE STATUS | GLUSTEREVENTSD STATUS |
+-----------------------------------+-------------+-----------------------+
| dhcp35-210.lab.eng.blr.redhat.com |          UP |                    UP |
|            10.70.35.110           |          UP |                    UP |
|            10.70.35.13            |          UP |                    UP |
|             localhost             |          UP |                    UP |
+-----------------------------------+-------------+-----------------------+
[root@dhcp35-137 ~]# 
[root@dhcp35-137 ~]# 
[root@dhcp35-137 ~]# gluster-eventsapi webhook-test http://10.70.46.159:9000/listen
+-----------------------------------+-------------+----------------+
|                NODE               | NODE STATUS | WEBHOOK STATUS |
+-----------------------------------+-------------+----------------+
| dhcp35-210.lab.eng.blr.redhat.com |          UP |             OK |
|            10.70.35.110           |          UP |             OK |
|            10.70.35.13            |          UP |             OK |
|             localhost             |          UP |             OK |
+-----------------------------------+-------------+----------------+
[root@dhcp35-137 ~]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 232, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp35-137 ~]# 
[root@dhcp35-137 ~]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
Webhook already exists
[root@dhcp35-137 ~]#

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-09-28 05:22:25 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Sweta Anandpara on 2016-09-28 06:28:12 EDT ---

Was able to reproduce it again on another setup.

[root@dhcp46-239 yum.repos.d]# gluster-eventsapi webhook-test http://10.70.46.159:9000/listen
+--------------+-------------+----------------+
|     NODE     | NODE STATUS | WEBHOOK STATUS |
+--------------+-------------+----------------+
| 10.70.46.240 |          UP |             OK |
| 10.70.46.242 |          UP |             OK |
| 10.70.46.218 |          UP |             OK |
|  localhost   |          UP |             OK |
+--------------+-------------+----------------+
[root@dhcp46-239 yum.repos.d]# gluster-eventsapi webhook-add http://10.70.46.159:9000/listen
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 232, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp46-239 yum.repos.d]#

--- Additional comment from Sweta Anandpara on 2016-09-29 01:53:35 EDT ---

Every gluster-eventsapi command fails with the same trace, although reporting _Success_ as the 'Error message'.

[root@dhcp46-239 ~]# gluster-eventsapi sync
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 455, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp46-239 ~]# 


[root@dhcp46-239 ~]# gluster-eventsapi config-set log_level INFO
Traceback (most recent call last):
  File "/usr/sbin/gluster-eventsapi", line 459, in <module>
    runcli()
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 212, in runcli
    cls.run(args)
  File "/usr/sbin/gluster-eventsapi", line 405, in run
    sync_to_peers()
  File "/usr/sbin/gluster-eventsapi", line 129, in sync_to_peers
    out = execute_in_peers("node-reload")
  File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 125, in execute_in_peers
    raise GlusterCmdException((rc, out, err, " ".join(cmd)))
gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute eventsapi.py node-reload')
[root@dhcp46-239 ~]#

--- Additional comment from Aravinda VK on 2016-09-29 04:30:43 EDT ---

`gluster-eventsapi reload` runs `/usr/libexec/glusterfs/peer_eventsapi.py node-reload` on every node. If I run that command directly, it works, but it does not work when run via `gluster system:: execute eventsapi.py node-reload`.

[root@dhcp46-239 ~]# /usr/libexec/glusterfs/peer_eventsapi.py node-reload
{"output": "", "ok": true, "nodeid": "ed362eb3-421c-4a25-ad0e-82ef157ea328"}

[root@dhcp46-239 glusterfs]# gluster system:: execute eventsapi.py node-reload
Unable to end. Error : Success

It works as expected on Fedora 24; RCA is still in progress.
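One quick check of whether SELinux is involved is to compare the enforcing mode and the domains the processes run in (a diagnostic sketch):

# getenforce
# ps -eZ | grep -E 'glusterd|glustereventsd'

If glustereventsd is running as unconfined_service_t rather than a gluster-specific domain, that matches the signal denial recorded in the next comment (glusterd_t signalling unconfined_service_t).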

--- Additional comment from Aravinda VK on 2016-10-27 05:31:36 EDT ---

Got a chance to debug this issue. The commands started working after I disabled SELinux enforcement using

setenforce 0
load_policy

The following denial was seen using the command `ausearch -m avc --start recent`:

time->Thu Oct 27 05:20:44 2016
type=SYSCALL msg=audit(1477560044.828:10960): arch=c000003e syscall=62 success=no exit=-13 a0=6b76 a1=c a2=0 a3=7ffec82109d0 items=0 ppid=28639 pid=6508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python2.7" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1477560044.828:10960): avc:  denied  { signal } for  pid=6508 comm="python" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:unconfined_service_t:s0 tclass=process
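As an interim diagnostic, a local allow module can be generated from the recorded denials (a sketch; it assumes the policycoreutils-python package, and the module name glusterd_avc_local is just a placeholder):

# ausearch -m avc --start recent | audit2allow -M glusterd_avc_local
# semodule -i glusterd_avc_local.pp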

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-07 06:43:57 EST ---

This bug is automatically being provided 'pm_ack+' for the release flag 'rhgs-3.2.0', the current release of Red Hat Gluster Storage 3 under active development, having been appropriately marked for the release, and having been provided ACK from Development and QE.

If the 'blocker' flag had been proposed/set on this BZ, it has now been unset, since the 'blocker' flag is not valid for the current phase of RHGS 3.2.0 development.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-11-08 00:05:16 EST ---

Since this bug has been approved for the RHGS 3.2.0 release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.2.0+', and through the Internal Whiteboard entry of '3.2.0', the Target Release is being automatically set to 'RHGS 3.2.0'

Comment 1 Prasanth 2016-12-13 08:36:57 UTC
audit.log.3:type=AVC msg=audit(1481527787.222:2133185): avc:  denied  { signal } for  pid=11207 comm="python" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:unconfined_service_t:s0 tclass=process

Comment 3 Lukas Vrabec 2016-12-13 12:46:39 UTC
After discussion with Prasanth, we found a solution for this issue.
We need to label the following files:
/usr/libexec/glusterfs/peer_eventsapi.py
/usr/libexec/glusterfs/events/glustereventsd.py

as glusterd_exec_t, to avoid running glustereventsd as unconfined_service_t.
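The same relabeling can also be expressed with semanage/restorecon rather than a policy module (a sketch of the idea, not the contents of the attachment below):

# semanage fcontext -a -t glusterd_exec_t '/usr/libexec/glusterfs/peer_eventsapi\.py'
# semanage fcontext -a -t glusterd_exec_t '/usr/libexec/glusterfs/events/glustereventsd\.py'
# restorecon -v /usr/libexec/glusterfs/peer_eventsapi.py /usr/libexec/glusterfs/events/glustereventsd.py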

Comment 4 Lukas Vrabec 2016-12-13 12:57:03 UTC
Created attachment 1231190 [details]
Local module

Adding a local module for this issue. Please run

# semodule -i glusterd_local.cil

to activate it.

Thanks.
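To confirm that the module is active and that the files picked up the new label, something like this can be used (a sketch):

# semodule -l | grep glusterd_local
# ls -Z /usr/libexec/glusterfs/peer_eventsapi.py /usr/libexec/glusterfs/events/glustereventsd.py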

Comment 7 Sweta Anandpara 2016-12-15 06:19:08 UTC
Hi Lukas,

I tried out the local attachment in my setup, but I am still able to see the same AVC. Could you please go through the steps that I am following and advise if I am missing anything?

1. Copy the content of the attachment to a file 'glusterd_local.cil' under /root of ALL the nodes of the cluster
2. 'semodule -i /root/glusterd_local.cil' on ALL the nodes of the cluster
3. 'restorecon -Rv /usr/libexec/glusterfs/peer_eventsapi.py; restorecon -Rv /usr/sbin/gluster-eventsapi; restorecon -Rv /usr/libexec/glusterfs/events/glustereventsd.py; restorecon -Rv /usr/sbin/glustereventsd' on ALL the nodes of the cluster
4. Run the steps that caused the avc to appear

Comment 9 Milos Malik 2016-12-15 11:20:33 UTC
The following step cannot work on RHEL-6, because CIL policy modules are not understood on RHEL releases older than 7.3:

2. 'semodule -i /root/glusterd_local.cil' on ALL the nodes of the cluster

Comment 10 Milos Malik 2016-12-15 11:23:46 UTC
Please ignore comments #8 and #9. They should have been placed in BZ#1404562.

Comment 11 Sweta Anandpara 2016-12-16 07:12:16 UTC
After incorporating the local attachment, I'm facing a different AVC denial, as pasted below, this time with tclass=udp_socket:

type=AVC msg=audit(1481871612.817:874221): avc:  denied  { name_bind } for  pid=11854 comm="python" src=24009 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=udp_socket

I stopped the glustereventsd service, stopped rsyslog, flushed the audit logs, started rsyslog, and started the glustereventsd service again.
The status shows that it is running, but the logs say 'Permission denied'. When the AVC logs were checked, I found the above-mentioned AVC.

I set SELinux to permissive mode and then repeated the steps. This time, stopping and starting the glustereventsd service went through without any errors.

Lukas, can you please advise on the next course of action?
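For the name_bind denial above, it may also help to check how the gluster ports are labelled, since port 24009 is being matched as unreserved_port_t rather than a gluster-specific port type (a diagnostic sketch; the exact type names in the output are policy-dependent):

# semanage port -l | grep -i gluster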

Comment 12 Sweta Anandpara 2016-12-16 07:14:22 UTC
The CLI logs are pasted below. Please note the 'Permission denied' in the last line.

[root@dhcp47-60 audit]# systemctl stop glustereventsd
[root@dhcp47-60 audit]# systemctl start glustereventsd
[root@dhcp47-60 audit]# systemctl status glustereventsd
● glustereventsd.service - Gluster Events Notifier
   Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-12-16 12:25:59 IST; 10s ago
  Process: 19145 ExecReload=/bin/kill -SIGUSR2 $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 21141 (python)
   CGroup: /system.slice/glustereventsd.service
           ├─21141 python /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid
           └─21142 python /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid

Dec 16 12:25:59 dhcp47-60.lab.eng.blr.redhat.com systemd[1]: Started Gluster Events Notifier.
Dec 16 12:25:59 dhcp47-60.lab.eng.blr.redhat.com systemd[1]: Starting Gluster Events Notifier...
Dec 16 12:25:59 dhcp47-60.lab.eng.blr.redhat.com glustereventsd[21141]: Failed to start Eventsd: [Errno 13] Permission denied
[root@dhcp47-60 audit]#

Comment 19 errata-xmlrpc 2017-08-01 15:17:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1861

