Bug 1496453 - iptables-services w/ SELinux Enforcing: ip6tables fails
Keywords:
Status: CLOSED DUPLICATE of bug 1438937
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Lukas Vrabec
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Depends On:
Blocks: 1494907
 
Reported: 2017-09-27 12:41 UTC by Lon Hohberger
Modified: 2017-09-29 07:45 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1494907
Environment:
Last Closed: 2017-09-29 07:45:28 UTC
Target Upstream Version:
Embargoed:



Description Lon Hohberger 2017-09-27 12:41:33 UTC
+++ This bug was initially created as a clone of Bug #1494907 +++

Unable to launch nova instance: Instance failed to spawn: libvirtError: internal error: process exited while connecting to monitor: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied


Environment:
libselinux-ruby-2.5-11.el7.x86_64
libselinux-utils-2.5-11.el7.x86_64
openstack-tripleo-heat-templates-7.0.1-0.20170919183703.el7ost.noarch
openstack-puppet-modules-11.0.0-0.20170828113154.el7ost.noarch
libselinux-2.5-11.el7.x86_64
libselinux-python-2.5-11.el7.x86_64
container-selinux-2.21-2.gitba103ac.el7.noarch
instack-undercloud-7.4.1-0.20170912115418.el7ost.noarch
openstack-selinux-0.8.10-0.20170914195211.e16a8f8.2.el7ost.noarch
selinux-policy-3.13.1-166.el7_4.4.noarch
selinux-policy-targeted-3.13.1-166.el7_4.4.noarch


Steps to reproduce:
1. Deploy the overcloud (OC).
2. Try to launch an instance.

Result:
The instance ends up in the ERROR state. Going through the nova log:
2017-09-24 01:01:06.292 1 ERROR nova.virt.libvirt.guest [req-43353dfd-e8b4-4b5f-8685-dafe3f2d9b36 49d05024228a4808976cb1ba072d2250 af8abd30994e4da9865a8db7d9c68e21 - default default] Error launching a defined domain with XML: <domain type='kvm'>
2017-09-24 01:01:06.293 1 ERROR nova.virt.libvirt.driver [req-43353dfd-e8b4-4b5f-8685-dafe3f2d9b36 49d05024228a4808976cb1ba072d2250 af8abd30994e4da9865a8db7d9c68e21 - default default] [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] Failed to start libvirt guest: libvirtError: internal error: process exited while connecting to monitor: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [req-43353dfd-e8b4-4b5f-8685-dafe3f2d9b36 49d05024228a4808976cb1ba072d2250 af8abd30994e4da9865a8db7d9c68e21 - default default] [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] Instance failed to spawn: libvirtError: internal error: process exited while connecting to monitor: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] Traceback (most recent call last):
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2165, in _build_resources
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     yield resources
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1980, in _build_and_run_instance
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     block_device_info=block_device_info)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2805, in spawn
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     destroy_disks_on_failure=True)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5282, in _create_domain_and_network
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     destroy_disks_on_failure)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     self.force_reraise()
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     six.reraise(self.type_, self.value, self.tb)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5252, in _create_domain_and_network
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     post_xml_callback=post_xml_callback)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5170, in _create_domain
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     guest.launch(pause=pause)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 144, in launch
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     self._encoded_xml, errors='ignore')
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     self.force_reraise()
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     six.reraise(self.type_, self.value, self.tb)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 139, in launch
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     return self._domain.createWithFlags(flags)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     rv = execute(f, *args, **kwargs)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     six.reraise(c, e, tb)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     rv = meth(*args, **kwargs)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] libvirtError: internal error: process exited while connecting to monitor: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] 




The following is in /var/log/audit/audit.log:
type=AVC msg=audit(1506213233.953:115): avc:  denied  { read } for  pid=17376 comm="grep" name="kvm.conf" dev="vda2" ino=12583138 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:116): avc:  denied  { read } for  pid=17376 comm="grep" name="lockd.conf" dev="vda2" ino=12583135 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:117): avc:  denied  { read } for  pid=17376 comm="grep" name="mlx4.conf" dev="vda2" ino=12583136 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:118): avc:  denied  { read } for  pid=17376 comm="grep" name="truescale.conf" dev="vda2" ino=12583137 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:119): avc:  denied  { read } for  pid=17376 comm="grep" name="tuned.conf" dev="vda2" ino=12583134 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:120): avc:  denied  { read } for  pid=17376 comm="grep" name="vhost.conf" dev="vda2" ino=12583133 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=USER_AVC msg=audit(1506214343.992:1218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=n/a uid=0 gid=0 path="/run/systemd/system/docker-6e9b6620d5da204c6d6e337fa525648627a24e1ef2ca77e4a1cb8aa7adbb018c.scope" cmdline="/usr/lib/systemd/systemd-machined" scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:object_r:container_unit_file_t:s0 tclass=service  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=USER_AVC msg=audit(1506214343.993:1219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=n/a uid=0 gid=0 path="/run/systemd/system/docker-6e9b6620d5da204c6d6e337fa525648627a24e1ef2ca77e4a1cb8aa7adbb018c.scope" cmdline="/usr/lib/systemd/systemd-machined" scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:object_r:container_unit_file_t:s0 tclass=service  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=AVC msg=audit(1506214866.247:1301): avc:  denied  { transition } for  pid=32908 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda2" ino=37857716 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:svirt_t:s0:c40,c543 tclass=process




Setting SELinux to permissive on the compute node(s) makes it possible to launch an instance.
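For triage, AVC records like the ones above can be reduced mechanically to "scontext permission tclass" triples with ordinary text tools. A minimal sketch (the sample file stands in for /var/log/audit/audit.log):

```shell
# Reduce AVC denial records to "scontext permission tclass" triples.
summarize_avc() {
    grep -o 'denied  { [^}]* }.*tclass=[^ ]*' "$1" |
    sed -E 's/denied  \{ ([^}]*) \}.*scontext=([^ ]+).*tclass=([^ ]+)/\2 \1 \3/'
}

# Sample record taken from the log excerpt above.
cat > /tmp/avc_sample.log <<'EOF'
type=AVC msg=audit(1506214866.247:1301): avc:  denied  { transition } for  pid=32908 comm="libvirtd" path="/usr/libexec/qemu-kvm" scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:svirt_t:s0:c40,c543 tclass=process
EOF

summarize_avc /tmp/avc_sample.log
# system_u:system_r:spc_t:s0 transition process
```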

--- Additional comment from Lon Hohberger on 2017-09-26 14:01:06 EDT ---

Okay, so the grep there is from the iptables-service package:

   grep -qIsE "^install[[:space:]]+${_IPV}[[:space:]]+/bin/(true|false)" /etc/modprobe.conf /etc/modprobe.d/*

So, starting ip6tables causes it to grep every file in /etc/modprobe.d/*. This needs a policy rule, but I don't think it is what is causing instances to fail to launch.
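The initscript's check can be reproduced standalone. A sketch using a temporary directory in place of /etc/modprobe.d (the `_IPV` value and paths are illustrative, not taken from the actual initscript):

```shell
# Reproduce the "has module loading been disabled?" check the initscript runs.
# A matching "install <module> /bin/true" (or /bin/false) line means disabled.
tmpdir=$(mktemp -d)
echo "install ipv6 /bin/true" > "$tmpdir/ipv6-disable.conf"

_IPV=ipv6   # illustrative; the real script greps /etc/modprobe.conf /etc/modprobe.d/*
if grep -qIsE "^install[[:space:]]+${_IPV}[[:space:]]+/bin/(true|false)" "$tmpdir"/*; then
    echo "ipv6 module loading is disabled"
else
    echo "ipv6 module loading is enabled"
fi
```

Run against files labeled modules_conf_t in enforcing mode, this grep is exactly what triggers the `read` denials shown in the audit log.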


================================


TL;DR: the iptables-services package provides the 'iptables' and 'ip6tables' initscripts and unit files. The grep line in the ip6tables initscript runs grep over everything in /etc/modprobe.d (labeled modules_conf_t) to see whether loading of the ipv6 module has been disabled.

As a consequence, the firewall will not correctly start in SELinux enforcing mode when iptables-services is used to control the firewall in conjunction with ipv6.

Something like this is likely required:

allow iptables_t modules_conf_t:file read_file_perms;
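Until the rule lands in selinux-policy, a rule like this could be carried as a local policy module along these lines. A sketch only: the module name `iptables_modconf` is made up, and explicit permissions stand in for the `read_file_perms` macro, which is only available with the refpolicy build infrastructure:

```shell
# Hypothetical local module granting iptables_t read access to modules_conf_t files.
cat > iptables_modconf.te <<'EOF'
module iptables_modconf 1.0;

require {
    type iptables_t;
    type modules_conf_t;
    class file { getattr open read };
}

allow iptables_t modules_conf_t:file { getattr open read };
EOF

# Build and load (commented out: requires checkpolicy, policycoreutils, and root):
# checkmodule -M -m -o iptables_modconf.mod iptables_modconf.te
# semodule_package -o iptables_modconf.pp -m iptables_modconf.mod
# semodule -i iptables_modconf.pp
```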

Comment 1 Milos Malik 2017-09-28 08:49:58 UTC
I believe this bug is a duplicate of BZ#1438937.

Comment 2 Milos Malik 2017-09-28 08:55:30 UTC
The following SELinux denials are also part of comment #0 and should be addressed too:

type=USER_AVC msg=audit(1506214343.993:1219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  denied  { status } for auid=n/a uid=0 gid=0 path="/run/systemd/system/docker-6e9b6620d5da204c6d6e337fa525648627a24e1ef2ca77e4a1cb8aa7adbb018c.scope" cmdline="/usr/lib/systemd/systemd-machined" scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:object_r:container_unit_file_t:s0 tclass=service  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

type=AVC msg=audit(1506214866.247:1301): avc:  denied  { transition } for  pid=32908 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda2" ino=37857716 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:svirt_t:s0:c40,c543 tclass=process

I would even say that the last SELinux denial is the cause of:

Unable to launch nova instance: Instance failed to spawn: libvirtError: internal error: process exited while connecting to monitor: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied

Comment 3 Lukas Vrabec 2017-09-29 07:45:28 UTC

*** This bug has been marked as a duplicate of bug 1438937 ***

