Unable to launch a nova instance: Instance failed to spawn: libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied

Environment:
libselinux-ruby-2.5-11.el7.x86_64
libselinux-utils-2.5-11.el7.x86_64
openstack-tripleo-heat-templates-7.0.1-0.20170919183703.el7ost.noarch
openstack-puppet-modules-11.0.0-0.20170828113154.el7ost.noarch
libselinux-2.5-11.el7.x86_64
libselinux-python-2.5-11.el7.x86_64
container-selinux-2.21-2.gitba103ac.el7.noarch
instack-undercloud-7.4.1-0.20170912115418.el7ost.noarch
openstack-selinux-0.8.10-0.20170914195211.e16a8f8.2.el7ost.noarch
selinux-policy-3.13.1-166.el7_4.4.noarch
selinux-policy-targeted-3.13.1-166.el7_4.4.noarch

Steps to reproduce:
1. Deploy the overcloud.
2. Try to launch an instance.

Result:
The instance ends up in the error state. From the nova log:

2017-09-24 01:01:06.292 1 ERROR nova.virt.libvirt.guest [req-43353dfd-e8b4-4b5f-8685-dafe3f2d9b36 49d05024228a4808976cb1ba072d2250 af8abd30994e4da9865a8db7d9c68e21 - default default] Error launching a defined domain with XML: <domain type='kvm'>
2017-09-24 01:01:06.293 1 ERROR nova.virt.libvirt.driver [req-43353dfd-e8b4-4b5f-8685-dafe3f2d9b36 49d05024228a4808976cb1ba072d2250 af8abd30994e4da9865a8db7d9c68e21 - default default] [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] Failed to start libvirt guest: libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [req-43353dfd-e8b4-4b5f-8685-dafe3f2d9b36 49d05024228a4808976cb1ba072d2250 af8abd30994e4da9865a8db7d9c68e21 - default default] [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] Instance failed to spawn: libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] Traceback (most recent call last):
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2165, in _build_resources
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] yield resources
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1980, in _build_and_run_instance
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] block_device_info=block_device_info)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2805, in spawn
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] destroy_disks_on_failure=True)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5282, in _create_domain_and_network
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] destroy_disks_on_failure)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] self.force_reraise()
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] six.reraise(self.type_, self.value, self.tb)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5252, in _create_domain_and_network
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] post_xml_callback=post_xml_callback)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5170, in _create_domain
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] guest.launch(pause=pause)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 144, in launch
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] self._encoded_xml, errors='ignore')
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] self.force_reraise()
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] six.reraise(self.type_, self.value, self.tb)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 139, in launch
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] return self._domain.createWithFlags(flags)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] result = proxy_call(self._autowrap, f, *args, **kwargs)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] rv = execute(f, *args, **kwargs)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] six.reraise(c, e, tb)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] rv = meth(*args, **kwargs)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8] libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2017-09-24 01:01:07.386 1 ERROR nova.compute.manager [instance: 3d42b7bf-d5bd-46c1-8747-dd44c38929c8]

The following is in /var/log/audit/audit.log:

type=AVC msg=audit(1506213233.953:115): avc: denied { read } for pid=17376 comm="grep" name="kvm.conf" dev="vda2" ino=12583138 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:116): avc: denied { read } for pid=17376 comm="grep" name="lockd.conf" dev="vda2" ino=12583135 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:117): avc: denied { read } for pid=17376 comm="grep" name="mlx4.conf" dev="vda2" ino=12583136 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:118): avc: denied { read } for pid=17376 comm="grep" name="truescale.conf" dev="vda2" ino=12583137 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:119): avc: denied { read } for pid=17376 comm="grep" name="tuned.conf" dev="vda2" ino=12583134 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=AVC msg=audit(1506213233.953:120): avc: denied { read } for pid=17376 comm="grep" name="vhost.conf" dev="vda2" ino=12583133 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:modules_conf_t:s0 tclass=file
type=USER_AVC msg=audit(1506214343.992:1218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc: denied { status } for auid=n/a uid=0 gid=0 path="/run/systemd/system/docker-6e9b6620d5da204c6d6e337fa525648627a24e1ef2ca77e4a1cb8aa7adbb018c.scope" cmdline="/usr/lib/systemd/systemd-machined" scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:object_r:container_unit_file_t:s0 tclass=service exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=USER_AVC msg=audit(1506214343.993:1219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc: denied { status } for auid=n/a uid=0 gid=0 path="/run/systemd/system/docker-6e9b6620d5da204c6d6e337fa525648627a24e1ef2ca77e4a1cb8aa7adbb018c.scope" cmdline="/usr/lib/systemd/systemd-machined" scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:object_r:container_unit_file_t:s0 tclass=service exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=AVC msg=audit(1506214866.247:1301): avc: denied { transition } for pid=32908 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda2" ino=37857716 scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:svirt_t:s0:c40,c543 tclass=process

Setting SELinux to permissive on the compute node(s) makes it possible to launch an instance.
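For reference, a minimal sketch of how that permissive workaround is applied on a compute node (runtime only; it does not persist across reboots):

  # on the compute node, as root
  setenforce 0     # switch the running system to permissive
  getenforce       # should now report "Permissive"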
We need the whole audit.log from a permissive run here - the call trace is important to determine what is going on.
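A rough sketch of how such a permissive run could be captured on the compute node (assuming the default auditd setup; the copy destination is arbitrary):

  # on the compute node, as root
  setenforce 0                                             # log denials without enforcing them
  # ...reproduce: launch an instance that lands on this compute node...
  ausearch -m AVC,USER_AVC -ts recent                      # quick look at the recent denials
  cp /var/log/audit/audit.log /tmp/audit-permissive.log    # attach the full log to the bug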
Okay, so the grep there is from the iptables-services package:

  grep -qIsE "^install[[:space:]]+${_IPV}[[:space:]]+/bin/(true|false)" /etc/modprobe.conf /etc/modprobe.d/*

So starting ip6tables causes it to grep everything in /etc/modprobe.d/*. This needs an audit rule, but I don't think this is what is causing instances to fail to launch.
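As an illustration only (not the proper policy fix, and the module name below is arbitrary), those grep denials could be silenced locally with audit2allow while waiting for a policy update:

  # build and load a temporary local module from the logged grep AVCs
  ausearch -m AVC -c grep | audit2allow -M local_iptables_modprobe
  semodule -i local_iptables_modprobe.pp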
spc_t is the super-privileged container type: https://danwalsh.livejournal.com/74754.html

So this is likely our openstack-nova-libvirt-docker container (which contains libvirt and qemu-kvm[-rhev]) trying to launch an svirt instance on the host.
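A quick diagnostic sketch (not part of the fix) to confirm that the containerized libvirtd really runs as spc_t on the compute host:

  ps -efZ | grep [l]ibvirtd       # expect a context like system_u:system_r:spc_t:s0 for the containerized libvirtd
  ls -Z /usr/libexec/qemu-kvm     # the host binary's label (typically qemu_exec_t) is what libvirtd tries to exec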
Created attachment 1331220 [details] audit.log from compute.
Fix for the domain transition issue: https://paste.fedoraproject.org/paste/xCqXruE6w-4yem-HimjZfw

This will be fixed in container-selinux 2.25.
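For context, feeding the denied-transition AVC from the description through audit2allow shows roughly the rule that the paste/container-selinux change has to provide (illustration only, not the exact upstream change):

  ausearch -m AVC -c libvirtd | audit2allow
  # expected output is roughly: allow spc_t svirt_t:process transition;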
Included workarounds here: openstack-selinux-0.8.10-0.20170922165741.52b3fe8.4.el7ost

This build carries these two patches (until we have a new container-selinux build), plus various other rules to allow and support the spc_t -> svirt_t transition:

https://github.com/projectatomic/container-selinux/commit/947f3a996bd154b524d69e0ea094f09e098b6fad
https://github.com/projectatomic/container-selinux/commit/efaffbd8fa3f304c2b90d36c32d35fd9b03860bf

For this patch to go upstream to openstack-selinux, RDO needs to add container-selinux to -common at at least version 2.21, or I will have to add some sort of RHEL detection to the RDO master and Pike spec files to avoid using interfaces from container-selinux.
Note that changes in container-selinux are still forthcoming; these workarounds should work until they are complete, at which point they can hopefully be discarded.
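As a sanity check (illustrative only; the package version comes from the comment above), one can confirm a compute node carries the workaround build with:

  rpm -q openstack-selinux                           # expect 0.8.10-0.20170922165741.52b3fe8.4.el7ost or later
  rpm -q --changelog openstack-selinux | head -n 20  # the workaround rules should show up near the top of the changelog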
> RDO needs to add container-selinux to -common at at least version 2.21

container-selinux is in the Extras repo; the latest on a CentOS 7.4 machine is:

  Updating: container-selinux  noarch  2:2.21-2.gitba103ac.el7  extras  29 k  (built 2017-09-21)

> or I will have to make some sort of RHEL detection in the RDO master and Pike spec files to avoid using interfaces from container-selinux.

No need.
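For anyone reproducing this, a rough sketch of checking and pulling that build (the repo id is assumed to be "extras" on CentOS):

  yum --enablerepo=extras info container-selinux   # confirm 2:2.21-2.gitba103ac.el7 or newer is available
  yum update container-selinux                     # update if an older build is installed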
Excellent.
Verified.

Environment:
openstack-selinux-0.8.11-0.20171103171618.ce13ba7.el7ost.noarch

The issue doesn't reproduce; I was able to launch instances without disabling SELinux on the compute nodes.
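For reference, the verification roughly amounts to the following (flavor, image, network, and server names are placeholders):

  getenforce                                   # must report "Enforcing" on the compute nodes
  openstack server create --flavor m1.small --image cirros --network private test-vm
  openstack server show test-vm -c status      # expect ACTIVE
  ausearch -m AVC -ts recent | grep qemu-kvm   # expect no new qemu-kvm denials on the compute node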
My issue was related to an overcloud customization that changed the SELinux context of /etc/ld.so.cache. Running "restorecon -v /etc/ld.so.cache" fixed the issue, so I can confirm that the bug is fixed for me.
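For anyone hitting the same symptom, a quick way to spot and repair that kind of mislabel (a diagnostic sketch, not part of the errata fix):

  matchpathcon /etc/ld.so.cache      # what the policy says the label should be
  ls -Z /etc/ld.so.cache             # what the file currently has
  restorecon -v /etc/ld.so.cache     # reset it to the policy default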
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3462