Description of problem:

After a minor update of a RHOSP 16 environment (16.0.2 → 16.1), we cannot spawn new instances with SELinux in Enforcing mode. Here are some denials found in audit.log:

~~~
type=AVC msg=audit(1596552157.909:578): avc: denied { entrypoint } for pid=8860 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=137413 scontext=system_u:system_r:svirt_t:s0:c141,c914 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=0
type=AVC msg=audit(1596555374.210:1689): avc: denied { entrypoint } for pid=18428 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=137413 scontext=system_u:system_r:svirt_t:s0:c316,c469 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=1
type=AVC msg=audit(1596555374.210:1689): avc: denied { read write } for pid=18428 comm="qemu-kvm" path="/dev/mapper/control" dev="devtmpfs" ino=11765 scontext=system_u:system_r:svirt_t:s0:c316,c469 tcontext=system_u:object_r:lvm_control_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1596555374.210:1689): avc: denied { read execute } for pid=18428 comm="qemu-kvm" path="/usr/libexec/qemu-kvm" dev="overlay" ino=137413 scontext=system_u:system_r:svirt_t:s0:c316,c469 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=1
type=AVC msg=audit(1596555374.225:1690): avc: denied { open } for pid=18428 comm="qemu-kvm" path="/etc/ld.so.cache" dev="overlay" ino=117069 scontext=system_u:system_r:svirt_t:s0:c316,c469 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=1
type=AVC msg=audit(1596555374.225:1691): avc: denied { read } for pid=18428 comm="qemu-kvm" name="lib64" dev="overlay" ino=117065 scontext=system_u:system_r:svirt_t:s0:c316,c469 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=lnk_file permissive=1
type=AVC msg=audit(1596555374.525:1692): avc: denied { read } for pid=18428 comm="qemu-kvm" name="/" dev="overlay" ino=116647 scontext=system_u:system_r:svirt_t:s0:c316,c469 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=dir permissive=1
type=AVC msg=audit(1596562587.232:1911): avc: denied { entrypoint } for pid=20925 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=144355 scontext=system_u:system_r:svirt_t:s0:c970,c979 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=0
type=AVC msg=audit(1596563775.829:2316): avc: denied { entrypoint } for pid=24507 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=144355 scontext=system_u:system_r:svirt_t:s0:c337,c866 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=0
~~~

The traceback from nova looks like this:

~~~
Instance failed to spawn: libvirt.libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 2663, in _build_resources
    yield resources
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 2437, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3647, in spawn
    cleanup_instance_disks=created_disks)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6473, in _create_domain_and_network
    cleanup_instance_disks=cleanup_instance_disks)
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6439, in _create_domain_and_network
    post_xml_callback=post_xml_callback)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 6368, in _create_domain
    guest.launch(pause=pause)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 143, in launch
    self._encoded_xml, errors='ignore')
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 138, in launch
    return self._domain.createWithFlags(flags)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
    six.reraise(c, e, tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1265, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
~~~

This looks very similar to bug 1841822. If this is indeed related to a bug in podman 1.6.4, we can mark this one as a duplicate and proceed with 1841822, though we might need some extra checks.
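As an aside for anyone triaging a similar report: the relevant records can be pulled live with `ausearch -m avc -ts recent` on the compute node, and stripping a record down to its source and target contexts makes the mismatch easy to spot. A minimal sketch, using one of the denial lines quoted above as sample input:

```shell
# One of the AVC records from the report, inlined as sample input.
line='type=AVC msg=audit(1596552157.909:578): avc: denied { entrypoint } for pid=8860 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=137413 scontext=system_u:system_r:svirt_t:s0:c141,c914 tcontext=system_u:object_r:container_file_t:s0:c143,c388 tclass=file permissive=0'

# Keep only the source and target SELinux contexts: qemu-kvm runs as
# svirt_t, but the binary it tries to enter is labeled container_file_t.
echo "$line" | grep -oE '[st]context=[^ ]+'
# prints:
# scontext=system_u:system_r:svirt_t:s0:c141,c914
# tcontext=system_u:object_r:container_file_t:s0:c143,c388
```

A correctly labeled qemu binary would never carry container_file_t, which is why the `entrypoint` permission is denied.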
Version-Release number of selected component (if applicable):
openstack-selinux-0.8.20-0.20200428133425.3300746.el8ost.noarch
podman-1.6.4-12.module+el8.2.0+6669+dde598ec.x86_64

How reproducible:
Feel free to ask for any additional inputs which might be helpful to triage the issue. Thanks
This looks exactly like bug 1841822 indeed... /usr/libexec/qemu-kvm should not have the container_file_t label. I think the podman version may be too old: looking at one of the dependent bugs for podman (bug 1846364), it seems we need at least podman-1.6.4-15. Cedric, I'm adding you as needinfo just in case you spot anything else missing - there were a few moving parts with that other bug. Thank you!
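For the record, a quick way to confirm that diagnosis on an affected compute would be to compare the label inside the container with what the host policy expects (nova_libvirt is the standard RHOSP container name; the exact expected label depends on the policy version, but it must not be container_file_t):

~~~
# podman exec nova_libvirt ls -Z /usr/libexec/qemu-kvm
# matchpathcon /usr/libexec/qemu-kvm
~~~

If the first command reports container_file_t while matchpathcon reports a qemu/virt type, the container content was mislabeled and the container needs to be recreated.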
Hello, you appear to have the wrong podman version; it should be 1.6.4-15. So I'd say "duplicate", indeed. If you can update your podman version and restart the containers (or reboot the hosts directly), you should be good. Cheers, C.
podman-1.6.4-15 should be in the container-tools:2.0 module stream, which should be enabled by default (as opposed to the default container-tools:rhel8 stream, which shouldn't be used).
Thanks for the investigation! It didn't get updated during the 16.0.2 -> 16.1 update. Either there is an issue with the capsule server we use or the module stream is wrong.

@Julie: Who should enable container-tools:2.0? This is what I currently have:

~~~
# yum module list container-tools
Updating Subscription Management repositories.
/usr/lib/python3.6/site-packages/dateutil/parser/_parser.py:70: UnicodeWarning: decode() called on unicode string, see https://bugzilla.redhat.com/show_bug.cgi?id=1693751
  instream = instream.decode()
Fast Datapath for RHEL 8 x86_64 (RPMs)                                                      27 kB/s | 2.4 kB     00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs)             25 kB/s | 2.4 kB     00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)          35 kB/s | 2.8 kB     00:00
Red Hat Enterprise Linux 8 for x86_64 - High Availability - Extended Update Support (RPMs)  23 kB/s | 2.4 kB     00:00
Advanced Virtualization for RHEL 8 x86_64 (RPMs)                                            24 kB/s | 2.8 kB     00:00
Red Hat Satellite Tools 6.5 for RHEL 8 x86_64 (RPMs)                                        17 kB/s | 2.1 kB     00:00
Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs)                                         22 kB/s | 2.4 kB     00:00
Red Hat OpenStack Platform 16.1 for RHEL 8 x86_64 (RPMs)                                    18 kB/s | 2.4 kB     00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)
Name                 Stream         Profiles        Summary
container-tools      rhel8 [d][e]   common [d]      Common tools and dependencies for container runtimes
container-tools      1.0            common [d]      Common tools and dependencies for container runtimes
container-tools      2.0            common [d]      Common tools and dependencies for container runtimes

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
~~~
I think this should have been resolved as part of bug 1829609... Lukas, you were looking at that other bug: it looks like the correct module (container-tools:2.0) wasn't enabled as expected during a 16.0.2 to 16.1 update. Do you have any ideas?
It looks like the bug I linked to is about upgrades, not updates. I think the correct module stream might need to be enabled manually when setting up the new repos.
Priscila, with regard to the new case you linked, Cedric described the workaround in comment 2: update podman and restart the containers. You may need to enable the correct module stream first in order to get the right podman version.
Is container-tools the only module stream which needs to be changed, or are there others as well?
I suspect virt:8.2 needs to be enabled as well if it isn't. It seems like there's a need to update the 16.1 documentation...
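For reference, checking which virt stream is active on a compute is the same `module list` dance as for container-tools; the `[e]` flag marks the enabled stream (this assumes the Advanced Virtualization repo is enabled, as it is in the repo list above):

~~~
# dnf module list virt
~~~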
I opened bug 1866479 to track the wrong module being set up, while we debug the libvirt issue here.
The nova_libvirt container needs to be replaced. Steps from Cedric, to run *on the compute* as root:

~~~
# dnf module disable -y container-tools:rhel8
# dnf module enable -y container-tools:2.0
# dnf upgrade -y podman
# systemctl disable --now tripleo_nova_libvirt
# podman rm nova_libvirt
# paunch apply --file /var/lib/tripleo-config/container-startup-config/step_3/nova_libvirt.json --config-id step_3
# systemctl enable tripleo_nova_libvirt
~~~

Then we get a fresh container with the correct labels, and VMs can be started.
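Before retrying an instance boot, it may be worth double-checking that both pieces actually landed: the first command below should report podman-1.6.4-15 or newer, and the second should no longer show container_file_t on the binary (exact qemu label varies with the policy version):

~~~
# rpm -q podman
# podman exec nova_libvirt ls -Z /usr/libexec/qemu-kvm
~~~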
As an additional precaution, we should also add the following two commands to the workaround above, to avoid other potential issues in the future:

~~~
# dnf module disable virt:rhel
# dnf module enable virt:8.2
~~~