Description of problem:
On a fresh RHOSP12 deployment, no VM can be launched:

2018-01-25 14:15:17.466 1 ERROR nova.compute.manager [instance: f91eb6ab-7da0-460a-83e6-650d6cf5015e] libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2018-01-25 14:15:17.466 1 ERROR nova.compute.manager [instance: f91eb6ab-7da0-460a-83e6-650d6cf5015e]

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Deploy an overcloud
2. Try spawning a VM

Actual results:
Fails

Expected results:
Succeeds

Additional info:
This appears to be happening if "--libvirt-type qemu" is specified ... Something's wrong if the virtualization type we're trying to use is not "kvm"
If I deploy with "openstack overcloud deploy --libvirt-type kvm" and configure nested virtualization, everything seems to be working as expected, as I can deploy a VM on that virtual compute. If I use "openstack overcloud deploy --libvirt-type qemu", then I get the "permission denied" while trying to call /usr/libexec/qemu-kvm. If "--libvirt-type qemu" is no longer supported, perhaps a deprecation message should be displayed, or even remove that argument altogether.
Created attachment 1389754 [details] instance-00000002.log
()[root@overcloud-compute-0 /]# rpm -qa | grep libvirt
libvirt-daemon-driver-lxc-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-config-nwfilter-3.2.0-14.el7_4.7.x86_64
libvirt-libs-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-network-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-secret-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-kvm-3.2.0-14.el7_4.7.x86_64
libvirt-client-3.2.0-14.el7_4.7.x86_64
libvirt-python-3.2.0-3.el7_4.1.x86_64
libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.7.x86_64
libvirt-daemon-driver-interface-3.2.0-14.el7_4.7.x86_64
()[root@overcloud-compute-0 /]# rpm -qa | grep qemu
qemu-kvm-common-rhev-2.9.0-16.el7_4.13.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.13.x86_64
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64
qemu-img-rhev-2.9.0-16.el7_4.13.x86_64
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
Created attachment 1389790 [details] libvirtd.log
I do have this in the host audit logs:

type=AVC msg=audit(1517519328.362:302): avc: denied { entrypoint } for pid=29046 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda3" ino=13844771 scontext=system_u:system_r:svirt_tcg_t:s0:c347,c689 tcontext=system_u:object_r:container_share_t:s0 tclass=file
type=AVC msg=audit(1517521309.159:405): avc: denied { entrypoint } for pid=50806 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda3" ino=13844771 scontext=system_u:system_r:svirt_tcg_t:s0:c642,c778 tcontext=system_u:object_r:container_share_t:s0 tclass=file
type=AVC msg=audit(1517521639.777:488): avc: denied { entrypoint } for pid=51929 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda3" ino=13844771 scontext=system_u:system_r:svirt_tcg_t:s0:c328,c939 tcontext=system_u:object_r:container_share_t:s0 tclass=file
Setting SELinux to permissive solves the problem, which confirms that either something is wrong in the configuration or the SELinux policies must be updated to allow this.
Just to make sure I understand this correctly - using nested virtualisation to boot tenant VMs on a virtual compute works fine, but using qemu emulation for that same purpose fails with selinux errors? Could we see nova.conf from the compute node, as deployed when you use `openstack overcloud deploy --libvirt-type kvm`? Normally we should see virt_type=kvm. If you then switch that to qemu, do we hit the same selinux permissions issues?
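(For reference, virt_type lives in the [libvirt] section of nova.conf; a minimal fragment for illustration only - the surrounding options are omitted, and the file's path inside the containerized deployment may vary:)

```
[libvirt]
# kvm = hardware-accelerated (runs as svirt_t); qemu = plain emulation (runs as svirt_tcg_t)
virt_type = qemu
```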
Assuming my previous comment #9 is correct in its description of the issue, the compute DFG would like to keep this open to double check that CI isn't missing anything. All of our CI runs with qemu emulation, and so should have picked up not being able to launch VMs fairly quickly. That being said, the priority and severity are being lowered, since nothing is on fire and we just want to make sure we're not doing something stupid like running CI in permissive mode, for example.
Using nested virtualisation works as expected, but when using qemu emulation, launching VMs fails with the AVC denials pasted above. The nova.conf difference between the two is virt_type=qemu vs. virt_type=kvm.

This is how I troubleshot this issue:
1) Tried to launch a VM with virt_type=qemu; it failed with the SELinux AVC denial
2) Configured nested virtualisation on the guest with "options kvm_intel nested=1"
3) Stopped/started the VM to apply the change
4) grep vmx /proc/cpuinfo # make sure it is seen
5) Tried to launch a VM again with virt_type=qemu; it failed again with the SELinux AVC denial
6) Changed virt_type to kvm
7) Restarted the container
8) Tried to launch a VM and it succeeded

I'll try reproducing this issue now by reverting virt_type=kvm to virt_type=qemu and confirm whether we still see the SELinux AVC denial, but this should remain the same since SELinux policies are not generated on container boot (are they?)
The key difference between kvm and qemu is that the former runs with the svirt_t context while the latter runs with svirt_tcg_t. The AVC here:

type=AVC msg=audit(1517519328.362:302): avc: denied { entrypoint } for pid=29046 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda3" ino=13844771 scontext=system_u:system_r:svirt_tcg_t:s0:c347,c689 tcontext=system_u:object_r:container_share_t:s0 tclass=file

would correspond to an allow rule:

allow svirt_tcg_t container_share_t:file entrypoint;

Since you claim KVM works, I expect the policy already has

allow svirt_t container_share_t:file entrypoint;

and we just need to add the corresponding rule for svirt_tcg_t too.
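The AVC-to-allow-rule mapping described above is mechanical; the following toy sketch (plain Python, only an illustration of what a tool like audit2allow derives - the helper name is made up) pulls the type fields out of the denial record pasted above:

```python
import re

# The AVC record from the host audit log, verbatim.
avc = ('type=AVC msg=audit(1517519328.362:302): avc: denied { entrypoint } '
       'for pid=29046 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="vda3" '
       'ino=13844771 scontext=system_u:system_r:svirt_tcg_t:s0:c347,c689 '
       'tcontext=system_u:object_r:container_share_t:s0 tclass=file')

def avc_to_allow(record):
    """Derive the allow rule an AVC denial is asking for (toy helper)."""
    perm = re.search(r'denied\s+\{ ([^}]+) \}', record).group(1).strip()
    src = re.search(r'scontext=\w+:\w+:(\w+)', record).group(1)   # source type
    tgt = re.search(r'tcontext=\w+:\w+:(\w+)', record).group(1)   # target type
    cls = re.search(r'tclass=(\w+)', record).group(1)             # object class
    return f'allow {src} {tgt}:{cls} {perm};'

print(avc_to_allow(avc))
# allow svirt_tcg_t container_share_t:file entrypoint;
```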
While neither nested virt nor (I think) qemu emulation are officially supported in our product, we would definitely like qemu emulation to work, if only for GSS and QE. Looking at Dan's comment #12, and at the contents of openstack-selinux, I think this bug needs to be retargeted to selinux-policy.
Hi Lukas, A gentle ping to see if someone is already working on this. This bug came up again elsewhere: https://bugzilla.redhat.com/show_bug.cgi?id=1598426 ("[OSP14] [standalone openstack] cannot launch instance with enable selinux for all-in-one installation - libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied")
*** Bug 1598426 has been marked as a duplicate of this bug. ***
As suggested in comment 12, in os-nova.te we do indeed have:

allow svirt_t container_share_t:file { entrypoint execute };

We probably want to replicate a bunch of those rules for svirt_tcg_t.
*** Bug 1582730 has been marked as a duplicate of this bug. ***
Neither rule is in upstream selinux-policy:

sesearch -A -s svirt_t -p entrypoint -c file
allow svirt_t qemu_exec_t:file { entrypoint execute getattr ioctl lock map open read };

Adding rules to allow entrypoint from container_file_t and container_share_t LGTM.
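A local policy module carrying those rules might look like the following sketch - the module name is made up here, and the final rule set should of course follow whatever openstack-selinux / selinux-policy end up shipping:

```
module local_svirt_tcg 1.0;

require {
    type svirt_tcg_t;
    type container_share_t;
    type container_file_t;
    class file { entrypoint execute };
}

allow svirt_tcg_t container_share_t:file { entrypoint execute };
allow svirt_tcg_t container_file_t:file { entrypoint execute };
```

Such a module would typically be compiled and loaded with checkmodule -M -m, semodule_package, and semodule -i.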
I wonder if the root cause here is not the selinux policy, but rather the context that libvirtd is running under. We've had another bug recently where things broke because libvirtd was not running in the normal "virtd_t" type. I think we should fix the context that libvirtd runs as to "virtd_t", which is what it is tested with for non-container scenarios, otherwise we'll keep hitting more of these kinds of bugs.
Well, the entrypoint AVCs have nothing to do with the source process label. They basically state that svirt_t can currently only be entered via qemu_exec_t types, no matter the source context attempting to launch the qemu process.
Oh right, yes, so this is complaining because /usr/libexec/qemu-kvm does not get given the usual file context it would have outside the container. Adding the extra entrypoint doesn't loosen security too much, as libvirtd is the one that will be spawning QEMU and it's already trusted.
Right, the ability to launch a process with a certain type requires multiple permissions, one of them being entrypoint. But we also control transition:

> sesearch -A -t svirt_t -p transition -c process
allow container_runtime_t virt_domain:process { sigkill signal signull sigstop transition };
allow spc_t virt_domain:process { sigkill signal signull sigstop transition };
allow staff_t virt_domain:process { sigkill signal signull sigstop transition };
allow unconfined_service_t virt_domain:process { sigkill signal signull sigstop transition };
allow unconfined_t domain:process transition;
allow unconfined_t virt_domain:process { sigkill signal signull sigstop transition };
allow user_t virt_domain:process { sigkill signal signull sigstop transition };
allow virtd_t virt_domain:process { getattr getsched setsched sigkill signal signull transition };

So these are the domains that can currently transition to the svirt_t type.
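To illustrate why both rule types matter, here is a toy model - plain Python, deliberately not the real SELinux implementation, with made-up rule data - of the two checks a domain transition needs to pass: the source domain needs "transition" to the target domain, and the target domain needs "entrypoint" on the executable's file type:

```python
# Toy rule table: (source type, target type, object class) -> permissions.
# The entrypoint rule for container_share_t is deliberately missing,
# mirroring the AVC denials reported in this bug.
ALLOW = {
    ("virtd_t", "svirt_tcg_t", "process"): {"transition", "signal"},
    ("svirt_tcg_t", "qemu_exec_t", "file"): {"entrypoint", "execute"},
}

def can_exec(src_domain, new_domain, exec_file_type):
    """Both checks must pass for the domain transition to succeed."""
    trans = "transition" in ALLOW.get((src_domain, new_domain, "process"), set())
    entry = "entrypoint" in ALLOW.get((new_domain, exec_file_type, "file"), set())
    return trans and entry

# qemu-kvm labeled qemu_exec_t (outside a container): both checks pass.
print(can_exec("virtd_t", "svirt_tcg_t", "qemu_exec_t"))        # True
# qemu-kvm labeled container_share_t (inside the container): entrypoint missing.
print(can_exec("virtd_t", "svirt_tcg_t", "container_share_t"))  # False
```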
I could reproduce on a fresh install of OSP13 as well.
May be partially resolved by: https://github.com/redhat-openstack/openstack-selinux/pull/19
Hello Vincent, did the deployment include this: https://github.com/redhat-openstack/openstack-selinux/pull/19 Also I would like to ask: was it on RHEL 7.6 with the current selinux policy/package? Thanks Zoli Caplovic
The openstack-selinux-0.8.15-1 package should fix most of this.
Hi Zoli, No, this was with RHEL 7.5. I guess you can drop my needinfo. Thank you,