Bug 1018083 - SElinux preventing VM startup after upgrading to qemu-system-x86-1.6.0-10.fc20.x86_64
Alias: None
Product: Fedora
Classification: Fedora
Component: qemu
Version: 20
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Fedora Virtualization Maintainers
QA Contact: Fedora Extras Quality Assurance
Duplicates: 993541
Depends On:
Reported: 2013-10-11 07:38 UTC by Rolf Fokkens
Modified: 2013-11-16 15:10 UTC
12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2013-11-05 23:55:32 UTC

Attachments (Terms of Use)
The requested log file (37.15 KB, text/plain)
2013-10-11 09:15 UTC, Rolf Fokkens
A guest config resulting in the reported issue (2.91 KB, text/xml)
2013-10-12 14:08 UTC, Rolf Fokkens
Another guest config resulting in the reported issue (2.96 KB, text/xml)
2013-10-12 14:09 UTC, Rolf Fokkens

Description Rolf Fokkens 2013-10-11 07:38:08 UTC
Description of problem:
VMs won't start after upgrading to qemu-system-x86-1.6.0-10.fc20.x86_64. After setting SELinux to permissive, the problem is 'solved'.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. yum update
2. start VM
3. Note that it won't start

Actual results:
VM doesn't start

Expected results:
VM starts

Additional info:
Oct 11 09:28:53 home07 setroubleshoot: SELinux is preventing /usr/bin/qemu-system-x86_64 from using the execmem access on a process. For complete SELinux messages. run sealert -l dbd37d1e-53b9-409d-90eb-66ee8afa673b

[root@home07 ~]# unset LANG
[root@home07 ~]# sealert -l dbd37d1e-53b9-409d-90eb-66ee8afa673b
SELinux is preventing /usr/bin/qemu-system-x86_64 from using the execmem access on a process.

*****  Plugin catchall_boolean (89.3 confidence) suggests   ******************

If you want to allow virt to use execmem
Then you must tell SELinux about this by enabling the 'virt_use_execmem' boolean.
You can read 'None' man page for more details.
setsebool -P virt_use_execmem 1

*****  Plugin catchall (11.6 confidence) suggests   **************************

If you believe that qemu-system-x86_64 should be allowed execmem access on processes labeled svirt_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
allow this access for now by executing:
# grep qemu-system-x86 /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

Additional Information:
Source Context                system_u:system_r:svirt_t:s0:c410,c622
Target Context                system_u:system_r:svirt_t:s0:c410,c622
Target Objects                 [ process ]
Source                        qemu-system-x86
Source Path                   /usr/bin/qemu-system-x86_64
Port                          <Unknown>
Host                          home07.rolf-en-monique.lan
Source RPM Packages           qemu-system-x86-1.6.0-10.fc20.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.12.1-84.fc20.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     home07.rolf-en-monique.lan
Platform                      Linux home07.rolf-en-monique.lan
                              3.11.3-301.fc20.x86_64 #1 SMP Thu Oct 3 00:57:21
                              UTC 2013 x86_64 x86_64
Alert Count                   1
First Seen                    2013-10-11 09:28:51 CEST
Last Seen                     2013-10-11 09:28:51 CEST
Local ID                      dbd37d1e-53b9-409d-90eb-66ee8afa673b

Raw Audit Messages
type=AVC msg=audit(1381476531.923:557): avc:  denied  { execmem } for  pid=3380 comm="qemu-system-x86" scontext=system_u:system_r:svirt_t:s0:c410,c622 tcontext=system_u:system_r:svirt_t:s0:c410,c622 tclass=process

type=SYSCALL msg=audit(1381476531.923:557): arch=x86_64 syscall=mmap success=yes exit=139778582290432 a0=7f20bcbea000 a1=40000 a2=7 a3=812 items=0 ppid=1 pid=3380 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 ses=4294967295 tty=(none) comm=qemu-system-x86 exe=/usr/bin/qemu-system-x86_64 subj=system_u:system_r:svirt_t:s0:c410,c622 key=(null)

Hash: qemu-system-x86,svirt_t,svirt_t,process,execmem

[root@home07 ~]#
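[Editorial aside: the raw AVC message above carries all the fields that matter for diagnosis. A minimal sketch of pulling them out with awk; the sample line is copied verbatim from this report, and on a live system you would read from /var/log/audit/audit.log instead.]

```shell
# Sample AVC denial line, copied from the report above.
avc='type=AVC msg=audit(1381476531.923:557): avc:  denied  { execmem } for  pid=3380 comm="qemu-system-x86" scontext=system_u:system_r:svirt_t:s0:c410,c622 tcontext=system_u:system_r:svirt_t:s0:c410,c622 tclass=process'

# Print the denied permission, the source SELinux type, and the object class.
echo "$avc" | awk '{
    for (i = 1; i <= NF; i++) {
        if ($i == "{")           print "denied: " $(i + 1)
        if ($i ~ /^scontext=/) { split($i, c, ":"); print "source type: " c[3] }
        if ($i ~ /^tclass=/)   { sub(/^tclass=/, "", $i); print "class: " $i }
    }
}'
# prints:
# denied: execmem
# source type: svirt_t
# class: process
```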

Comment 1 Daniel Berrangé 2013-10-11 08:32:56 UTC
Hmm, 'execmem' should only be required in TCG mode, and if libvirt had expected TCG mode it would have used svirt_tcg_t.

Please provide the /var/log/libvirt/qemu/GUESTNAME.log  file

Comment 2 Rolf Fokkens 2013-10-11 09:15:27 UTC
Created attachment 810923 [details]
The requested log file

Comment 3 Daniel Berrangé 2013-10-11 09:18:08 UTC
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name F20test -S -machine pc-i440fx-1.6,accel=kvm,usb=off ...snip....

So it is definitely being launched with KVM. If something is still requiring execmem, this looks like a (potentially serious) bug in QEMU.
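[Editorial aside: the accelerator can be read mechanically off a guest's command line, since the -machine argument carries the accel= setting. A sketch against the (truncated) command line quoted above:]

```shell
# The command line is the one quoted above, truncated to the part that
# matters here; only the -machine argument is inspected.
cmdline='/usr/bin/qemu-kvm -name F20test -S -machine pc-i440fx-1.6,accel=kvm,usb=off'

# Take the token after -machine, then pick out the accel= key.
echo "$cmdline" | tr ' ' '\n' | grep -A1 '^-machine$' | tail -n 1 | tr ',' '\n' | grep '^accel='
# prints: accel=kvm
```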

Comment 4 Richard W.M. Jones 2013-10-11 09:40:19 UTC
There's no way to get a stack trace when the SELinux event
happens? Just thinking aloud here... Presumably the system
call fails with -EPERM and qemu exits (i.e. it doesn't abort).
I can't think of a way except something like systemtap or
an LD_PRELOAD library that would call abort on the required
Comment 5 Daniel Berrangé 2013-10-11 10:32:51 UTC
When in permissive mode, can you run the following command on your running guest

# virsh qemu-monitor-command $GUESTNAME --hmp 'info kvm'

to just confirm it is using KVM.

Comment 6 Rolf Fokkens 2013-10-11 10:47:47 UTC
[rolf@home07 ~]$ virsh qemu-monitor-command F20test --hmp 'info kvm'
kvm support: enabled

[rolf@home07 ~]$
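[Editorial aside: the same check scripts easily by matching the monitor's answer. A sketch; the 'info kvm' output used here is the one captured in comment 6, and the assumption that the answer contains either "enabled" or "disabled" is mine, not stated in this report.]

```shell
# Sketch: classify a guest from its 'info kvm' answer. On a live host:
#   out=$(virsh qemu-monitor-command F20test --hmp 'info kvm')
# Here the answer captured above is used directly.
out='kvm support: enabled'

case "$out" in
    *'kvm support: enabled'*) echo "KVM in use: svirt_t without execmem is expected" ;;
    *)                        echo "TCG in use: svirt_tcg_t (execmem allowed) is expected" ;;
esac
```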

Comment 7 Daniel Berrangé 2013-10-11 10:54:45 UTC
I installed 1.6.0-10 on my F19 system and can't reproduce the problem. Can you tell me what kernel & libvirt versions you have installed? Also provide the XML config for the guest in question. Does it fail with all guests, or just one?

Also, can you look in /var/log/yum.log to find out what previous version of QEMU you had installed, which worked correctly.

Comment 8 Rolf Fokkens 2013-10-12 13:59:01 UTC
[root@home07 ~]# grep qemu /var/log/yum.log 
Sep 15 22:32:44 Installed: ipxe-roms-qemu-20130517-3.gitc4bce43.fc20.noarch
Sep 15 22:35:42 Installed: 2:qemu-img-1.6.0-7.fc20.x86_64
Sep 15 22:35:48 Installed: 2:qemu-common-1.6.0-7.fc20.x86_64
Sep 15 22:46:18 Installed: libvirt-daemon-driver-qemu-1.1.2-1.fc20.x86_64
Sep 15 22:46:52 Installed: 2:qemu-system-x86-1.6.0-7.fc20.x86_64
Sep 15 22:46:54 Installed: 2:qemu-kvm-1.6.0-7.fc20.x86_64
Sep 15 22:59:07 Installed: 2:qemu-guest-agent-1.6.0-7.fc20.x86_64
Sep 25 14:48:43 Updated: libvirt-daemon-driver-qemu-1.1.2-3.fc20.x86_64
Sep 26 20:59:51 Updated: 2:qemu-common-1.6.0-8.fc20.x86_64
Sep 26 20:59:54 Updated: 2:qemu-system-x86-1.6.0-8.fc20.x86_64
Sep 26 20:59:57 Updated: 2:qemu-kvm-1.6.0-8.fc20.x86_64
Sep 26 21:00:08 Updated: 2:qemu-img-1.6.0-8.fc20.x86_64
Sep 26 21:00:16 Updated: libvirt-daemon-driver-qemu-1.1.2-4.fc20.x86_64
Sep 26 21:00:55 Updated: 2:qemu-guest-agent-1.6.0-8.fc20.x86_64
Oct 02 19:02:43 Updated: libvirt-daemon-driver-qemu-1.1.3-1.fc20.x86_64
Oct 08 23:08:41 Updated: 2:qemu-img-1.6.0-9.fc20.x86_64
Oct 08 23:08:48 Updated: libvirt-daemon-driver-qemu-1.1.3-2.fc20.x86_64
Oct 08 23:09:04 Updated: 2:qemu-common-1.6.0-9.fc20.x86_64
Oct 08 23:09:07 Updated: 2:qemu-system-x86-1.6.0-9.fc20.x86_64
Oct 08 23:09:10 Updated: 2:qemu-kvm-1.6.0-9.fc20.x86_64
Oct 08 23:09:34 Updated: 2:qemu-guest-agent-1.6.0-9.fc20.x86_64
Oct 11 09:12:11 Updated: 2:qemu-common-1.6.0-10.fc20.x86_64
Oct 11 09:12:14 Updated: 2:qemu-system-x86-1.6.0-10.fc20.x86_64
Oct 11 09:12:17 Updated: 2:qemu-kvm-1.6.0-10.fc20.x86_64
Oct 11 09:12:32 Updated: 2:qemu-img-1.6.0-10.fc20.x86_64
Oct 11 09:12:34 Updated: 2:qemu-guest-agent-1.6.0-10.fc20.x86_64
[root@home07 ~]#

Comment 9 Rolf Fokkens 2013-10-12 14:04:00 UTC
Package versions:

Before Oct 11 I had no issues, so that must have been qemu 1.6.0-9. I'm pretty sure I used that (between Oct 8 and Oct 11).

Comment 10 Rolf Fokkens 2013-10-12 14:06:57 UTC
I'm running two guests, both showing the same symptoms when started.

Comment 11 Rolf Fokkens 2013-10-12 14:08:34 UTC
Created attachment 811573 [details]
A guest config resulting in the reported issue

Comment 12 Rolf Fokkens 2013-10-12 14:09:50 UTC
Created attachment 811574 [details]
Another guest config resulting in the reported issue

Comment 13 Cole Robinson 2013-10-31 22:41:07 UTC
Rolf, still reproducing with latest F20? I can't seem to trigger this.

If so, can you enable SELinux, reproduce, and post /var/log/libvirt/qemu/$vmname.log for the offending VM?

Also, can you confirm whether the issue reproduces after "sudo yum downgrade qemu\*" (and note which versions it downgrades to)?

Comment 14 Rolf Fokkens 2013-11-01 08:53:33 UTC
The problem that I had was based on a separate F20 install (on an 80GB HDD I had lying around). My current F20 installation is the upgraded F19 system I used before. I could try to reproduce with the 80GB disk, but before doing that I'll try to reproduce on the current system. This results in a new error:

2013-11-01 08:44:50.690+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name Fedora-20-SSD-test3 -S -machine pc-i440fx-1.4,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 2c6c12af-1053-4104-8347-d8968355e2ac -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Fedora-20-SSD-test3.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/dev/BCACHE/F20-T3-HDD1,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=2 -drive file=/dev/BCACHE/F20-T3-HDD2,if=none,id=drive-scsi0-0-0-1,format=raw,cache=none,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1 -drive file=/dev/BCACHE/F20-T3-HDD3,if=none,id=drive-scsi0-0-0-2,format=raw,cache=none,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi0-0-0-2,id=scsi0-0-0-2 -drive file=/dev/SSD/F20-T3-SSD1,if=none,id=drive-scsi0-0-0-3,format=raw,cache=none,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi0-0-0-3,id=scsi0-0-0-3 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e8:14:b4,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
/usr/bin/qemu-system-x86_64: error while loading shared libraries: libGL.so.1: failed to map segment from shared object: Permission denied
2013-11-01 08:44:50.958+0000: shutting down

Looks like an NVIDIA driver / SELinux issue.

Comment 15 Rolf Fokkens 2013-11-01 09:58:26 UTC
In syslog I see the following messages:

Nov  1 09:42:45 home07 kernel: [    2.740939] type=1404 audit(1383295357.421:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
Nov  1 09:44:34 home07 kernel: [  119.597276] type=1400 audit(1383295474.277:6): avc:  denied  { execmem } for  pid=3166 comm="qemu-system-x86" scontext=system_u:system_r:svirt_t:s0:c575,c983 tcontext=system_u:system_r:svirt_t:s0:c575,c983 tclass=process

So there's still the execmem issue. But the libGL issue kind of surprises me. I can't find an easy way to get around it so I can reproduce the original issue. xorg-x11-drv-nvidia-libs (I know, rpmfusion, so no formal support?) includes /usr/lib64/nvidia/libGL.so.1, which is on the library path by means of /etc/ld.so.conf.d/nvidia-lib64.conf.

Is there an SELinux command to allow access?

But... why does the VM try to load libGL.so?

Comment 16 Cole Robinson 2013-11-01 13:27:09 UTC
(In reply to Rolf Fokkens from comment #15)
> But... why does the VM try to load libGL.so?

My guess is it's just some side effect of linking against SDL or spice or some other graphics-related bit.

nvidia does some crazy stuff with their packages. Please try uninstalling all nvidia stuff and see if you can still reproduce using nouveau or similar. Obviously that's not a permanent solution for you, but it will tell us whether it's qemu's fault or not.

Comment 17 Rolf Fokkens 2013-11-03 22:30:22 UTC
After removing all nvidia stuff the VM was able to start, even with SELinux enabled. The kvm process used the following libs:

[root@home07 ~]# awk '{ if ($6 ~ "/.*") print $6}' kvm.maps  | sort -u
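[Editorial aside: the awk filter above keeps field 6 of /proc/<pid>/maps, which is the backing path and is present only for file-backed mappings. An illustration of the same filter over a couple of sample maps lines; the addresses, inode numbers, and library names below are made up.]

```shell
# Sample /proc/<pid>/maps content (hypothetical values for illustration).
cat > kvm.maps <<'EOF'
7f20bc000000-7f20bc021000 rw-p 00000000 00:00 0
7f20bc200000-7f20bc3c0000 r-xp 00000000 fd:00 1234 /usr/lib64/libz.so.1
7f20bc400000-7f20bc5c0000 r-xp 00000000 fd:00 5678 /usr/bin/qemu-system-x86_64
7f20bc600000-7f20bc7c0000 r-xp 00000000 fd:00 1234 /usr/lib64/libz.so.1
EOF

# Same filter as in the comment: keep field 6 when it is a path,
# then sort -u to list each file once.
awk '{ if ($6 ~ "/.*") print $6 }' kvm.maps | sort -u
# prints:
# /usr/bin/qemu-system-x86_64
# /usr/lib64/libz.so.1
```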

Comment 18 Cole Robinson 2013-11-05 23:55:32 UTC
I'm going to chalk this up to nvidia library replacement craziness then, which means it's out of our hands, so closing as CANTFIX.

If you want qemu to work with selinux enabled, you can do:

sudo setsebool -P virt_use_execmem 1

but note that reduces security.

Comment 19 Rolf Fokkens 2013-11-06 06:47:42 UTC
Opened a bug at rpmfusion on this:


Comment 20 Nicolas Chauvet (kwizart) 2013-11-16 15:10:14 UTC
*** Bug 993541 has been marked as a duplicate of this bug. ***
