Bug 1269975 - svirt very occasionally prevents parallel libvirt access to 'kernel' file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 23
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 871196
Depends On:
Blocks: TRACKER-bugs-affecting-libguestfs 910270 921135 922891
 
Reported: 2015-10-08 16:30 UTC by Richard W.M. Jones
Modified: 2016-01-24 03:30 UTC
CC: 17 users

Fixed In Version: libvirt-1.2.18.2-2.fc23
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-24 03:30:07 UTC
Type: Bug




Internal Links: Red Hat Bugzilla 921135

Description Richard W.M. Jones 2015-10-08 16:30:22 UTC
Description of problem:

This test:
https://github.com/libguestfs/libguestfs/blob/master/align/test-virt-alignment-scan-guests.sh
which essentially starts up lots of parallel libvirt instances,
fails very occasionally with an SELinux alert.

time->Thu Oct  8 17:24:57 2015
type=PROCTITLE msg=audit(1444321497.797:12874): proctitle=2F7573722F62696E2F71656D752D73797374656D2D7838365F3634002D6D616368696E6500616363656C3D6B766D002D6E616D6500677565737466732D6D62676162697062326D7A633136636A002D53002D6D616368696E650070632D6934343066782D322E332C616363656C3D6B766D2C7573623D6F6666002D6370750068
type=SYSCALL msg=audit(1444321497.797:12874): arch=c000003e syscall=2 success=no exit=-13 a0=56100ea381f0 a1=0 a2=1b6 a3=0 items=0 ppid=1 pid=27931 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=(none) ses=1 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" subj=unconfined_u:unconfined_r:svirt_t:s0:c27,c595 key=(null)
type=AVC msg=audit(1444321497.797:12874): avc:  denied  { read } for  pid=27931 comm="qemu-system-x86" name="kernel" dev="sdb1" ino=5942072 scontext=unconfined_u:unconfined_r:svirt_t:s0:c27,c595 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

audit2allow suggests:

#============= svirt_t ==============
allow svirt_t user_home_t:file read;

The kernel file, literally called "kernel", is indeed located
in my home directory.
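As a stop-gap, audit2allow's suggestion could be built into a local policy module along these lines (the module name below is made up for illustration, and the rule is much broader than needed, since it lets every svirt guest read every file labelled user_home_t; the proper fix was later made in libvirt itself):

```
module local_svirt_kernel 1.0;

require {
    type svirt_t;
    type user_home_t;
    class file read;
}

allow svirt_t user_home_t:file read;
```

Such a module would typically be compiled and loaded with audit2allow -M and semodule -i, but it only papers over the underlying relabelling race.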

This is very difficult to reproduce on cue, and seems to be some
kind of race in libvirt.  Most of the time it works fine.

Version-Release number of selected component (if applicable):

selinux-policy-3.13.1-128.12.fc22.noarch
libvirt-1.2.20-1.fc24.x86_64

How reproducible:

Very rare.

Steps to Reproduce:
1. Run the libguestfs test suite, in the align/ subdirectory.

Comment 1 Miroslav Grepl 2015-10-12 17:37:45 UTC
Is this file shared by more virtual machines?

Comment 2 Richard W.M. Jones 2015-10-12 17:41:43 UTC
I believe so, yes.

Although only briefly: qemu loads the kernel file when it starts up,
and probably doesn't touch it at all after that.  In this test we are
starting up lots of qemu processes in parallel.

Comment 3 Richard W.M. Jones 2015-10-12 17:44:10 UTC
The qemu command line would be something like below.  The -kernel
parameter points to this file (it may have varying locations, including
under the $HOME directory if building libguestfs from source).
The file might be shared by multiple instances of qemu.  And libvirt
is likely doing some labelling here too.

/usr/bin/qemu-system-x86_64 -machine accel=kvm -name guestfs-i12y68tb1oxdtfvd -S -machine pc-i440fx-2.3,accel=kvm,usb=off -cpu host -m 500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid b8ff8fa3-c153-4adb-adf3-8cee828338d9 -nographic -no-user-config -nodefaults -device sga -chardev socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib/domain-guestfs-i12y68tb1oxdtfvd/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -no-acpi -boot strict=on -kernel /var/tmp/.guestfs-1000/appliance.d/kernel -initrd /var/tmp/.guestfs-1000/appliance.d/initrd -append panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm-256color -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/tmp/libguestfsztxHGG/devnull1,if=none,id=drive-scsi0-0-0-0,format=raw,cache=writeback -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/tmp/libguestfsztxHGG/overlay2,if=none,id=drive-scsi0-0-1-0,format=qcow2,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -chardev socket,id=charserial0,path=/tmp/libguestfsztxHGG/console.sock -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/tmp/libguestfsztxHGG/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on

Comment 4 Cole Robinson 2015-10-15 20:51:15 UTC
Can you be more specific about how the test is launching and stopping VMs?

Could it be that:

- VM1 startup requested, labels kernel virt_content_t
- VM1 qemu is launched
- VM2 startup requested, labels kernel virt_content_t
- VM1 is shutdown, resets label of kernel to user_home_t
- VM2 qemu tries to launch, hits selinux avc

Libvirt's locking may prevent that for all I know, but I didn't look closely
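The suspected interleaving above can be written down as a deterministic toy simulation (this is a model of the race, not libvirt code; the label names match the AVC in the description):

```python
# Toy model of the suspected SELinux relabel race (not libvirt code).
# Two VMs share one -kernel file; the file's label is one shared string.

def run_sequence(events):
    """Apply start/open/stop events; return VMs denied when opening the file."""
    label = "user_home_t"
    denials = []
    for vm, action in events:
        if action == "start":
            label = "svirt_image_t"        # libvirt labels the file for qemu
        elif action == "open":
            if label == "user_home_t":     # qemu reads -kernel at startup
                denials.append(vm)         # AVC: svirt_t read on user_home_t
        elif action == "stop":
            label = "user_home_t"          # libvirt restores the original label
    return denials

# The race from comment 4: VM1's shutdown restores the label before
# VM2's qemu has opened the shared kernel file.
racy = [("vm1", "start"), ("vm1", "open"),
        ("vm2", "start"),
        ("vm1", "stop"),                   # resets label to user_home_t
        ("vm2", "open")]                   # VM2 hits the denial
print(run_sequence(racy))                  # → ['vm2']
```

A serialized sequence (start, open, stop, then the next VM) produces no denial, which matches the observation that the failure is rare and only shows up under parallel startup.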

Comment 5 Richard W.M. Jones 2015-10-16 07:53:39 UTC
It just runs virDomainCreateXML (in parallel).

https://github.com/libguestfs/libguestfs/blob/master/src/launch-libvirt.c#L547

(In reply to Cole Robinson from comment #4)
> Could it be that:
> 
> - VM1 startup requested, labels kernel virt_content_t
> - VM1 qemu is launched
> - VM2 startup requested, labels kernel virt_content_t
> - VM1 is shutdown, resets label of kernel to user_home_t
> - VM2 qemu tries to launch, hits selinux avc
> 
> Libvirt's locking may prevent that for all I know, but I didn't look closely

Quite probably.

Comment 6 Richard W.M. Jones 2015-11-20 10:26:02 UTC
This seems to happen even more frequently in Rawhide.

Comment 7 Cole Robinson 2015-11-21 01:18:07 UTC
If you're running a debug kernel, it could exacerbate the race.

Comment 8 Cole Robinson 2016-01-15 19:32:48 UTC
Upstream fix:

commit 68acc701bd449481e3206723c25b18fcd3d261b7
Author: Jiri Denemark <jdenemar@redhat.com>
Date:   Fri Jan 15 10:55:58 2016 +0100

    security: Do not restore kernel and initrd labels
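The idea behind the commit is that libvirt simply stops restoring the SELinux labels of the kernel and initrd files on domain shutdown, so a domain started in parallel never sees the shared file's label flip back to user_home_t. A self-contained toy comparison of the old and new behaviour (a sketch of the concept, not libvirt's actual code):

```python
# Toy comparison: restoring the -kernel label on shutdown (old behaviour)
# versus leaving it alone (the fix). Not libvirt code.

def run_sequence(events, restore_kernel_label):
    label = "user_home_t"
    denials = []
    for vm, action in events:
        if action == "start":
            label = "svirt_image_t"            # label the file for qemu
        elif action == "open" and label == "user_home_t":
            denials.append(vm)                 # AVC denial on read
        elif action == "stop" and restore_kernel_label:
            label = "user_home_t"              # old behaviour only

    return denials

race = [("vm1", "start"), ("vm1", "open"),
        ("vm2", "start"), ("vm1", "stop"), ("vm2", "open")]

print(run_sequence(race, restore_kernel_label=True))   # old: ['vm2'] denied
print(run_sequence(race, restore_kernel_label=False))  # fixed: []
```

With restore skipped, the shutdown of VM1 no longer races with VM2's startup, at the cost of leaving the kernel file relabelled after the last domain exits.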

Comment 9 Richard W.M. Jones 2016-01-15 20:22:00 UTC
*** Bug 871196 has been marked as a duplicate of this bug. ***

Comment 10 Richard W.M. Jones 2016-01-15 20:24:36 UTC
I have tested this, so it's fine to close it once it goes
into Fedora.

For RHEL 7.3, there is bug 921135 tracking the same problem.

Comment 11 Fedora Update System 2016-01-21 17:51:50 UTC
libvirt-1.2.18.2-2.fc23 has been submitted as an update to Fedora 23. https://bodhi.fedoraproject.org/updates/FEDORA-2016-02dc87c44e

Comment 12 Fedora Update System 2016-01-22 04:55:56 UTC
libvirt-1.2.18.2-2.fc23 has been pushed to the Fedora 23 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-02dc87c44e

Comment 13 Fedora Update System 2016-01-24 03:29:18 UTC
libvirt-1.2.18.2-2.fc23 has been pushed to the Fedora 23 stable repository. If problems still persist, please make note of it in this bug report.

