Description of problem:

Basic tests fail with:

libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: qemu-system-x86_64: -drive file=/home/rjones/d/libguestfs/tmp/libguestfso7pUI1/snapshot1,if=none,id=drive-scsi0-0-1-0,format=qcow2,cache=unsafe: could not open disk image /home/rjones/d/libguestfs/tmp/libguestfso7pUI1/snapshot1: Permission denied

Here are the AVCs:

type=AVC msg=audit(1358447930.742:398): avc: denied { open } for pid=8061 comm="qemu-system-x86" path="/home/rjones/d/libguestfs/tmp/.guestfs-1000/root.7923" dev="dm-1" ino=548896 scontext=unconfined_u:system_r:svirt_t:s0:c221,c969 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=file
type=SYSCALL msg=audit(1358447930.742:398): arch=c000003e syscall=2 success=no exit=-13 a0=7fffbe9afaf0 a1=80800 a2=0 a3=1 items=0 ppid=1 pid=8061 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 ses=1 tty=(none) comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" subj=unconfined_u:system_r:svirt_t:s0:c221,c969 key=(null)
type=AVC msg=audit(1358447930.747:399): avc: denied { open } for pid=8061 comm="qemu-system-x86" path="/home/rjones/d/libguestfs/tmp/.guestfs-1000/root.7923" dev="dm-1" ino=548896 scontext=unconfined_u:system_r:svirt_t:s0:c221,c969 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=file
type=SYSCALL msg=audit(1358447930.747:399): arch=c000003e syscall=2 success=no exit=-13 a0=7fffbe9afaf0 a1=80800 a2=0 a3=1 items=0 ppid=1 pid=8061 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 ses=1 tty=(none) comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" subj=unconfined_u:system_r:svirt_t:s0:c221,c969 key=(null)
type=AVC msg=audit(1358447930.747:400): avc: denied { open } for pid=8061 comm="qemu-system-x86" path="/home/rjones/d/libguestfs/tmp/.guestfs-1000/root.7923" dev="dm-1" ino=548896 scontext=unconfined_u:system_r:svirt_t:s0:c221,c969 tcontext=unconfined_u:object_r:user_tmp_t:s0 tclass=file
type=SYSCALL msg=audit(1358447930.747:400): arch=c000003e syscall=2 success=no exit=-13 a0=7fffbe9afaf0 a1=80000 a2=0 a3=48 items=0 ppid=1 pid=8061 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 ses=1 tty=(none) comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" subj=unconfined_u:system_r:svirt_t:s0:c221,c969 key=(null)
type=USER_CMD msg=audit(1358447933.951:401): pid=8089 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/rjones/d/libguestfs" cmd=6C657373202F7661722F6C6F672F61756469742F61756469742E6C6F67 terminal=pts/0 res=success'

Version-Release number of selected component (if applicable):
selinux-policy-3.12.1-4.fc19.noarch

How reproducible:
100%

Steps to Reproduce:
1. Run libguestfs-test-tool with SELinux enabled.
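As an aside, the cmd= field in the USER_CMD audit record above is hex-encoded ASCII; it can be decoded with a couple of lines of standard Python to see what was actually run:

```python
# Decode the hex-encoded cmd= field from the USER_CMD audit record above.
cmd_hex = "6C657373202F7661722F6C6F672F61756469742F61756469742E6C6F67"
cmd = bytes.fromhex(cmd_hex).decode("ascii")
print(cmd)  # -> less /var/log/audit/audit.log
```

So that record is just the user reading the audit log, not part of the failure.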
audit2allow suggests:

#============= svirt_t ==============
allow svirt_t user_tmp_t:file open;

I'm pretty sure we fixed this already in another bug.
Perhaps bug 860235 or bug 871749?
This is the same issue that we have in bug #871749.
What is the XML used for this guest?
Created attachment 682316 [details]
Output of libguestfs-test-tool

Attached is the output of libguestfs-test-tool, which contains the XML along with other details that may be relevant.
XML was a bit hard to read. Here it is formatted properly:

<?xml version="1.0"?>
<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>guestfs-tyijp20j0jccelb4</name>
  <memory unit="MiB">500</memory>
  <currentMemory unit="MiB">500</currentMemory>
  <vcpu>1</vcpu>
  <clock offset="utc"/>
  <os>
    <type>hvm</type>
    <kernel>/home/rjones/d/libguestfs/tmp/.guestfs-1000/kernel.21165</kernel>
    <initrd>/home/rjones/d/libguestfs/tmp/.guestfs-1000/initrd.21165</initrd>
    <cmdline>panic=1 console=ttyS0 udevtimeout=600 no_timer_check lpj=2893428 acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>
  </os>
  <on_reboot>destroy</on_reboot>
  <devices>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk device="disk" type="file">
      <source file="/tmp/libguestfs-test-tool-sda-4waQen"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" type="raw" cache="none"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <disk type="file" device="disk">
      <source file="/home/rjones/d/libguestfs/tmp/libguestfsyZxbFW/snapshot1"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
      <shareable/>
    </disk>
    <serial type="unix">
      <source mode="connect" path="/home/rjones/d/libguestfs/tmp/libguestfsyZxbFW/console.sock"/>
      <target port="0"/>
    </serial>
    <channel type="unix">
      <source mode="connect" path="/home/rjones/d/libguestfs/tmp/libguestfsyZxbFW/guestfsd.sock"/>
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
  </devices>
  <qemu:commandline>
    <qemu:env name="TMPDIR" value="/home/rjones/d/libguestfs/tmp"/>
  </qemu:commandline>
</domain>
OK, here is the disk path from the XML:

<disk type="file" device="disk">
  <source file="/home/rjones/d/libguestfs/tmp/libguestfsyZxbFW/snapshot1"/>
  <target dev="sdb" bus="scsi"/>
  <driver name="qemu" type="qcow2" cache="unsafe"/>
  <address type="drive" controller="0" bus="0" target="1" unit="0"/>
  <shareable/>
</disk>

Now, I'm assuming that qcow2 file has a backing file of

  /home/rjones/d/libguestfs/tmp/.guestfs-1000/root.7923

and when QEMU says

  could not open disk image /home/rjones/d/libguestfs/tmp/libguestfso7pUI1/snapshot1: Permission denied

it probably actually means

  could not open disk image /home/rjones/d/libguestfs/tmp/.guestfs-1000/root.7923

The strange thing is that libvirt relabels qcow2 backing files, so that shouldn't have the 'tmp_t' context. I'd be interested to see the libvirt log output with log_filters="1:security 1:qemu" set while starting this guest, to verify what libvirt has labelled.
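The backing-file hypothesis is easy to verify with `qemu-img info snapshot1`. For illustration, the backing-file name can also be read straight out of the fixed fields of the qcow2 header (a minimal sketch; the helper name is mine, and real code should just use qemu-img):

```python
import struct

def qcow2_backing_file(path):
    """Return the backing-file path recorded in a qcow2 header, or None.

    Reads only the fixed header fields defined by the qcow2 spec:
    magic "QFI\\xfb", version (u32), backing_file_offset (u64),
    backing_file_size (u32) -- all big-endian.
    """
    with open(path, "rb") as f:
        magic, version, bf_off, bf_size = struct.unpack(">4sIQI", f.read(20))
        if magic != b"QFI\xfb":
            raise ValueError("%s: not a qcow2 image" % path)
        if bf_size == 0:
            return None  # standalone image, no backing file
        f.seek(bf_off)
        return f.read(bf_size).decode("utf-8")
```

If that returns the root.7923 appliance path, then the Permission denied from qemu is indeed about the backing store rather than the snapshot itself.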
FYI here are the labels on the files and directories:

$ ls -alZR tmp
tmp:
drwxrwxr-x. rjones rjones system_u:object_r:tmp_t:s0 .
drwxrwxr-x. rjones rjones unconfined_u:object_r:user_home_t:s0 ..
-rw-rw-r--. rjones rjones unconfined_u:object_r:user_home_t:s0 .gitignore
drwxr-xr-x. rjones rjones unconfined_u:object_r:user_tmp_t:s0 .guestfs-1000

tmp/.guestfs-1000:
drwxr-xr-x. rjones rjones unconfined_u:object_r:user_tmp_t:s0 .
drwxrwxr-x. rjones rjones system_u:object_r:tmp_t:s0 ..
-rwxr-xr-x. rjones rjones unconfined_u:object_r:user_tmp_t:s0 checksum
-rw-r--r--. rjones rjones unconfined_u:object_r:user_home_t:s0 initrd
-rw-r--r--. rjones rjones unconfined_u:object_r:user_home_t:s0 initrd.2130
-rw-r--r--. rjones rjones unconfined_u:object_r:user_home_t:s0 kernel
-rw-r--r--. rjones rjones unconfined_u:object_r:user_home_t:s0 kernel.2130
-rw-r--r--. rjones rjones unconfined_u:object_r:user_tmp_t:s0 root
-rw-r--r--. rjones rjones unconfined_u:object_r:user_tmp_t:s0 root.2130
Created attachment 682351 [details] libvirtd.log (xz compressed)
Looks like the same problem occurs in libvirt.git:

*stdin*:4: libguestfs: error: could not create appliance through libvirt: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/home/rjones/d/libguestfs/tmp/libguestfs4zc4TQ/snapshot2,if=none,id=drive-scsi0-0-0-0,format=qcow2: could not open disk image /home/rjones/d/libguestfs/tmp/libguestfs4zc4TQ/snapshot2: Permission denied
2013-01-22 18:07:42.101+0000: shutting down [code=1 domain=10]
The log shows the following info:

virSecuritySELinuxSetFileconHelper:794 : Setting SELinux context on '/tmp/libguestfs-test-tool-sda-X83aWF' to 'unconfined_u:object_r:svirt_image_t:s0:c264,c810'
2013-01-18 11:20:14.038+0000: 4212: info : virSecuritySELinuxSetFileconHelper:794 : Setting SELinux context on '/home/rjones/d/libguestfs/tmp/libguestfsdS2xGQ/snapshot1' to 'system_u:object_r:svirt_image_t:s0'
2013-01-18 11:20:14.038+0000: 4212: info : virSecuritySELinuxSetFileconHelper:794 : Setting SELinux context on '/home/rjones/d/libguestfs/tmp/libguestfsdS2xGQ/console.sock' to 'unconfined_u:object_r:svirt_image_t:s0:c264,c810'
2013-01-18 11:20:14.038+0000: 4212: info : virSecuritySELinuxSetFileconHelper:794 : Setting SELinux context on '/home/rjones/d/libguestfs/tmp/libguestfsdS2xGQ/guestfsd.sock' to 'unconfined_u:object_r:svirt_image_t:s0:c264,c810'
2013-01-18 11:20:14.038+0000: 4212: info : virSecuritySELinuxSetFileconHelper:794 : Setting SELinux context on '/home/rjones/d/libguestfs/tmp/.guestfs-1000/kernel.4157' to 'system_u:object_r:virt_content_t:s0'
2013-01-18 11:20:14.039+0000: 4212: info : virSecuritySELinuxSetFileconHelper:794 : Setting SELinux context on '/home/rjones/d/libguestfs/tmp/.guestfs-1000/initrd.4157' to 'system_u:object_r:virt_content_t:s0'

Notice that it doesn't set the label on the backing store. Clearly this is wrong.
I think the regression is caused by this change:

commit 38c4a9cc40476bd2e598a1876b1254e88a09c760
Author: Eric Blake <eblake>
Date:   Tue Oct 9 16:08:14 2012 -0600

    storage: use cache to walk backing chain

    We used to walk the backing file chain at least twice per disk, once
    to set up cgroup device whitelisting, and once to set up security
    labeling. Rather than walk the chain every iteration, which possibly
    includes calls to fork() in order to open root-squashed NFS files,
    we can exploit the cache of the previous patch.

    * src/conf/domain_conf.h (virDomainDiskDefForeachPath): Alter signature.
    * src/conf/domain_conf.c (virDomainDiskDefForeachPath): Require caller
      to supply backing chain via disk, if recursion is desired.
    * src/security/security_dac.c (virSecurityDACSetSecurityImageLabel):
      Adjust caller.
    * src/security/security_selinux.c
      (virSecuritySELinuxSetSecurityImageLabel): Likewise.
    * src/security/virt-aa-helper.c (get_files): Likewise.
    * src/qemu/qemu_cgroup.c (qemuSetupDiskCgroup)
      (qemuTeardownDiskCgroup): Likewise.
      (qemuSetupCgroup): Pre-populate chain.

Previously virDomainDiskDefForeachPath would examine disk files to identify the backing stores. Now that method relies on disk->backingChain having been pre-populated.

The above change alters qemuSetupCgroup to invoke qemuDomainDetermineDiskChain, but this code will never run if cgroups are not active. This is the case for session libvirtd, of course. I think the call to qemuDomainDetermineDiskChain needs to be moved out of the cgroups code and into qemuProcessStart instead.
(In reply to comment #12)
> I think the regression is caused by this change
>
> commit 38c4a9cc40476bd2e598a1876b1254e88a09c760
> Author: Eric Blake <eblake>
> Date: Tue Oct 9 16:08:14 2012 -0600
>
>     storage: use cache to walk backing chain
>
> Previously virDomainDiskDefForeachPath would examine disk files to
> identify the backing stores. Now that method relies on
> disk->backingChain having been pre-populated.
>
> The above change alters qemuSetupCgroup to invoke
> qemuDomainDetermineDiskChain, but this code will never run if cgroups
> are not active. This is the case for session libvirtd of course. I
> think the call to qemuDomainDetermineDiskChain needs to be moved out
> of the cgroups code, and into qemuProcessStart instead

Aha - that definitely explains it, and moving the pre-populate out of cgroup-specific stuff should indeed help.
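For anyone following along, the shape of the regression can be sketched in miniature. This is a hypothetical Python model of the control flow, not libvirt's actual C code; all names and paths are illustrative stand-ins for the functions named above:

```python
# Toy model of the regression: the backing-chain cache is populated only
# on the cgroup path, but the SELinux labeling path reads the same cache.

class Disk:
    def __init__(self, path, backing=None):
        self.path = path
        self.backing = backing        # next image in the qcow2 chain
        self.backing_chain = None     # cache; someone must populate it

def determine_disk_chain(disk):
    """Stand-in for qemuDomainDetermineDiskChain: walk and cache the chain."""
    chain, d = [], disk
    while d is not None:
        chain.append(d.path)
        d = d.backing
    disk.backing_chain = chain

def setup_cgroup(disk, cgroups_active):
    """Stand-in for qemuSetupCgroup: does nothing when cgroups are inactive."""
    if not cgroups_active:
        return                        # session libvirtd takes this path
    determine_disk_chain(disk)

def label_chain(disk):
    """Stand-in for the SELinux driver: labels whatever the cache holds."""
    return list(disk.backing_chain) if disk.backing_chain else [disk.path]

def process_start_buggy(disk, cgroups_active):
    setup_cgroup(disk, cgroups_active)
    return label_chain(disk)          # backing file missed without cgroups

def process_start_fixed(disk, cgroups_active):
    determine_disk_chain(disk)        # hoisted: cache is now unconditional
    setup_cgroup(disk, cgroups_active)
    return label_chain(disk)
```

With cgroups inactive, the buggy flow labels only the top image, so qemu hits Permission denied on the unlabelled backing file; hoisting the chain-determination call so it always runs labels the whole chain.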
Proposed patch: https://www.redhat.com/archives/libvir-list/2013-February/msg01174.html
Fix will appear in 1.0.3 at end of Feb; is that soon enough, or do we need a rawhide 1.0.2-3 build with this backported?

commit 82d5fe543720da6d83c1d6bfa1c347d7d9fda278
Author: Eric Blake <eblake>
Date:   Wed Feb 20 15:34:48 2013 -0700

    qemu: check backing chains even when cgroup is omitted

    https://bugzilla.redhat.com/show_bug.cgi?id=896685 points out a
    regression caused by commit 38c4a9c - libvirt only labels the backing
    chain if the backing chain cache is populated, but the code to
    populate the cache was only conditionally performed if cgroup
    labeling was necessary.

    * src/qemu/qemu_cgroup.c (qemuSetupCgroup): Hoist cache setup...
    * src/qemu/qemu_process.c (qemuProcessStart): ...earlier into caller,
      where it is now unconditional.
Eric, I will do the backport to Rawhide shortly.