Federico,
is /usr/share/vdsm/vdsm in the game?
Basically we label it as virtd_exec_t and we have
qemu_domtrans(virtd_t)
in RHEL6.
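(For readers unfamiliar with the macro: qemu_domtrans() is the selinux-policy interface that makes a caller domain transition into qemu_t when it executes a qemu binary. The rules below are an illustrative sketch of the usual domtrans pattern, not the verbatim expansion from qemu.if, which may differ across policy versions:)

```
# Sketch of what qemu_domtrans(virtd_t) roughly grants (illustrative,
# not the exact expansion from qemu.if):
allow virtd_t qemu_exec_t:file { getattr open read execute };
allow virtd_t qemu_t:process transition;
type_transition virtd_t qemu_exec_t:process qemu_t;
```

Without this call, qemu stays in the caller's virtd_t domain, which is what the ps -eZ output later in the thread shows.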
Comment 6 Federico Simoncelli 2013-09-11 22:12:49 UTC
(In reply to Miroslav Grepl from comment #5)
> Federico,
> is /usr/share/vdsm/vdsm in the game?
It shouldn't be. VDSM sends the request to start the VM to libvirt, but it's not involved in file labeling (and it shouldn't affect any other process context either).
Ok, I did a test build without
qemu_domtrans(virtd_t)
and of course now we see
# ps -eZ |grep qemu
system_u:system_r:virtd_t:s0-s0:c0.c1023 11966 ? 00:00:15 qemu-kvm
for a virtual machine.
Ok, I got it. The problem is
security_driver="none"
and if we have
qemu_domtrans(virtd_t)
we end in the qemu_t domain.
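(For reference, security_driver is a setting in /etc/libvirt/qemu.conf; the configuration under discussion looks like this fragment:)

```
# /etc/libvirt/qemu.conf (fragment)
# "none" disables libvirt's sVirt per-VM labeling; the qemu process
# context then depends solely on the policy's transition rules for
# virtd_t -- here, the qemu_domtrans(virtd_t) rule sends it to qemu_t.
security_driver = "none"
```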
Dan,
I believe we want to end up with this domain in RHEL6 (MLS).
I have been doing more testing. Basically I wanted to keep the transition and make qemu an unconfined domain, to make sure we won't break anything.
But yes,
I am going to remove it because it looks OK.
(In reply to Federico Simoncelli from comment #16)
It is. With the fix applied, security_driver="none" would be set, as intended, only in RHEV.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2013-1598.html
Description of problem:
Unable to start a QEMU process due to selinux permission errors.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-18.el6_4.10.x86_64
qemu-kvm-rhev-0.12.1.2-2.384.el6.x86_64
selinux-policy-3.7.19-213.el6.noarch

How reproducible:
100%

Steps to Reproduce:
1. Start a QEMU process using libvirt.

Actual results:
The qemu process dies.

Expected results:
The qemu process should start and run properly.

Additional info:

SELinux messages:
Sep 11 04:34:21 vm-rhev1 kernel: type=1400 audit(1378888461.700:66603): avc: denied { getattr } for pid=6342 comm="qemu-kvm" name="/" dev=0:1d ino=524313 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=filesystem
Sep 11 04:34:21 vm-rhev1 kernel: type=1400 audit(1378888461.701:66604): avc: denied { read } for pid=6342 comm="qemu-kvm" name="dm-26" dev=devtmpfs ino=50557 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
Sep 11 04:34:21 vm-rhev1 kernel: type=1400 audit(1378888461.701:66605): avc: denied { getattr } for pid=6342 comm="qemu-kvm" path="/dev/dm-26" dev=devtmpfs ino=50557 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file
Sep 11 04:34:21 vm-rhev1 kernel: type=1400 audit(1378888461.701:66606): avc: denied { read write } for pid=6342 comm="qemu-kvm" name="dm-26" dev=devtmpfs ino=50557 scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file

QEMU command line:
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name BlockVm1 -S -M rhel6.4.0 -cpu Conroe -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -uuid 56247ac3-b8c5-4763-b779-744a5b7a9a53 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6Server-6.4.0.4.el6_4,serial=F83A2D23-754C-4FB9-BCB5-801840A24575_52:54:00:a2:45:75,uuid=56247ac3-b8c5-4763-b779-744a5b7a9a53 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/BlockVm1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-09-11T08:36:02,driftfix=slew -no-shutdown -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/rhev/data-center/mnt/vm-rhsrv1.in1.bytenix.com:_srv_export1_nfs_iso1/fe5f6a89-bb6d-4e58-8b67-c8cf505fb3ac/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-netinst.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/var/run/vdsm/storage/d26915e8-9049-43a3-ba74-e403730875dc/12f9a99c-4a12-47ae-8cce-e0912eb96d24/0562485f-64f5-4972-b991-84d8da170410,if=none,id=drive-virtio-disk0,format=raw,serial=12f9a99c-4a12-47ae-8cce-e0912eb96d24,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/var/run/vdsm/storage/d26915e8-9049-43a3-ba74-e403730875dc/ebec629e-5ad5-4704-ab00-096c4df898e6/3812141a-5e41-4f98-ab07-4143160fdb7e,if=none,id=drive-virtio-disk1,format=qcow2,serial=ebec629e-5ad5-4704-ab00-096c4df898e6,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/56247ac3-b8c5-4763-b779-744a5b7a9a53.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/56247ac3-b8c5-4763-b779-744a5b7a9a53.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,addr=0,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7

Images Symlinks:
/var/run/vdsm/storage/d26915e8-9049-43a3-ba74-e403730875dc/12f9a99c-4a12-47ae-8cce-e0912eb96d24:
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:virt_var_run_t:s0 0562485f-64f5-4972-b991-84d8da170410 -> /dev/d26915e8-9049-43a3-ba74-e403730875dc/0562485f-64f5-4972-b991-84d8da170410

/var/run/vdsm/storage/d26915e8-9049-43a3-ba74-e403730875dc/ebec629e-5ad5-4704-ab00-096c4df898e6:
lrwxrwxrwx. vdsm kvm unconfined_u:object_r:virt_var_run_t:s0 3812141a-5e41-4f98-ab07-4143160fdb7e -> /dev/d26915e8-9049-43a3-ba74-e403730875dc/3812141a-5e41-4f98-ab07-4143160fdb7e
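The AVC records in the log follow a fixed "avc: denied { perms } for key=value ..." layout, so a short script can pull out the denied permissions and security contexts for triage. A minimal sketch; the parse_avc helper and its regex are my own illustration, not a VDSM, libvirt, or audit-tools API:

```python
import re

# Matches "avc: denied { perms... } for key=value key=value ..."
# as seen in the kernel audit records quoted above; real logs may
# carry additional fields, which the key=value scan picks up too.
AVC_RE = re.compile(r"avc:\s+denied\s+\{\s*(?P<perms>[^}]+?)\s*\}\s+for\s+(?P<rest>.*)")

def parse_avc(line):
    """Return a dict of AVC fields plus a 'perms' list, or None."""
    m = AVC_RE.search(line)
    if not m:
        return None
    fields = dict(kv.split("=", 1) for kv in m.group("rest").split() if "=" in kv)
    fields["perms"] = m.group("perms").split()
    return fields

line = ('type=1400 audit(1378888461.701:66604): avc: denied { read } '
        'for pid=6342 comm="qemu-kvm" name="dm-26" dev=devtmpfs ino=50557 '
        'scontext=system_u:system_r:qemu_t:s0-s0:c0.c1023 '
        'tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file')
info = parse_avc(line)
print(info["perms"], info["scontext"], info["tclass"])
# → ['read'] system_u:system_r:qemu_t:s0-s0:c0.c1023 blk_file
```

Grouping denials by (scontext, tcontext, tclass) this way makes it easy to see that all four records stem from qemu_t lacking access to nfs_t and fixed_disk_device_t objects.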