Bug 835936 - [selinux-policy] AVC when trying to start qemu-kvm domain (guest) on posix compliant file-system
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.4
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Michal Trunecka
URL:
Whiteboard:
Duplicates: 855287 (view as bug list)
Depends On:
Blocks: 867395
 
Reported: 2012-06-27 15:00 UTC by Haim
Modified: 2014-09-30 23:33 UTC (History)
14 users

Fixed In Version: selinux-policy-3.7.19-159.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 08:24:44 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0314 0 normal SHIPPED_LIVE selinux-policy bug fix and enhancement update 2013-02-20 20:35:01 UTC

Description Haim 2012-06-27 15:00:53 UTC
Description of problem:

Unable to start a VM when SELinux is enabled and the disk is located on a POSIX-compliant file system (Gluster).

qemu-command:

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name glusterVM -uuid 5844cfd3-2ffd-49fe-9315-76066af80172 -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6Server-6.3.0.2.el6,serial=38373035-3536-4247-3830-33333434394D_78:E7:D1:E4:8E:DA,uuid=5844cfd3-2ffd-49fe-9315-76066af80172 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/glusterVM.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2012-06-27T17:56:17,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/def8e8d2-9711-4adb-b86d-309051c7027a/748039c1-9e96-459c-809f-6590fe11a37b/images/46be20df-44c7-4ed5-b639-d6fe77d9fcb7/7bd78e6c-ad09-438e-9335-529344eabc18,if=none,id=drive-virtio-disk0,format=raw,serial=46be20df-44c7-4ed5-b639-d6fe77d9fcb7,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/glusterVM.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice 
port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/1
qemu-kvm: -drive file=/rhev/data-center/def8e8d2-9711-4adb-b86d-309051c7027a/748039c1-9e96-459c-809f-6590fe11a37b/images/46be20df-44c7-4ed5-b639-d6fe77d9fcb7/7bd78e6c-ad09-438e-9335-529344eabc18,if=none,id=drive-virtio-disk0,format=raw,serial=46be20df-44c7-4ed5-b639-d6fe77d9fcb7,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image /rhev/data-center/def8e8d2-9711-4adb-b86d-309051c7027a/748039c1-9e96-459c-809f-6590fe11a37b/images/46be20df-44c7-4ed5-b639-d6fe77d9fcb7/7bd78e6c-ad09-438e-9335-529344eabc18: Permission denied
2012-06-27 17:56:17.606+0000: shutting down

versions:

libselinux-utils-2.0.94-5.3.el6.x86_64
libvirt-python-0.9.10-21.el6.x86_64
selinux-policy-3.7.19-147.el6.noarch
vdsm-python-4.9.6-17.0.el6.noarch
qemu-kvm-rhev-tools-0.12.1.2-2.291.el6.x86_64
libvirt-0.9.10-21.el6.x86_64
libvirt-lock-sanlock-0.9.10-21.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.291.el6.x86_64
libvirt-client-0.9.10-21.el6.x86_64
vdsm-4.9.6-17.0.el6.x86_64
selinux-policy-targeted-3.7.19-147.el6.noarch
vdsm-cli-4.9.6-17.0.el6.noarch
libselinux-2.0.94-5.3.el6.x86_64
libselinux-python-2.0.94-5.3.el6.x86_64
qemu-kvm-rhev-debuginfo-0.12.1.2-2.291.el6.x86_64

security context of disk-link:

[root@nott-vds3 ~]# ls -Z /rhev/data-center/def8e8d2-9711-4adb-b86d-309051c7027a/748039c1-9e96-459c-809f-6590fe11a37b/images/46be20df-44c7-4ed5-b639-d6fe77d9fcb7/7bd78e6c-ad09-438e-9335-529344eabc18
-rw-rw----. vdsm kvm system_u:object_r:fusefs_t:s0    /rhev/data-center/def8e8d2-9711-4adb-b86d-309051c7027a/748039c1-9e96-459c-809f-6590fe11a37b/images/46be20df-44c7-4ed5-b639-d6fe77d9fcb7/7bd78e6c-ad09-438e-9335-529344eabc18

AVC from audit.log:

type=VIRT_RESOURCE msg=audit(1340819777.238:77626): user pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=cgroup reason=allow vm="glusterVM" uuid=5844cfd3-2ffd-49fe-9315-76066af80172 cgroup="/cgroup/devices/libvirt/qemu/glusterVM/" class=path path=/dev/hpet rdev=0A:E4 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=AVC msg=audit(1340819777.466:77627): avc:  denied  { write } for  pid=18212 comm="qemu-kvm" name="7bd78e6c-ad09-438e-9335-529344eabc18" dev=fuse ino=11242441530065887670 scontext=system_u:system_r:svirt_t:s0:c475,c923 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
type=SYSCALL msg=audit(1340819777.466:77627): arch=c000003e syscall=2 success=no exit=-13 a0=7febfb6a0860 a1=84002 a2=0 a3=48 items=0 ppid=1 pid=18212 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c475,c923 key=(null)
type=VIRT_RESOURCE msg=audit(1340819777.693:77628): user pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=disk reason=start vm="glusterVM" uuid=5844cfd3-2ffd-49fe-9315-76066af80172 old-disk="?" new-disk="/rhev/data-center/def8e8d2-9711-4adb-b86d-309051c7027a/748039c1-9e96-459c-809f-6590fe11a37b/images/46be20df-44c7-4ed5-b639-d6fe77d9fcb7/7bd78e6c-ad09-438e-9335-529344eabc18" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1340819777.693:77629): user pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=mem reason=start vm="glusterVM" uuid=5844cfd3-2ffd-49fe-9315-76066af80172 old-mem=0 new-mem=524288 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1340819777.693:77630): user pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=vcpu reason=start vm="glusterVM" uuid=5844cfd3-2ffd-49fe-9315-76066af80172 old-vcpu=0 new-vcpu=1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_CONTROL msg=audit(1340819777.693:77631): user pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm op=start reason=booted vm="glusterVM" uuid=5844cfd3-2ffd-49fe-9315-76066af80172 vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'
type=USER_CMD msg=audit(1340819779.397:77632): user pid=18248 uid=0 auid=0 ses=60 subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 msg='cwd="/root" cmd=2F7362696E2F73657276696365206B736D74756E656420726574756E65 terminal=? res=success'
type=CRED_ACQ msg=audit(1340819779.407:77633): user pid=18249 uid=0 auid=0 ses=60 subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
type=USER_START msg=audit(1340819779.408:77634): user pid=18249 uid=0 auid=0 ses=60 subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 msg='op=PAM:session_open acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
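The key record above is the AVC denial: qemu-kvm, confined as svirt_t, was denied write access to a file labeled fusefs_t. As a purely illustrative sketch (the `parse_avc` helper below is hypothetical, not part of any Red Hat tooling), the interesting fields of such a record can be pulled out in a few lines of Python:

```python
import re

# Match the denied permissions, the command, and the SELinux contexts
# in a raw audit AVC record (a single line from audit.log).
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\}.*?"
    r"comm=\"(?P<comm>[^\"]+)\".*?"
    r"scontext=(?P<scontext>\S+)\s+"
    r"tcontext=(?P<tcontext>\S+)\s+"
    r"tclass=(?P<tclass>\S+)"
)

def parse_avc(line: str) -> dict:
    """Return the fields of one AVC denial record as a dict."""
    m = AVC_RE.search(line)
    if m is None:
        raise ValueError("not an AVC denial record")
    d = m.groupdict()
    d["perms"] = d["perms"].split()
    return d

record = (
    'type=AVC msg=audit(1340819777.466:77627): avc:  denied  { write } '
    'for  pid=18212 comm="qemu-kvm" name="7bd78e6c-ad09-438e-9335-529344eabc18" '
    'dev=fuse scontext=system_u:system_r:svirt_t:s0:c475,c923 '
    'tcontext=system_u:object_r:fusefs_t:s0 tclass=file'
)

info = parse_avc(record)
# The source type, target type, and denied permission name the problem:
print(info["perms"], info["scontext"].split(":")[2], info["tcontext"].split(":")[2])
# → ['write'] svirt_t fusefs_t
```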

Comment 2 Milos Malik 2012-06-28 08:20:36 UTC
Could you execute the following command and retest your scenario?

# setsebool -P virt_use_fusefs on

Comment 3 Milos Malik 2012-06-28 08:23:22 UTC
I'm sorry, don't do that. The write permission is missing from the rules that boolean enables, so the result will be the same.

# sesearch -s svirt_t -t fusefs_t -c file --allow -C
Found 1 semantic av rules:
DT allow svirt_t fusefs_t : file { ioctl read getattr lock open } ; [ virt_use_fusefs ]

#
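In other words, the rule behind the virt_use_fusefs boolean grants read-side access only. A trivial check (illustrative only, not from the report) of the gap between what the rule grants and what the denial shows qemu-kvm needs:

```python
# Permissions the virt_use_fusefs boolean enables for svirt_t on
# fusefs_t files, taken from the sesearch output above.
granted = {"ioctl", "read", "getattr", "lock", "open"}

# Permission the AVC record shows qemu-kvm was refused.
needed = {"write"}

missing = needed - granted
print(sorted(missing))  # → ['write']: the boolean cannot satisfy this access
```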

Comment 4 Miroslav Grepl 2012-06-28 08:46:56 UTC
We need to add

fs_rw_inherited_noxattr_fs_files(virt_domain)

This can be fixed by the following local policy module:

# cat myvirt.te
policy_module(myvirt, 1.0)

require {
	attribute noxattrfs;
	attribute virt_domain;
}

allow virt_domain noxattrfs:file rw_inherited_file_perms;

and by compiling and installing it:

# make -f /usr/share/selinux/devel/Makefile myvirt.pp
# semodule -i myvirt.pp

Comment 5 Miroslav Grepl 2012-08-08 08:11:08 UTC
Fixed in selinux-policy-3.7.19-159.el6

Comment 7 Miroslav Grepl 2012-10-09 12:36:24 UTC
*** Bug 855287 has been marked as a duplicate of this bug. ***

Comment 12 errata-xmlrpc 2013-02-21 08:24:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0314.html

