Bug 967269 - Failed to run VM on Host with SELinux running in Enforcing mode
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.5
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: 6.5
Assigned To: Miroslav Grepl
QA Contact: vvyazmin@redhat.com
Whiteboard: storage
Keywords: Regression
Depends On:
Blocks:
Reported: 2013-05-26 03:51 EDT by vvyazmin@redhat.com
Modified: 2013-07-09 12:22 EDT
CC: 17 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-09 12:22:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
## Logs rhevm, vdsm, SELinux (1.69 MB, application/x-gzip)
2013-05-26 03:51 EDT, vvyazmin@redhat.com
vm XML file (5.09 KB, text/plain)
2013-05-27 02:19 EDT, yanbing du

Description vvyazmin@redhat.com 2013-05-26 03:51:37 EDT
Created attachment 753255 [details]
## Logs rhevm, vdsm, SELinux

Description of problem:
Failed to run a VM on a host with SELinux running in Enforcing mode.

Version-Release number of selected component (if applicable):
RHEVM 3.2 - SF17.1 environment:

RHEVM: rhevm-3.2.0-11.28.el6ev.noarch
VDSM: vdsm-4.10.2-21.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-18.el6_4.5.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.3.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a VM and run it on a host with SELinux in Enforcing mode.
  
Actual results:
The VM fails to run.

Expected results:
The VM runs successfully with SELinux in Enforcing mode.

Workaround:
On the host:
1. setenforce 0
2. Run the VM again
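
A minimal sketch of the workaround as run on the host; note that setenforce 0 switches the whole host to Permissive mode and does not survive a reboot:

# setenforce 0
# getenforce
Permissive

Then run the VM again.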

Additional info:

/var/log/ovirt-engine/engine.log

/var/log/vdsm/vdsm.log
Thread-590::DEBUG::2013-05-26 07:04:26,325::libvirtconnection::128::vds::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647,if=none,id=drive-virtio-disk0,format=qcow2,serial=3c77908e-18a7-40a1-a8aa-1e038ac1bc30,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647: Permission denied

Thread-590::DEBUG::2013-05-26 07:04:26,325::vm::678::vm.Vm::(_startUnderlyingVm) vmId=`5cf3399b-19d7-4a6b-98f2-ff5aca420653`::_ongoingCreations released
Thread-590::ERROR::2013-05-26 07:04:26,326::vm::704::vm.Vm::(_startUnderlyingVm) vmId=`5cf3399b-19d7-4a6b-98f2-ff5aca420653`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 664, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1535, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 104, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2645, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647,if=none,id=drive-virtio-disk0,format=qcow2,serial=3c77908e-18a7-40a1-a8aa-1e038ac1bc30,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647: Permission denied

Thread-590::DEBUG::2013-05-26 07:04:26,333::vm::1092::vm.Vm::(setDownStatus) vmId=`5cf3399b-19d7-4a6b-98f2-ff5aca420653`::Changed state to Down: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647,if=none,id=drive-virtio-disk0,format=qcow2,serial=3c77908e-18a7-40a1-a8aa-1e038ac1bc30,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647: Permission denied
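
A hedged way to confirm the denial on the host is to check the SELinux label on the image path from the log above and to search the recent AVC records (this assumes auditd is running):

# ls -lZ /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/
# ausearch -m avc -ts recent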
Comment 2 yanbing du 2013-05-27 02:18:11 EDT
Hi,
  I'm trying to reproduce this bug, but the VM I created (just added from the web UI; the VM XML file is attached) runs successfully. Can you give more info about how to reproduce it? Thanks!

My environment:
# getenforce 
Enforcing
# rpm -q libvirt
libvirt-0.10.2-18.el6_4.5.x86_64
# rpm -q vdsm
vdsm-4.10.2-21.0.el6ev.x86_64
# rpm -q qemu-kvm-rhev
qemu-kvm-rhev-0.12.1.2-2.355.el6_4.3.x86_64
Comment 3 yanbing du 2013-05-27 02:19:03 EDT
Created attachment 753483 [details]
vm XML file
Comment 4 Martin Kletzander 2013-05-31 10:44:35 EDT
Moving to vdsm as libvirt is not responsible for correct image labeling in RHEV scenarios.
Comment 9 Federico Simoncelli 2013-07-09 03:51:00 EDT
It looks like the SELinux policy for libvirt is not allowing symlinks to be read:

$ audit2allow -i audit_tigris02.log

#============= svirt_t ==============
allow svirt_t file_t:lnk_file read;

Moving to selinux-policy.
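
(For reference, the denial above could be packaged into a local policy module as a stopgap; a sketch with an illustrative module name, though the comments below show the real issue is missing labels rather than policy:)

# audit2allow -i audit_tigris02.log -M svirt_lnk_local
# semodule -i svirt_lnk_local.pp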
Comment 11 Miroslav Grepl 2013-07-09 04:30:19 EDT
The problem is with the "file_t" labeling, which means the files have no SELinux label.

Could you try to re-mount

/rhev/data-center
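
(A relabel would have a similar effect; a sketch, assuming the default file contexts apply under this path:)

# restorecon -Rv /rhev/data-center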
Comment 12 Federico Simoncelli 2013-07-09 08:58:10 EDT
(In reply to Miroslav Grepl from comment #11)
> The problem is with the "file_t" labeling, which means the files have no SELinux label.
> 
> Could you try to re-mount
> 
> /rhev/data-center

/rhev/data-center is not a mount; anyway, I see your point. It could be that /rhev/data-center needed relabeling (everything inside it automatically gets the appropriate labeling on connectStoragePool, when the links are re-created).
Comment 13 Daniel Walsh 2013-07-09 12:22:02 EDT
Yes, this is SELinux blocking svirt_t from reading a symlink without a label.
