Bug 967269 - Failed to run VM on host with SELinux running in Enforcing mode
Summary: Failed to run VM on host with SELinux running in Enforcing mode
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: 6.5
Assignee: Miroslav Grepl
QA Contact: vvyazmin@redhat.com
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-05-26 07:51 UTC by vvyazmin@redhat.com
Modified: 2018-12-02 15:54 UTC
CC List: 17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-09 16:22:02 UTC
Target Upstream Version:
Embargoed:


Attachments
## Logs rhevm, vdsm, SELinux (1.69 MB, application/x-gzip)
2013-05-26 07:51 UTC, vvyazmin@redhat.com
vm XML file (5.09 KB, text/plain)
2013-05-27 06:19 UTC, yanbing du

Description vvyazmin@redhat.com 2013-05-26 07:51:37 UTC
Created attachment 753255 [details]
## Logs rhevm, vdsm, SELinux

Description of problem:
Failed to run a VM on a host with SELinux running in Enforcing mode.

Version-Release number of selected component (if applicable):
RHEVM 3.2 - SF17.1 environment:

RHEVM: rhevm-3.2.0-11.28.el6ev.noarch
VDSM: vdsm-4.10.2-21.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-18.el6_4.5.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.3.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a VM and run it on a host with SELinux in Enforcing mode
  
Actual results:
The VM fails to run (qemu-kvm cannot open the disk image: Permission denied).

Expected results:
The VM runs successfully with SELinux in Enforcing mode.

Workaround (on the host):
1. setenforce 0
2. Run the VM again
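
A minimal sketch of the workaround, assuming shell access to the host; restoring Enforcing mode afterwards is an added precaution, not part of the original steps:

# getenforce              # check the current mode
Enforcing
# setenforce 0            # switch to Permissive temporarily
(start the VM again from RHEV-M)
# setenforce 1            # restore Enforcing mode once done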

Additional info:

/var/log/ovirt-engine/engine.log

/var/log/vdsm/vdsm.log
Thread-590::DEBUG::2013-05-26 07:04:26,325::libvirtconnection::128::vds::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647,if=none,id=drive-virtio-disk0,format=qcow2,serial=3c77908e-18a7-40a1-a8aa-1e038ac1bc30,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647: Permission denied

Thread-590::DEBUG::2013-05-26 07:04:26,325::vm::678::vm.Vm::(_startUnderlyingVm) vmId=`5cf3399b-19d7-4a6b-98f2-ff5aca420653`::_ongoingCreations released
Thread-590::ERROR::2013-05-26 07:04:26,326::vm::704::vm.Vm::(_startUnderlyingVm) vmId=`5cf3399b-19d7-4a6b-98f2-ff5aca420653`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 664, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1535, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 104, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2645, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647,if=none,id=drive-virtio-disk0,format=qcow2,serial=3c77908e-18a7-40a1-a8aa-1e038ac1bc30,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647: Permission denied

Thread-590::DEBUG::2013-05-26 07:04:26,333::vm::1092::vm.Vm::(setDownStatus) vmId=`5cf3399b-19d7-4a6b-98f2-ff5aca420653`::Changed state to Down: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647,if=none,id=drive-virtio-disk0,format=qcow2,serial=3c77908e-18a7-40a1-a8aa-1e038ac1bc30,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/8d71315a-1f72-4697-90c4-014045f2d1cf/images/3c77908e-18a7-40a1-a8aa-1e038ac1bc30/9126a647-bb92-43b9-957f-ea6ebf7c4647: Permission denied

Comment 2 yanbing du 2013-05-27 06:18:11 UTC
Hi,
  I'm trying to reproduce this bug, but the VM I created (added from the web UI; the VM XML is attached) runs successfully. Can you give more info about how to reproduce it? Thanks!

My environment:
# getenforce 
Enforcing
# rpm -q libvirt
libvirt-0.10.2-18.el6_4.5.x86_64
# rpm -q vdsm
vdsm-4.10.2-21.0.el6ev.x86_64
# rpm -q qemu-kvm-rhev
qemu-kvm-rhev-0.12.1.2-2.355.el6_4.3.x86_64

Comment 3 yanbing du 2013-05-27 06:19:03 UTC
Created attachment 753483 [details]
vm XML file

Comment 4 Martin Kletzander 2013-05-31 14:44:35 UTC
Moving to vdsm as libvirt is not responsible for correct image labeling in RHEV scenarios.

Comment 9 Federico Simoncelli 2013-07-09 07:51:00 UTC
It looks like the SELinux policy for libvirt does not allow reading symlinks:

$ audit2allow -i audit_tigris02.log

#============= svirt_t ==============
allow svirt_t file_t:lnk_file read;

Moving to selinux-policy.
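
As a hedged aside, the same denials could be turned into a loadable local module while the labeling issue is investigated (the module name svirt_lnk_local is illustrative only; the actual resolution discussed below is correct labeling, not a policy exception):

$ audit2allow -M svirt_lnk_local -i audit_tigris02.log
# semodule -i svirt_lnk_local.pp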

Comment 11 Miroslav Grepl 2013-07-09 08:30:19 UTC
The problem is the "file_t" labeling, which means the file has no SELinux label.

Could you try to re-mount

/rhev/data-center
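
A minimal sketch of relabeling rather than re-mounting, assuming the default file-context rules cover the path (matchpathcon prints the expected context, restorecon applies it recursively):

# matchpathcon /rhev/data-center
# restorecon -Rv /rhev/data-center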

Comment 12 Federico Simoncelli 2013-07-09 12:58:10 UTC
(In reply to Miroslav Grepl from comment #11)
> The problem is with "file_t" labeling which means no SELinux label.
> 
> Could you try to re-mount
> 
> /rhev/data-center

/rhev/data-center is not a mount. Anyway, I see your point; it could have been that /rhev/data-center needed relabeling (everything inside automatically gets the appropriate labeling in connectStoragePool when the links are re-created).

Comment 13 Daniel Walsh 2013-07-09 16:22:02 UTC
Yes, this is SELinux blocking svirt_t from reading a symlink without a label.
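
A quick way to confirm the unlabeled symlink on the host, assuming the paths from the log above; ls -Z prints the SELinux context, and file_t indicates a missing label:

# ls -lZ /rhev/data-center/
# ls -lZ /rhev/data-center/34e6f042-d850-4835-9639-d0b8e3ab3f56/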

