Bug 1191802 - security labels are changed for virtlocked disks
Summary: security labels are changed for virtlocked disks
Keywords:
Status: CLOSED DUPLICATE of bug 547546
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-12 02:52 UTC by Yang Yang
Modified: 2016-03-14 01:55 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-11 09:28:45 UTC
Target Upstream Version:
Embargoed:


Attachments
/var/log/libvirt/qemu/vm3.log (14.85 KB, text/plain)
2015-02-12 02:53 UTC, Yang Yang

Description Yang Yang 2015-02-12 02:52:12 UTC
Description of problem:
With virtlockd enabled, prepare 2 VMs pointing at the same disk in non-shareable mode. Start the 1st VM, then start the 2nd VM. The 1st guest OS then fails to boot, because SELinux prevents /usr/libexec/qemu-kvm from read access on the file /var/lib/libvirt/images/vm1.raw.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-16.el7.x86_64
selinux-policy-3.13.1-23.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. # getenforce
Enforcing

# grep lock_ /etc/libvirt/qemu.conf
lock_manager = "lockd"

# vim /etc/libvirt/qemu-lockd.conf
auto_disk_leases = 1
require_lease_for_disks = 1
file_lockspace_dir = "/var/lib/libvirt/lockd/files"

# systemctl restart libvirtd
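(A quick sanity check, assuming the RHEL 7 systemd units: virtlockd is socket-activated, so the service may show as inactive until the first domain starts.)
# systemctl status virtlockd.socket virtlockd.service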

2. Prepare 2 VMs with the following disk XML:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/vm1.raw'/>
  <backingStore/>
  <target dev='hda' bus='ide'/>
  <alias name='ide0-0-0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
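(To confirm both domains really reference the same image, virsh dumpxml can be used; the domain names below follow the report:)
# virsh dumpxml vm3 | grep 'source file'
# virsh dumpxml vm5 | grep 'source file'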

3. Start the 2 VMs
# virsh start vm3
Domain vm3 started

# virsh start vm5
error: Failed to start domain vm5
error: resource busy Lockspace resource '19c1987e09bfe9af2a7f2756b19460f5750e147333b2e36854479915ce44a19c' is locked

4. Check the 1st guest
The OS failed to boot with the following error:
error: disk 'hd0,msdos1' not found

5. Destroy and restart the 1st VM
# virsh destroy vm3; virsh start vm3
Domain vm3 destroyed

Domain vm3 started

The OS booted successfully.

6. Destroy the 1st VM, disable SELinux, then repeat step 3
# virsh destroy vm3
Domain vm3 destroyed

# getenforce
Permissive

Start the 2 VMs:
# virsh start vm3
Domain vm3 started

# virsh start vm5
error: Failed to start domain vm5
error: resource busy Lockspace resource '19c1987e09bfe9af2a7f2756b19460f5750e147333b2e36854479915ce44a19c' is locked

Check the 1st guest OS
The OS boots successfully.
 
Actual results:
Starting the 2nd VM causes the 1st VM's boot to fail. When only the 1st VM is started, its OS boots successfully. With SELinux disabled, starting both VMs still leaves the 1st guest OS booting successfully.

Expected results:
Starting the 2nd VM should not prevent the 1st guest from booting.

Additional info:

# tail -f /var/log/messages
Feb 12 10:21:02 rhel7_test setroubleshoot: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file /var/lib/libvirt/images/vm1.raw. For complete SELinux messages. run sealert -l 8021808f-575f-4ee9-883f-ade9da239f6f
Feb 12 10:21:02 rhel7_test python: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file /var/lib/libvirt/images/vm1.raw.

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that qemu-kvm should be allowed read access on the vm1.raw file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep qemu-kvm /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

Feb 12 10:21:08 rhel7_test dbus-daemon: 'list' object has no attribute 'split'

# cat /var/log/audit/audit.log

type=AVC msg=audit(1423707657.848:14835): avc:  denied  { read } for  pid=15833 comm="qemu-kvm" path="/var/lib/libvirt/images/vm1.raw" dev="sda3" ino=269272577 scontext=system_u:system_r:svirt_t:s0:c715,c981 tcontext=system_u:object_r:virt_image_t:s0 tclass=file
type=SYSCALL msg=audit(1423707657.848:14835): arch=c000003e syscall=17 success=no exit=-13 a0=11 a1=7fabc0eef800 a2=200 a3=0 items=0 ppid=1 pid=15833 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c715,c981 key=(null)
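(To translate the AVC record above into a policy explanation, the matching records can be piped through audit2why from policycoreutils, mirroring the grep used by the setroubleshoot suggestion; a general diagnostic, not a step from the original report:)
# grep qemu-kvm /var/log/audit/audit.log | audit2why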

Comment 1 Yang Yang 2015-02-12 02:53:10 UTC
Created attachment 990726 [details]
/var/log/libvirt/qemu/vm3.log

Comment 2 Yang Yang 2015-02-12 03:05:01 UTC
After the 1st guest OS has booted, starting the 2nd VM causes the 1st guest OS to crash.

Comment 3 Yang Yang 2015-02-12 03:16:27 UTC
The real problem is that the svirt label of the image file is erased after the 2nd VM fails to start.

# getenforce
Enforcing

# virsh start vm3
Domain vm3 started

# ll /var/lib/libvirt/images/vm1.raw -Z
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c555,c775 /var/lib/libvirt/images/vm1.raw

# virsh start vm5
error: Failed to start domain vm5
error: resource busy Lockspace resource '19c1987e09bfe9af2a7f2756b19460f5750e147333b2e36854479915ce44a19c' is locked

# ll /var/lib/libvirt/images/vm1.raw -Z
-rw-r--r--. root root system_u:object_r:virt_image_t:s0 /var/lib/libvirt/images/vm1.raw
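(To catch the exact moment the label is reset while reproducing, a simple watch loop works; a diagnostic sketch, not part of the original report:)
# while true; do ls -lZ /var/lib/libvirt/images/vm1.raw; sleep 1; done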

Comment 5 Ján Tomko 2015-06-24 13:58:36 UTC
Libvirt needs to set the SELinux labels before starting QEMU.
The disks need to be locked after forking the domain's process, because the PID is needed to take the lock.

This could be solved by adding two more handshake steps before running QEMU, but that would not solve migration on shared storage such as NFS.

Comment 7 Han Han 2015-11-20 03:01:31 UTC
The cause of this bug is that the ownership is changed from qemu:qemu to root:root by libvirtd after the second guest fails to start due to the lock. It is an ownership change, not an SELinux label change.
If a guest is created by virt-manager, its disk image mode bits are 600 and its ownership is root:root.
When the first guest starts, libvirtd changes the disk image ownership to qemu:qemu to make it accessible to qemu-kvm. When the second guest fails to start, libvirtd changes the disk image ownership back to root:root; qemu-kvm can no longer access it, and the first guest reports disk 'hd0,msdos1' not found.
If we change the mode bits to 606, the problem disappears, but that is not secure.
We could instead setfacl the image to grant the qemu user read/write access, but that is a little complex and not supported on NFS.
So I suggest that libvirtd should not change the ownership when a start fails due to the lock.
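(For reference, the ACL workaround mentioned above would look like the following; the image path and the qemu user follow the report, and as noted it does not help on NFS:)
# setfacl -m u:qemu:rw /var/lib/libvirt/images/vm1.raw
# getfacl /var/lib/libvirt/images/vm1.raw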

Comment 8 Ján Tomko 2016-02-11 09:28:45 UTC
Virtlockd's purpose is to protect the disk content from simultaneous writes by different VMs.
https://www.redhat.com/archives/libvir-list/2016-January/msg01104.html

Not changing the disk image's ownership and labels is the security driver's responsibility.
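(For completeness: when two domains are genuinely meant to use one disk, libvirt's supported route is to mark the disk shareable in both domain definitions; a minimal sketch based on the disk XML from the report. With <shareable/>, libvirt labels the image for shared access instead of giving one domain an exclusive svirt label:)
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/vm1.raw'/>
  <target dev='hda' bus='ide'/>
  <shareable/>
</disk>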

*** This bug has been marked as a duplicate of bug 547546 ***

