Bug 590975 - Fail to restore the saved domain because of avc denial.
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assigned To: Laine Stump
QA Contact: Virtualization Bugs
Reported: 2010-05-11 02:24 EDT by Johnny Liu
Modified: 2010-11-11 09:50 EST
Fixed In Version: libvirt-0.8.1-11.el6
Doc Type: Bug Fix
Last Closed: 2010-11-11 09:50:30 EST

Attachments: None
Description Johnny Liu 2010-05-11 02:24:04 EDT
Description of problem:
When SELinux is enforcing, restoring a saved domain triggers an AVC denial, and the restore fails.

Version-Release number of selected component (if applicable):
libvirt-0.8.1-2.el6.x86_64
kernel-2.6.32-24.el6.x86_64
qemu-kvm-0.12.1.2-2.51.el6.x86_64
# rpm -qa|grep selinux
libselinux-2.0.90-3.el6.x86_64
libselinux-python-2.0.90-3.el6.x86_64
selinux-policy-3.7.19-12.el6.noarch
libselinux-utils-2.0.90-3.el6.x86_64
selinux-policy-targeted-3.7.19-12.el6.noarch


How reproducible:
Always

Steps to Reproduce:
1. Make sure selinux is enforcing
# getenforce 
Enforcing
2. Save a running domain
# virsh save demo /tmp/demo.save
Domain demo saved to /tmp/demo.save
# ll -Z /tmp/demo.save
-rw-------. root root system_u:object_r:svirt_image_t:s0:c165,c811 /tmp/demo.save

3. Restore the saved domain
# virsh restore /tmp/demo.save 
Domain restored from /tmp/demo.save
# ps -efZ |grep kvm
system_u:system_r:svirt_t:s0:c207,c818 qemu 1922   1 16 01:06 ?        00:00:33 /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name demo -uuid 7f6b41f0-2dea-51ae-9162-0b757ad010e6 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/demo.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -boot c -drive file=/var/lib/libvirt/images/demo.img,if=none,id=drive-ide0-0-0,boot=on -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=25,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0d:a5:09,bus=pci.0,addr=0x4 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -incoming exec:cat -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
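
To catch the denial live, a second terminal can follow the audit log while step 3 runs (a minimal sketch; it assumes auditd is running and writing to the default /var/log/audit/audit.log):

# tail -f /var/log/audit/audit.log | grep --line-buffered 'avc:  denied'

Alternatively, ausearch can pull recent AVC events for the qemu-kvm process after the fact:

# ausearch -m avc -ts recent -c qemu-kvm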

Actual results:
After the virsh restore command, the domain restarts from scratch instead of resuming at the saved point; the restore fails.

In the audit log, the following AVC denial is seen:
type=AVC msg=audit(1273554370.058:70861): avc:  denied  { read } for  pid=1922 comm="qemu-kvm" path="/tmp/demo.save" dev=sda6 ino=983110 scontext=system_u:system_r:svirt_t:s0:c207,c818 tcontext=system_u:object_r:svirt_image_t:s0:c165,c811 tclass=file
type=SYSCALL msg=audit(1273554370.058:70861): arch=c000003e syscall=59 success=yes exit=0 a0=7f727046dd20 a1=7f7270470730 a2=7f727046dc30 a3=7f7284d25ef0 items=0 ppid=1 pid=1922 auid=500 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=1 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c207,c818 key=(null)
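
For readers not fluent in raw AVC records, audit2why (from the policycoreutils-python package) can translate a denial into the policy decision behind it; a sketch, assuming the event is still in the audit log:

# ausearch -m avc -ts today -c qemu-kvm | audit2why

The essential detail is already visible above: the process context (scontext) carries the MCS category pair c207,c818, while the save file's context (tcontext) carries c165,c811. sVirt isolates guests by category pair, so a qemu process may only read svirt_image_t files labelled with its own pair, and the mismatched save file is unreadable.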

Expected results:
After the virsh restore command, the domain should resume directly at the saved point; the restore should succeed.

Additional info:
Comment 2 RHEL Product and Program Management 2010-05-11 04:14:59 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.
Comment 3 Daniel Berrange 2010-05-11 07:00:00 EDT
KVM is running:

scontext=system_u:system_r:svirt_t:s0:c207,c818


While the disk is labelled:

tcontext=system_u:object_r:svirt_image_t:s0:c165,c811 


So for some reason libvirt is failing to set the correct label at restore time.
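
The mismatch is easy to confirm on a live reproduction with the two commands already used in this report, compared side by side (output abridged from the report's own example):

# ps -eZ | grep qemu-kvm
system_u:system_r:svirt_t:s0:c207,c818 ... qemu-kvm ...
# ls -Z /tmp/demo.save
-rw-------. root root system_u:object_r:svirt_image_t:s0:c165,c811 /tmp/demo.save

When restore works correctly, either the file's category pair matches the process's, or the file carries a label (such as virt_content_t, see comment 9) that every svirt_t domain may read.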
Comment 4 Jianjiao Sun 2010-05-27 22:16:26 EDT
On libvirt-0.8.1-7.el6 with a RHEL 5.4 guest, 'save' and 'restore' both succeeded!
My environment:
[root@dhcp-65-73 /]# rpm -qa |grep libvirt
libvirt-client-0.8.1-7.el6.x86_64
libvirt-python-0.8.1-7.el6.x86_64
libvirt-0.8.1-7.el6.x86_64
[root@dhcp-65-73 /]# rpm -qa |grep qemu
qemu-img-0.12.1.2-2.68.el6.x86_64
gpxe-roms-qemu-0.9.7-6.3.el6.noarch
qemu-kvm-0.12.1.2-2.68.el6.x86_64
[root@dhcp-65-73 /]# rpm -q kernel
kernel-2.6.32-25.el6.x86_64
kernel-2.6.32-28.el6.x86_64
kernel-2.6.32-30.el6.x86_64

My procedure:
1. Start a VM

# virsh start rhel5.4_64

2. In guest, issue some command, such as: ls, or ifconfig

3. Save the running vm, and check the context of the save file.

# virsh save rhel5.4_64 /tmp/rhel5.4_64.save

4. Restore the vm

# virsh restore /tmp/rhel5.4_64.save

Then,"vm is restored successfully, and the output of the step 2 still is seen."
Comment 5 Jianjiao Sun 2010-05-27 23:15:50 EDT
(In reply to comment #4)
> On libvirt-0.8.1-7.el6 with a RHEL 5.4 guest, 'save' and 'restore' both
> succeeded!
> [...]

Sorry, my description above was not clear.
If the save directory is my $HOME, the 'save' step fails.
If the save directory is '/tmp', the 'restore' step fails: the guest boots normally from scratch instead of resuming from the point at which I saved it.
Comment 6 Laine Stump 2010-06-25 20:01:52 EDT
Although the example showed saving/restoring in /tmp, the problem existed for any directory.

Patches verified to fix the problem have been submitted upstream. Waiting for ACK:

https://www.redhat.com/archives/libvir-list/2010-June/msg00661.html
Comment 8 Dave Allan 2010-06-28 23:03:06 EDT
libvirt-0.8.1-11.el6 has been built in RHEL-6-candidate with the fix.

Dave
Comment 9 Johnny Liu 2010-06-29 08:04:37 EDT
Verified this defect with libvirt-0.8.1-11.el6, and PASSED.

# virsh save win2008-virtio /tmp/my1.save
Domain win2008-virtio saved to /tmp/my1.save

# virsh restore /tmp/my1.save
Domain restored from /tmp/my1.save

Guest is running happily.

# ll -Z /tmp/my1.save
-rw-------. root root system_u:object_r:virt_content_t:s0 /tmp/my1.save
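
The changed label is the visible part of the fix: the save file now gets the static virt_content_t type instead of a per-domain svirt_image_t category pair. That sVirt policy lets any confined guest read virt_content_t files can be checked with sesearch (a sketch; it assumes the setools-console package is installed):

# sesearch --allow -s svirt_t -t virt_content_t -c file

The output should include an allow rule granting at least read on file objects, which is why a restore started under any MCS category pair can now read the save file.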
Comment 10 xhu 2010-09-07 03:18:32 EDT
Verified this bug with RHEL6 RC build and it passed:
libvirt-0.8.1-27.el6.x86_64
qemu-kvm-0.12.1.2-2.113.el6.x86_64
kernel-2.6.32-70.el6.x86_64
Comment 11 releng-rhel@redhat.com 2010-11-11 09:50:30 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
