Bug 492728
| Summary: | F10: kvm-74-10: after upgrade no VM's will start | | |
| --- | --- | --- | --- |
| Product: | [Fedora] Fedora | Reporter: | Gerry Reno <greno> |
| Component: | libvirt | Assignee: | Daniel Veillard <veillard> |
| Status: | CLOSED NOTABUG | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | urgent | Docs Contact: | |
| Priority: | low | | |
| Version: | 10 | CC: | berrange, clalance, crobinso, ehabkost, gcosta, markmc, quintela, veillard, virt-maint |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2009-03-30 09:22:53 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Created attachment 337130 [details]
screenshot-2 of sample VM having boot issues after upgrade
The VMs are all vmdk VMs that have been running for over a year without problems under the old kvm-65. In another of the VMs, it only gets as far as the point where the filesystem /boot is referenced, and then it is completely locked up right there. The VMs are vmdk images that were originally created using VMware Server tools. They are growable files, NOT broken into 2 GB segments.

Update: I just rechecked, and the first VM (shown in the attachment screenshots) is a 'raw' image; I ran 'qemu-img info ...' to verify. The other VM is a growable vmdk.

Created attachment 337135 [details]
screenshot-3 of a second VM having boot issues after upgrade
Screenshot shows problem with 'vmdk' VM startup after kvm upgrade to 74-10.
I think the problem is related to SELinux, which we run on all our systems. Here is one of the alerts produced when you start the VMs:

=======================================================================

Summary:

SELinux is preventing qemu-kvm (qemu_t) "write" to ./MX_1-0.vmdk (var_t).

Detailed Description:

SELinux denied access requested by qemu-kvm. It is not expected that this access is required by qemu-kvm and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.

Allowing Access:

Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for ./MX_1-0.vmdk:

    restorecon -v './MX_1-0.vmdk'

If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.
Additional Information:

    Source Context        system_u:system_r:qemu_t:s0
    Target Context        system_u:object_r:var_t:s0
    Target Objects        ./MX_1-0.vmdk [ file ]
    Source                qemu-kvm
    Source Path           /usr/bin/qemu-kvm
    Port                  <Unknown>
    Host                  grp-01-10-01
    Source RPM Packages   kvm-74-10.fc10
    Target RPM Packages
    Policy RPM            selinux-policy-3.5.13-49.fc10
    Selinux Enabled       True
    Policy Type           targeted
    MLS Enabled           True
    Enforcing Mode        Enforcing
    Plugin Name           catchall_file
    Host Name             grp-01-10-01
    Platform              Linux grp-01-10-01 2.6.27.19-170.2.35.fc10.i686.PAE #1 SMP Mon Feb 23 13:09:26 EST 2009 i686 athlon
    Alert Count           6
    First Seen            Thu 08 May 2008 12:20:56 PM EDT
    Last Seen             Sat 28 Mar 2009 09:48:57 PM EDT
    Local ID              e1adef63-a2c9-4e8e-a834-14cc9df77259

Raw Audit Messages:

    node=grp-01-10-01 type=AVC msg=audit(1238291337.87:75): avc: denied { write } for pid=7898 comm="qemu-kvm" name="MX_1-0.vmdk" dev=dm-0 ino=10232324 scontext=system_u:system_r:qemu_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=file
    node=grp-01-10-01 type=SYSCALL msg=audit(1238291337.87:75): arch=40000003 syscall=5 success=no exit=-13 a0=bfec5c80 a1=8002 a2=0 a3=8002 items=0 ppid=2930 pid=7898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/bin/qemu-kvm" subj=system_u:system_r:qemu_t:s0 key=(null)

=======================================================================

I tried a couple of things to restore the context:

    restorecon -v MX_1-0.vmdk

and

    /sbin/fixfiles restore MX_1-0.vmdk

Neither of these seemed to make any difference; I still got the same error. Any suggestions as to how to give kvm access to the images?

I also just tried '/sbin/fixfiles relabel ...', and that didn't seem to work either.

It would seem to me that when kvm is updated, if there has been some change in SELinux contexts, then the RPM should run a postinstall script that changes the contexts for all the VMs, or provide a script for the user to run on their VM directories that does it.
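A useful first step here is to compare the label that is actually on the file with the label the active policy expects for that path. If the policy's default for the path is var_t, then restorecon and fixfiles will only ever reset the file back to var_t, which would explain why they did not help. A minimal sketch (the /var/vmimages path is a hypothetical placeholder; substitute your actual image location):

```shell
# Hypothetical image path used for illustration only.
IMG=/var/vmimages/MX_1-0.vmdk

# Show the SELinux context currently on the file.
ls -Z "$IMG"

# Show the context the loaded policy considers the default for this path.
# If this prints a var_t context, restorecon can only restore var_t here,
# so a relabel will never produce virt_image_t for this location.
matchpathcon "$IMG"
```

Both `ls -Z` and `matchpathcon` are read-only, so they are safe to run against a live image.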
I was able to get the VMs running with this SELinux command:

    chcon -t virt_image_t PATH_TO_IMAGE_FILE

But this setting is temporary. Shouldn't this type of thing be in the Fedora selinux-policy package? I think what happened is that a long time ago we set up the VMs and probably used chcon to put the type on them, and with the new upgrade those attributes were all lost.

Thanks for the report.

(In reply to comment #9)
> I was able to get the VM's running with this selinux command:
>
> chcon -t virt_image_t PATH_TO_IMAGE_FILE

Yep, all image files need to be labelled with virt_image_t.

> But, this setting is temporary. Shouldn't this type of thing be in the fedora
> selinux-policy package?

The label is automatically applied if the file is in /var/lib/libvirt/images. What was the full path for the image files?

> I think what happened is that a long time ago we setup the VM's and probably
> used chcon to put the type on them and that with the new upgrade those
> attributes were all lost.

This would be a bug if it happened during the upgrade, but it was probably running fixfiles that changed the label. Closing as NOTABUG for now - please do re-open if you can provide any more details of the re-labelling happening during the upgrade.
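The reason the chcon fix is temporary is that chcon changes only the label on the inode, not the policy's file-context database, so any relabel (restorecon, fixfiles, an autorelabel boot) reverts it. A hedged sketch of two durable alternatives, assuming a hypothetical /var/vmimages directory (substitute your real image path), run as root:

```shell
# Option 1: keep the images where they are, but record a persistent
# file-context rule so future relabels re-apply virt_image_t instead
# of reverting the files to var_t.
semanage fcontext -a -t virt_image_t '/var/vmimages(/.*)?'
restorecon -Rv /var/vmimages

# Option 2: move the images into the directory the shipped policy
# already labels for VM images, then restore the default contexts.
mv /var/vmimages/*.vmdk /var/lib/libvirt/images/
restorecon -Rv /var/lib/libvirt/images
```

Option 2 matches the maintainer's point above: files under /var/lib/libvirt/images get the right label automatically, with no local policy customization to maintain.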
Created attachment 337129 [details]
screenshot-1 of sample VM having boot issues after upgrade.

Description of problem:
Upgraded the system from F9 to F10, then did a 'yum update'. Everything completed normally and then we rebooted. Upon bringing up virt-manager I could see that none of our VMs were able to start. Rebooted the system and tried again, with the same result.

Version-Release number of selected component (if applicable):
kvm-74-10

How reproducible:
always

Steps to Reproduce:
1. Have an existing F9 system with kvm-65 and some vmdk VMs.
2. Upgrade the F9 system to F10.
3. Launch virt-manager and see that the VMs are unable to start.

Actual results:

Expected results:

Additional info: