| Summary: | Libvirt changes volume permissions when migrating to file | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Jaroslav Henner <jhenner> |
| Component: | libvirt | Assignee: | Michal Privoznik <mprivozn> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.1 | CC: | abaron, bazulay, cpelland, dallan, danken, dnaori, dyuan, fsimonce, gren, iheim, mzhan, nzhang, rwu, vbian, veillard, ykaul |
| Target Milestone: | beta | Keywords: | Regression, TestBlocker |
| Target Release: | 6.2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-0.9.3-2.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-12-06 11:15:46 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
I'm not sure this is a vdsm issue, and without vdsm or rhevm logs I cannot really tell.

Created attachment 509891 [details]
logs

(In reply to comment #2)
> I'm not sure this is a vdsm issue, and without vdsm or rhevm logs I cannot
> really tell.

For some reason, the attaching failed. Sorry. I attached it again.

Created attachment 511152 [details]
libvirt.log

Oh, sorry, I was not precise enough. I meant the libvirtd logs. You can obtain them, for example, by setting:

log_level = 1
log_outputs="1:file:/var/log/libvirtd_debug.log"

Created attachment 511173 [details]
libvirtd.log - diskless machine

Created attachment 511176 [details]
libvirtd.log - diskfull machine
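The debug-logging settings suggested in the comment above can be applied like this. This is a sketch, not part of the original report: the config path `/etc/libvirt/libvirtd.conf` and the `service` restart command are the stock RHEL 6 locations, assumed here.

```shell
# Enable maximum-verbosity libvirtd logging (RHEL 6 paths assumed).
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_level = 1
log_outputs="1:file:/var/log/libvirtd_debug.log"
EOF

# Restart the daemon so the new settings take effect.
service libvirtd restart

# The debug output then accumulates here:
tail -f /var/log/libvirtd_debug.log
```

Remember to revert `log_level = 1` afterwards; it is extremely verbose and the log file grows quickly.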
Changing to high severity, since the diskfull machine does not work correctly either. This might be a dup of the reopened bug 707257, and it is blocking our automatic tests.

The problem is that we explicitly chown after open for network filesystems. I've sent a patch:

https://www.redhat.com/archives/libvir-list/2011-July/msg00357.html

So, when dynamic_ownership is turned off, no chown should be done regardless of the filesystem.

Pushed upstream:
commit 724819a10a92a8709f9276521a0cf27016b5c7b2
Author: Michal Privoznik <mprivozn>
Date: Thu Jul 7 17:33:15 2011 +0200
qemu: Don't chown files on NFS share if dynamic_ownership is off
When dynamic ownership is disabled we don't want to chown any files,
not just local.
v0.9.3-37-g724819a
It has been verified on libvirt-0.9.3-2.el6:

1. Set up an NFS server exporting /var/lib/libvirt/images *(rw,no_root_squash)
2. Place a guest image file into the NFS folder with ownership root:root
3. With dynamic_ownership = 1, privileged libvirt could start the guest; when its value is set to 0, libvirt failed to start the guest with "permission denied".

I verified this bug in my own way, but I am not sure whether that covers your case. Could you please help verify it in your testing environment?

VERIFIED with our automatic tests.

Tested with libvirt-0.9.3-8.el6.x86_64.

Steps to Verify:
1. Have a diskless VM on NFS in a 3.0 RHEVM datacenter
2. Suspend the VM
3. Stop the VM

The VM is stopped, so keeping the bug status as VERIFIED.

Tested again with libvirt-0.9.4-0rc1.el6.x86_64.

Steps to Verify:
1. Have a diskless VM on NFS in a 3.0 RHEVM datacenter
2. Suspend the VM
3. Stop the VM

The VM is stopped, so keeping the bug status as VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html
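The NFS verification steps above can be sketched on the command line as follows. This is an illustrative sketch, not the tester's exact procedure: the guest name `rhel6-guest`, the image filename, and running the export and host steps on the same machine are assumptions; only the export line, the image ownership, and the dynamic_ownership toggle come from the report.

```shell
# On the NFS server: export the images directory (step 1 above).
echo '/var/lib/libvirt/images *(rw,no_root_squash)' >> /etc/exports
exportfs -ra

# Place the guest image on the share, owned by root:root (step 2).
chown root:root /var/lib/libvirt/images/guest.img

# Toggle dynamic ownership in qemu.conf (step 3) and restart libvirtd.
sed -i 's/^#\?dynamic_ownership.*/dynamic_ownership = 0/' /etc/libvirt/qemu.conf
service libvirtd restart

# With dynamic_ownership = 0 the fixed libvirt no longer chowns the image,
# so starting the guest fails with "permission denied", as reported above.
virsh start rhel6-guest
```

With `dynamic_ownership = 1` instead, libvirt chowns the image to the qemu user at start, and the guest boots.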
Description of problem:

Version-Release number of selected component (if applicable):

`--> ssh root.57.213 "yum info vdsm"
Loaded plugins: rhnplugin
Installed Packages
Name    : vdsm
Arch    : x86_64
Version : 4.9
Release : 75.el6
...

`--> ssh root.57.206 "yum info vdsm"
Loaded plugins: product-id, rhnplugin, subscription-manager
Updating Red Hat repositories.
Installed Packages
Name    : vdsm
Arch    : x86_64
Version : 4.9
Release : 75.el6
...

How reproducible:
Very often, maybe always. If it cannot be reproduced as stated below, try another operation with the diskless VM.

Steps to Reproduce:
1. Have a diskless VM on NFS in a 3.0 datacenter
2. Suspend the VM
3. Stop the VM

Actual results:
* The VM cannot be stopped.
* It seems that, as a result of this bug, some nodes with incorrect permissions on the NFS domain appeared when the automatic tests continued.

Expected results:
VM is stopped.

Additional info:
* This breaks the automatic tests.
* I had a problem rebooting one of my hosts, probably due to a not properly terminated iSCSI connection.
* rhevm.log shows the time when the relevant action happened:

2011-06-24 16:43:28,628 INFO [org.nogah.bll.ShutdownVmCommand] (pool-13-thread-2) Running command: ShutdownVmCommand internal: false. Entities affected : ID: 8bfc2913-91a6-4246-8874-62f386681b81 Type: VM
2011-06-24 16:43:28,673 INFO [org.nogah.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-13-thread-2) START, DeleteImageGroupVDSCommand(storagePoolId = 66bf561c-9e5f-11e0-8393-001a4a013916, ignoreFailoverLimit = false, compatabilityVersion = 3.0, storageDomainId = 46b2bf31-ba83-4a22-aace-e73d5886a38e, imageGroupId = 2d866e7b-de43-4cb9-98ef-8d63b100da16, postZeros = false, forceDelete = false), log id: 2ff9f824
2011-06-24 16:43:28,707 ERROR [org.nogah.vdsbroker.vdsbroker.BrokerCommandBase] (pool-13-thread-2) Failed in DeleteImageGroupVDS method
2011-06-24 16:43:28,708 ERROR [org.nogah.vdsbroker.vdsbroker.BrokerCommandBase] (pool-13-thread-2) Error code VolumeDoesNotExist and error message IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = Volume does not exist: ('18110ed1-49e2-46b6-b551-fe9ace6beb6a',)
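For reference, a rough command-line approximation of the reproduction steps, bypassing RHEVM. This is an assumption-heavy sketch: the guest name `diskless-vm` is hypothetical, direct virsh access is assumed instead of the RHEVM flow the reporter used, and "suspend" is approximated by libvirt's managed save ("migrating to file", as in the bug summary).

```shell
# Step 2: suspend the VM by saving its state to a file on the NFS share.
virsh managedsave diskless-vm

# Inspect ownership of the save file; the reported bug is that libvirt
# changed these permissions even though dynamic_ownership was off.
ls -l /var/lib/libvirt/qemu/save/

# Step 3: stopping the VM, which failed before the fix.
virsh destroy diskless-vm
```

With libvirt-0.9.3-2.el6 or later and `dynamic_ownership = 0`, the save file's ownership should be left untouched and the destroy should succeed.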