Description: Jaroslav Henner, 2011-06-24 15:27:37 UTC
Description of problem:
Version-Release number of selected component (if applicable):
`--> ssh root.57.213 "yum info vdsm"
Loaded plugins: rhnplugin
Installed Packages
Name : vdsm
Arch : x86_64
Version : 4.9
Release : 75.el6
...
`--> ssh root.57.206 "yum info vdsm"
Loaded plugins: product-id, rhnplugin, subscription-manager
Updating Red Hat repositories.
Installed Packages
Name : vdsm
Arch : x86_64
Version : 4.9
Release : 75.el6
...
How reproducible:
Very often, maybe always. If it cannot be reproduced as stated below, try another operation with the diskless VM.
Steps to Reproduce:
1. Have a diskless VM on NFS in a 3.0 datacenter
2. Suspend the VM
3. Stop the VM
Actual results:
* The VM cannot be stopped.
* It seems that, as a result of this bug, when the automatic test continued, some nodes with incorrect permissions appeared on the NFS domain.
Expected results:
VM is stopped.
Additional info:
* This breaks the automated tests.
* I had a problem rebooting one of my hosts, probably due to an improperly terminated iSCSI connection.
* rhevm.log shows the time when the relevant action happened:
2011-06-24 16:43:28,628 INFO [org.nogah.bll.ShutdownVmCommand] (pool-13-thread-2) Running command: ShutdownVmCommand internal: false. Entities affected : ID: 8bfc2913-91a6-4246-8874-62f386681b81 Type: VM
2011-06-24 16:43:28,673 INFO [org.nogah.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-13-thread-2) START, DeleteImageGroupVDSCommand(storagePoolId = 66bf561c-9e5f-11e0-8393-001a4a013916, ignoreFailoverLimit = false, compatabilityVersion = 3.0, storageDomainId = 46b2bf31-ba83-4a22-aace-e73d5886a38e, imageGroupId = 2d866e7b-de43-4cb9-98ef-8d63b100da16, postZeros = false, forceDelete = false), log id: 2ff9f824
2011-06-24 16:43:28,707 ERROR [org.nogah.vdsbroker.vdsbroker.BrokerCommandBase] (pool-13-thread-2) Failed in DeleteImageGroupVDS method
2011-06-24 16:43:28,708 ERROR [org.nogah.vdsbroker.vdsbroker.BrokerCommandBase] (pool-13-thread-2) Error code VolumeDoesNotExist and error message IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = Volume does not exist: ('18110ed1-49e2-46b6-b551-fe9ace6beb6a',)
(In reply to comment #2)
> I'm not sure this is a vdsm issue, and without vdsm or rhevm logs I cannot
> really tell.
For some reason, the attachment failed. Sorry, I have attached it again.
Comment 16: Michal Privoznik, 2011-07-04 10:01:10 UTC
Oh, sorry, I was not precise enough. I meant the libvirtd logs. You can obtain them, for example, by setting:
log_level = 1
log_outputs="1:file:/var/log/libvirtd_debug.log"
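These settings go in /etc/libvirt/libvirtd.conf and take effect after the daemon is restarted. A minimal sketch of the whole sequence on a RHEL 6 host (the log path is the one from this comment):

# enable debug logging and restart the daemon
echo 'log_level = 1' >> /etc/libvirt/libvirtd.conf
echo 'log_outputs="1:file:/var/log/libvirtd_debug.log"' >> /etc/libvirt/libvirtd.conf
service libvirtd restart

# watch the debug log while reproducing the problem
tail -f /var/log/libvirtd_debug.log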
Comment 22: Michal Privoznik, 2011-07-08 08:10:20 UTC
Pushed upstream:
commit 724819a10a92a8709f9276521a0cf27016b5c7b2
Author: Michal Privoznik <mprivozn>
Date: Thu Jul 7 17:33:15 2011 +0200
qemu: Don't chown files on NFS share if dynamic_ownership is off
When dynamic ownership is disabled we don't want to chown any files,
not just local.
v0.9.3-37-g724819a
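The dynamic_ownership setting the commit message refers to lives in /etc/libvirt/qemu.conf on the host. A hedged sketch of toggling it off for testing (assuming the stock RHEL 6 file layout; the sed pattern matches both the commented-out default and an existing assignment):

# turn dynamic ownership off so libvirt stops chowning image files
sed -i 's/^#\?dynamic_ownership *=.*/dynamic_ownership = 0/' /etc/libvirt/qemu.conf
service libvirtd restart

With the fix applied, a value of 0 means libvirt leaves file ownership alone everywhere, on NFS shares as well as on local storage.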
It has been verified on libvirt-0.9.3-2.el6 (a condensed shell version of this setup follows the list):
1. Set up an NFS server exporting:
/var/lib/libvirt/images *(rw,no_root_squash)
2. Place a guest image file onto the NFS folder with ownership root:root.
3. With dynamic_ownership = 1, privileged libvirt could start the guest; when the value is set to 0, libvirt failed to start the guest with "permission denied".
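A condensed shell version of the setup above (nfs-server.example.com and guest.img are illustrative placeholders, not names from the bug):

# on the NFS server: export the images directory
echo '/var/lib/libvirt/images *(rw,no_root_squash)' >> /etc/exports
exportfs -ra

# on the libvirt host: mount the export and make the image root-owned
mount -t nfs nfs-server.example.com:/var/lib/libvirt/images /var/lib/libvirt/images
chown root:root /var/lib/libvirt/images/guest.img

# with dynamic_ownership = 1 in /etc/libvirt/qemu.conf the guest starts;
# with dynamic_ownership = 0 qemu cannot open the root-owned image
virsh start guest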
Tested with libvirt-0.9.3-8.el6.x86_64:
Steps to Verify:
1. Have a diskless VM on NFS in a 3.0 RHEVM datacenter
2. Suspend the VM
3. Stop the VM
VM is stopped.
So keep the bug status as VERIFIED
Tested again with libvirt-0.9.4-0rc1.el6.x86_64:
Steps to Verify:
1. Have a diskless VM on NFS in a 3.0 RHEVM datacenter
2. Suspend the VM
3. Stop the VM
VM is stopped.
So keep the bug status as VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2011-1513.html