Bug 716478 - Libvirt changes volume permissions when migrating to file
Summary: Libvirt changes volume permissions when migrating to file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: beta
Target Release: 6.2
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-24 15:27 UTC by Jaroslav Henner
Modified: 2011-12-06 11:15 UTC
CC: 16 users

Fixed In Version: libvirt-0.9.3-2.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-06 11:15:46 UTC
Target Upstream Version:
Embargoed:


Attachments
logs (223.34 KB, application/x-bzip)
2011-06-25 11:28 UTC, Jaroslav Henner
libvirt.log (3.21 KB, application/x-bzip)
2011-07-04 09:08 UTC, Jaroslav Henner
libvirtd.log - diskless machine (25.48 KB, application/x-bzip)
2011-07-04 11:36 UTC, Jaroslav Henner
libvirtd.log - diskfull machine (24.82 KB, application/x-bzip)
2011-07-04 11:44 UTC, Jaroslav Henner


Links
Red Hat Product Errata RHBA-2011:1513 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update. Last updated 2011-12-06 01:23:30 UTC.

Description Jaroslav Henner 2011-06-24 15:27:37 UTC
Description of problem:


Version-Release number of selected component (if applicable):
`--> ssh root.57.213 "yum info vdsm"                                     
Loaded plugins: rhnplugin
Installed Packages
Name        : vdsm
Arch        : x86_64
Version     : 4.9
Release     : 75.el6
...
`--> 

ssh root.57.206 "yum info vdsm" 
Loaded plugins: product-id, rhnplugin, subscription-manager
Updating Red Hat repositories.
Installed Packages
Name        : vdsm
Arch        : x86_64
Version     : 4.9
Release     : 75.el6
...


How reproducible:
Very often, maybe always. If it cannot be reproduced as stated below, try another operation with the diskless VM.

Steps to Reproduce:
1. Have a diskless VM on nfs in 3.0 datacenter
2. Suspend the VM
3. Stop the VM
  
Actual results:
 * The VM cannot be stopped.
 * It seems that, as a result of this bug, some nodes with incorrect permissions appeared on the NFS domain when the automatic test continued.

Expected results:
VM is stopped.

Additional info:
 * This breaks our automatic tests.
 * I had a problem rebooting one of my hosts, probably due to an improperly terminated iSCSI connection.

 * rhevm.log shows you the time when the relevant action happened:

2011-06-24 16:43:28,628 INFO  [org.nogah.bll.ShutdownVmCommand] (pool-13-thread-2) Running command: ShutdownVmCommand internal: false. Entities affected :  ID: 8bfc2913-91a6-4246-8874-62f386681b81 Type: VM
2011-06-24 16:43:28,673 INFO  [org.nogah.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (pool-13-thread-2) START, DeleteImageGroupVDSCommand(storagePoolId = 66bf561c-9e5f-11e0-8393-001a4a013916, ignoreFailoverLimit = false, compatabilityVersion = 3.0, storageDomainId = 46b2bf31-ba83-4a22-aace-e73d5886a38e, imageGroupId = 2d866e7b-de43-4cb9-98ef-8d63b100da16, postZeros = false, forceDelete = false), log id: 2ff9f824
2011-06-24 16:43:28,707 ERROR [org.nogah.vdsbroker.vdsbroker.BrokerCommandBase] (pool-13-thread-2) Failed in DeleteImageGroupVDS method
2011-06-24 16:43:28,708 ERROR [org.nogah.vdsbroker.vdsbroker.BrokerCommandBase] (pool-13-thread-2) Error code VolumeDoesNotExist and error message IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS, error = Volume does not exist: ('18110ed1-49e2-46b6-b551-fe9ace6beb6a',)

Comment 2 Dan Kenigsberg 2011-06-24 19:00:46 UTC
I'm not sure this is a vdsm issue, and without vdsm or rhevm logs I cannot really tell.

Comment 3 Jaroslav Henner 2011-06-25 11:28:36 UTC
Created attachment 509891 [details]
logs

Comment 4 Jaroslav Henner 2011-06-25 11:30:02 UTC
(In reply to comment #2)
> I'm not sure this is a vdsm issue, and without vdsm or rhevm logs I cannot
> really tell.

For some reason, the attachment failed. Sorry, I have attached it again.

Comment 15 Jaroslav Henner 2011-07-04 09:08:22 UTC
Created attachment 511152 [details]
libvirt.log

Comment 16 Michal Privoznik 2011-07-04 10:01:10 UTC
Oh, sorry, I was not precise enough. I meant the libvirtd logs. You can obtain them, for example, by setting:

log_level = 1
log_outputs="1:file:/var/log/libvirtd_debug.log"

Comment 17 Jaroslav Henner 2011-07-04 11:36:50 UTC
Created attachment 511173 [details]
libvirtd.log - diskless machine

Comment 18 Jaroslav Henner 2011-07-04 11:44:14 UTC
Created attachment 511176 [details]
libvirtd.log - diskfull machine

Comment 19 Jaroslav Henner 2011-07-04 11:45:21 UTC
Changing to high severity, since the diskfull machine doesn't work correctly either.

Comment 20 Dan Kenigsberg 2011-07-04 15:12:29 UTC
This might be a dup of the reopened bug 707257; it is blocking our automatic tests.

Comment 21 Michal Privoznik 2011-07-07 15:41:56 UTC
The problem is that we explicitly chown after open for network filesystems. I've sent a patch:

https://www.redhat.com/archives/libvir-list/2011-July/msg00357.html

So when dynamic_ownership is turned off, no chown should be done, regardless of the filesystem.

Comment 22 Michal Privoznik 2011-07-08 08:10:20 UTC
Pushed upstream:

commit 724819a10a92a8709f9276521a0cf27016b5c7b2
Author: Michal Privoznik <mprivozn>
Date:   Thu Jul 7 17:33:15 2011 +0200

    qemu: Don't chown files on NFS share if dynamic_ownership is off
    
    When dynamic ownership is disabled we don't want to chown any files,
    not just local.

v0.9.3-37-g724819a

Comment 24 Gunannan Ren 2011-07-12 09:34:31 UTC
This has been verified on libvirt-0.9.3-2.el6:

1. Set up an NFS server:
/var/lib/libvirt/images  *(rw,no_root_squash)

2. Place a guest image file in the NFS folder with ownership root:root.

3. With dynamic_ownership = 1, privileged libvirt could start the guest.
With the value set to 0, libvirt failed to start the guest with "permission denied".

Comment 25 Gunannan Ren 2011-07-13 09:11:00 UTC
I verified this bug in my own way, but I am not sure whether the fix covers your case. Could you please help verify it in your testing environment?

Comment 26 Jaroslav Henner 2011-07-14 06:54:09 UTC
VERIFIED with our automatic tests.

Comment 27 Vivian Bian 2011-07-29 07:45:33 UTC
tested with libvirt-0.9.3-8.el6.x86_64 

Steps to Verify:
1. Have a diskless VM on nfs in 3.0 RHEVM datacenter
2. Suspend the VM
3. Stop the VM

VM is stopped.

So keep the bug status as VERIFIED

Comment 28 Vivian Bian 2011-07-29 08:45:26 UTC
again tested with libvirt-0.9.4-0rc1.el6.x86_64
Steps to Verify:
1. Have a diskless VM on nfs in 3.0 RHEVM datacenter
2. Suspend the VM
3. Stop the VM

VM is stopped.

So keep the bug status as VERIFIED

Comment 29 errata-xmlrpc 2011-12-06 11:15:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html

