Bug 1435064 - Error prompt when trying to clone a UEFI guest again after first stopping and deleting the "nvram" pool
Summary: Error prompt when trying to clone a UEFI guest again after first stopping and deleting the "nvram" ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-manager
Version: 7.4
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Pavel Hrdina
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-23 03:17 UTC by zhoujunqin
Modified: 2017-08-01 21:04 UTC (History)
6 users

Fixed In Version: virt-manager-1.4.1-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 21:04:33 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID   Priority   Status   Summary   Last Updated
Red Hat Product Errata RHBA-2017:2072   normal   SHIPPED_LIVE   virt-manager bug fix and enhancement update   2017-08-01 18:36:34 UTC

Description zhoujunqin 2017-03-23 03:17:03 UTC
Description of problem:
An error is prompted when trying to clone a UEFI guest after first stopping and deleting the "nvram" pool:
"Error creating virtual machine clone 'rhel7.3-uefi-clone': Could not define storage pool: operation failed: Storage source conflict with pool: 'nvram'"

Version-Release number of selected component (if applicable):
virt-manager-1.4.1-1.el7.noarch
libvirt-3.1.0-2.el7.x86_64
qemu-kvm-rhev-2.8.0-6.el7.x86_64
libvirt-python-3.1.0-1.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Prepare a uefi guest on virt-manager with configuration:

# virsh dumpxml rhel7.3-uefi
  <domain type='kvm'>
  <name>rhel7.3-uefi</name>
...
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.4.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>

...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/rhel7.3-2.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
...

2. No pool "nvram" exists.
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 boot-scratch         active     yes       
 default              active     yes       
 Downloads            active     yes       
 root                 active     yes  

3. Launch virt-manager -> select the guest -> right-click -> Clone -> the "Clone virtual machine" dialog pops up -> click the "Clone" button.
New guest name: rhel7.3-uefi-clone

4. After the clone process finishes, check:
4.1 Check the new guest's XML configuration for nvram.
# virsh dumpxml rhel7.3-uefi-clone 
...
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.4.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>

Result: 
New nvram file generated while cloning: rhel7.3-uefi-clone_VARS.fd

4.2 Start new cloned guest: rhel7.3-uefi-clone
Result: 
Guest starts successfully.

4.3 Check that a new dir pool "nvram" has been created.
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 boot-scratch         active     yes       
 default              active     yes       
 Downloads            active     yes       
 nvram                active     yes   ->newly added      
 root                 active     yes  

# virsh pool-dumpxml nvram
<pool type='dir'>
  <name>nvram</name>
  <uuid>86ed9e85-f321-4586-8c59-d1581c8e2595</uuid>
  <capacity unit='bytes'>211243999232</capacity>
  <allocation unit='bytes'>87902429184</allocation>
  <available unit='bytes'>123341570048</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/qemu/nvram</path>
    <permissions>
      <mode>0755</mode>
      <owner>107</owner>
      <group>107</group>
      <label>system_u:object_r:qemu_var_run_t:s0</label>
    </permissions>
  </target>
</pool>

# virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------
 rhel7.3-uefi-clone_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd
 rhel7.3-uefi_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd

5. Delete new cloned guest: rhel7.3-uefi-clone, then refresh pool "nvram".
Result: 
Guest is deleted successfully, and the volume "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd" is deleted along with it.

6. Start original guest "rhel7.3-uefi" again.
Result: 
Guest "rhel7.3-uefi" successfully.

7. Stop the original guest "rhel7.3-uefi", delete the nvram file "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd" manually, then start it again.
 # virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------

Result: 
I. Guest starts successfully.
II. A new file "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd" is generated while the guest starts.

# virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------
 rhel7.3-uefi_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd

8. Stop original guest "rhel7.3-uefi", then stop&delete pool "nvram", clone again.

Result:
I. An error window pops up:

Error creating virtual machine clone 'rhel7.3-uefi-clone': Could not define storage pool: operation failed: Storage source conflict with pool: 'nvram'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/clone.py", line 859, in _async_clone
    self.clone_design.setup()
  File "/usr/share/virt-manager/virtinst/cloner.py", line 465, in setup
    self.setup_clone()
  File "/usr/share/virt-manager/virtinst/cloner.py", line 453, in setup_clone
    self._prepare_nvram()
  File "/usr/share/virt-manager/virtinst/cloner.py", line 373, in _prepare_nvram
    nvram.path = self.clone_nvram
  File "/usr/share/virt-manager/virtinst/devicedisk.py", line 508, in _set_path
    (vol_object, parent_pool) = diskbackend.manage_path(self.conn, newpath)
  File "/usr/share/virt-manager/virtinst/diskbackend.py", line 177, in manage_path
    pool = poolxml.install(build=False, create=True, autostart=True)
  File "/usr/share/virt-manager/virtinst/storage.py", line 533, in install
    raise RuntimeError(_("Could not define storage pool: %s") % str(e))
RuntimeError: Could not define storage pool: operation failed: Storage source conflict with pool: 'nvram'

II. Then I checked: pool "nvram" exists even though the virt-clone process hasn't started successfully.
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 boot-scratch         active     yes       
 default              active     yes       
 Downloads            active     yes       
 nvram                active     yes       
 root                 active     yes   

Actual results:
As described above: cloning fails with "Storage source conflict with pool: 'nvram'".

Expected results:
Cloning succeeds even when the "nvram" pool was stopped and deleted beforehand.

Additional info:
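For reference, the libvirt-level behavior behind this error can be illustrated with libvirt-python (a minimal sketch added for clarity, not part of the original report; the pool names, the pool XML, and the qemu:///system URI are assumptions): defining a second directory pool whose target path is already used by an existing pool fails with the same "Storage source conflict with pool" message that the clone code hits.

# Minimal libvirt-python sketch (illustrative only): trigger libvirt's
# "Storage source conflict with pool" error by defining two dir pools that
# share the same target path. Assumes no pool currently manages that
# directory (e.g. after "virsh pool-destroy nvram; virsh pool-undefine nvram").
import libvirt

POOL_XML = """
<pool type='dir'>
  <name>%s</name>
  <target>
    <path>/var/lib/libvirt/qemu/nvram</path>
  </target>
</pool>
"""

conn = libvirt.open('qemu:///system')

# The first pool plays the role of the 'nvram' pool that virt-manager creates
# while handling the original guest's nvram path.
first = conn.storagePoolDefineXML(POOL_XML % 'nvram', 0)
first.create(0)

try:
    # A second pool with a different name but the same target directory is
    # effectively what a stale pool cache makes the clone code define.
    conn.storagePoolDefineXML(POOL_XML % 'nvram-dup', 0)
except libvirt.libvirtError as e:
    print(e)   # operation failed: Storage source conflict with pool: 'nvram'

In the bug scenario the second definition comes from virt-manager itself, because its cached pool list does not yet contain the pool it just created (see comment 2 below).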

Comment 2 Cole Robinson 2017-03-24 20:51:43 UTC
I hit an error similar to this with the UEFI rename patches, but it was fixed before the release. Basically, trying to do a rename when the nvram pool didn't exist would fail the first time, but every subsequent attempt would succeed. See the commit message below for an explanation; maybe this error is something similar:

commit f61e586b7703d8cade5299d6c571e7a31b6dfab7
Author: Cole Robinson <crobinso>
Date:   Wed Mar 8 14:20:41 2017 -0500

    domain: rename: Fix when nvram pool is newly created
    
    We don't have any way at the moment to synchronously update cached
    object lists. So if old_nvram will create a pool for the nvram dir
    (/var/lib/libvirt/qemu/nvram), new_nvram won't see that new object
    in our cache, will attempt to create it itself, and raise an error.
    Next attempts succeed though.
    
    We can avoid this by not even setting new_nvram.path, that step was
    redundant anyways since we are setting a vol_install right afterwards.
    This way, new_nvram is getting a reference to the parent_pool object
    via the vol_install, so it doesn't even check the pool object cache.
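
To make the caching problem described in this commit message concrete, here is a rough, self-contained illustration; the class and function names (CachedConn, manage_path) are invented for this sketch and are not the actual virt-manager/virtinst APIs:

import os

class CachedConn:
    """Connection wrapper whose pool list is only refreshed asynchronously
    (invented for this illustration; not the real vmmConnection API)."""
    def __init__(self):
        self._libvirt_pools = {}   # target dir -> pool name, as libvirt sees it
        self._cached_pools = {}    # stale snapshot used for lookups

    def cached_pool_for_dir(self, dirname):
        return self._cached_pools.get(dirname)

    def define_pool(self, dirname, name):
        if dirname in self._libvirt_pools:
            # The condition libvirt reports as a storage source conflict.
            raise RuntimeError("operation failed: Storage source conflict "
                               "with pool: '%s'" % self._libvirt_pools[dirname])
        self._libvirt_pools[dirname] = name
        # BUG: the cache is not updated synchronously here; it is only
        # refreshed later from an async event, so _cached_pools stays stale.

def manage_path(conn, path):
    """If the cache says no pool manages the path's directory, create one
    (mirrors the decision diskbackend.manage_path makes in the traceback)."""
    dirname = os.path.dirname(path)
    if conn.cached_pool_for_dir(dirname) is None:
        conn.define_pool(dirname, os.path.basename(dirname))

conn = CachedConn()   # the user has deleted the 'nvram' pool
manage_path(conn, "/var/lib/libvirt/qemu/nvram/old_VARS.fd")      # creates pool 'nvram'
try:
    manage_path(conn, "/var/lib/libvirt/qemu/nvram/new_VARS.fd")  # stale cache -> tries again
except RuntimeError as e:
    print(e)

The second call fails exactly like the traceback in the description: the "no pool manages this directory yet" decision is made against the stale cache, while libvirt already knows about the pool.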

Comment 3 Pavel Hrdina 2017-05-23 07:29:55 UTC
Upstream commit:

commit 168651188674f35ce4afd8b3c0bac1a6be2317c0
Author: Pavel Hrdina <phrdina>
Date:   Fri May 19 14:26:49 2017 +0200

    virtManager.connection: introduce cb_add_new_pool
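
Judging from the commit title, the fix lets the lower-level code notify the connection wrapper as soon as it defines a new pool, so the cached object list is updated synchronously instead of waiting for an asynchronous refresh. A rough, self-contained sketch of that pattern (class and attribute names are invented; only cb_add_new_pool mirrors the name in the commit):

class Backend:
    """Stands in for the virtinst-side connection used during cloning."""
    def __init__(self):
        self.pools = {}              # target dir -> pool name (libvirt's view)
        self.cb_add_new_pool = None  # hook set by the UI-side connection wrapper

    def define_pool(self, dirname, name):
        if dirname in self.pools:
            raise RuntimeError("Storage source conflict with pool: '%s'"
                               % self.pools[dirname])
        self.pools[dirname] = name
        if self.cb_add_new_pool:
            self.cb_add_new_pool(dirname, name)   # synchronous cache update

class UIConnection:
    """Stands in for vmmConnection, which keeps a cached pool list."""
    def __init__(self, backend):
        self.cached_pools = {}
        backend.cb_add_new_pool = self.add_new_pool

    def add_new_pool(self, dirname, name):
        # With the callback wired up, the cache no longer lags behind libvirt.
        self.cached_pools[dirname] = name

backend = Backend()
ui = UIConnection(backend)
backend.define_pool("/var/lib/libvirt/qemu/nvram", "nvram")
assert ui.cached_pools["/var/lib/libvirt/qemu/nvram"] == "nvram"

The patch in comment 7 below shows the real hook: vmmConnection assigns its add_new_pool handler to self._backend.cb_add_new_pool.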

Comment 6 zhoujunqin 2017-05-26 06:06:26 UTC
Trying to verify this bug with a new build:
virt-manager-1.4.1-5.el7.noarch
libvirt-3.2.0-6.virtcov.el7.x86_64
qemu-kvm-rhev-2.9.0-6.el7.x86_64

Steps:
1. Prepare a uefi guest on virt-manager with configuration:
# virsh dumpxml rhel7.24ovmf
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.4.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/rhel7.24ovmf_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/rhel7.24ovmf.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>

2. No pool "nvram" exists.
# virsh pool-list --all  |grep nvram

3. Launch virt-manager -> select the guest -> right-click -> Clone -> the "Clone virtual machine" dialog pops up -> click the "Clone" button.
New guest name: rhel7.24ovmf-clone

4. After the clone process finishes, check:
4.1 Check the new guest's XML configuration for nvram.
# virsh dumpxml rhel7.24ovmf-clone
...
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.4.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/rhel7.24ovmf-clone_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
..

Result: 
New nvram file generated while cloning: rhel7.24ovmf-clone_VARS.fd

4.2 Start new cloned guest: rhel7.24ovmf-clone
Result: 
Guest starts successfully.

4.3 Check that a new dir pool "nvram" has been created.
# virsh pool-list --all |grep nvram
 nvram                active     yes       


# virsh pool-dumpxml nvram
<pool type='dir'>
  <name>nvram</name>
  <uuid>dd6a50eb-f112-4018-892a-12b5305d2d83</uuid>
  <capacity unit='bytes'>268304384000</capacity>
  <allocation unit='bytes'>41883357184</allocation>
  <available unit='bytes'>226421026816</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/qemu/nvram</path>
    <permissions>
      <mode>0755</mode>
      <owner>107</owner>
      <group>107</group>
      <label>system_u:object_r:qemu_var_run_t:s0</label>
    </permissions>
  </target>
</pool>


# virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------
 rhel7.24ovmf-clone_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.24ovmf-clone_VARS.fd
 rhel7.24ovmf_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.24ovmf_VARS.fd


5. Delete new cloned guest: rhel7.24ovmf-clone, then refresh pool "nvram".
Result: 
Guest is deleted successfully, and the volume "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd" is deleted along with it.


6. Start original guest "rhel7.24ovmf" again.
Result: 
Guest "rhel7.24ovmf" successfully.

7. Stop the original guest "rhel7.3-uefi", delete the nvram file "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd" manually, then start it again.
 # virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------

Result: 
I. Guest starts successfully.
II. A new file "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd" is generated while the guest starts.

# virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------
 rhel7.3-uefi_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd

8. Stop original guest "rhel7.24ovmf", then stop&delete pool "nvram", clone again.

Result:

a. The clone operation starts without error and the new guest is cloned successfully.
b. A new pool 'nvram' is generated automatically.
c. Both the original and the new guest start successfully.

So, moving this bug from ON_QA to VERIFIED.

Comment 7 Cole Robinson 2017-07-19 15:38:50 UTC
The fix for this causes some issues. I've opened a separate bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1472894

I think we should revert this patch and fix this issue in another way.

I can easily reproduce the issue mentioned here by applying this patch:

diff --git a/virtManager/connection.py b/virtManager/connection.py
index 04a084c6..c2075a7b 100644
--- a/virtManager/connection.py
+++ b/virtManager/connection.py
@@ -287,10 +287,12 @@ class vmmConnection(vmmGObject):
         self._backend.cb_fetch_all_vols = fetch_all_vols
 
         def add_new_pool(obj, key):
+            return
             self._new_object_cb(vmmStoragePool(self, obj, key), False, True)
         self._backend.cb_add_new_pool = add_new_pool
 
         def clear_cache(pools=False):
+            return
             if not pools:
                 return
 

* Remove and undefine the 'nvram' pool
* Start virt-manager
* Clone a UEFI guest
* The clone will fail with RuntimeError: Could not define storage pool: operation failed: Storage source conflict with pool: 'nvram'

However, we can sidestep this issue with the following patch:

diff --git a/virtManager/clone.py b/virtManager/clone.py
index 4728d326..8b8ce60d 100644
--- a/virtManager/clone.py
+++ b/virtManager/clone.py
@@ -852,7 +852,6 @@ class vmmCloneVM(vmmGObjectUI):
                 if poolname not in refresh_pools:
                     refresh_pools.append(poolname)
 
-            self.clone_design.setup()
             self.clone_design.start_duplicate(meter)
 
             for poolname in refresh_pools:

.setup() just redoes some of the bits we already ran in the validate() function, which runs right before this, so it's redundant AFAICT. Removing this setup() call avoids the issue here.

Comment 8 Cole Robinson 2017-07-20 21:58:39 UTC
That cloner change is upstream now:

commit d3074141c8b9186c7881d4e61ce6795b935ec08b
Author: Cole Robinson <crobinso>
Date:   Thu Jul 20 17:18:14 2017 -0400

    cloner: Remove redundant setup() method
    
    The functional callers use the individual setup methods, let's drop the
    helper function and adjust the test suite

Comment 9 errata-xmlrpc 2017-08-01 21:04:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2072

