Bug 1243335

Summary: copy and rename nvram file while cloning the whole guest
Product: Red Hat Enterprise Linux 7
Component: virt-manager
Version: 7.2
Reporter: zhoujunqin <juzhou>
Assignee: Pavel Hrdina <phrdina>
QA Contact: Virtualization Bugs <virt-bugs>
CC: abologna, eric.auger, lersek, mzhan, phrdina, sherold, tzheng, xiaodwan
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Keywords: FutureFeature
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: virt-manager-1.4.1-1.el7
Doc Type: Enhancement
Type: Bug
Last Closed: 2017-08-01 21:02:03 UTC
Bug Blocks: 1212027, 1288337

Attachments: virt-manager debug log

Description zhoujunqin 2015-07-15 09:00:45 UTC
Created attachment 1052274 [details]
virt-manager debug log

Description of problem:
virt-manager does not create a new nvram file for the guest when cloning.

Version-Release number of selected component (if applicable):
virt-manager-1.2.1-2.el7.noarch
virt-install-1.2.1-2.el7.noarch
libvirt-1.2.17-2.el7.x86_64
qemu-kvm-rhev-2.3.0-9.el7.x86_64
OVMF-20150414-2.gitc9e5618.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Launch virt-manager, then create a new guest using UEFI firmware.
   Guest name is ovmf.

2. After the guest finishes installation, shut down the guest and clone it.
   (choose guest-->right click-->Clone-->"Clone virtual machine" pop up-->click clone button)
   The new guest name is ovmf-clone.

3. When the clone finishes, delete the new guest "ovmf-clone".
   (choose guest "ovmf-clone"-->right click-->Delete)

4. Start guest "ovmf" again.

Actual results:
After step 4, guest "ovmf" fails to start with the error:

Error starting domain: unable to set user and group to '107:107' on '/var/lib/libvirt/qemu/nvram/ovmf_VARS.fd': No such file or directory

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 125, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1433, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1029, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: unable to set user and group to '107:107' on '/var/lib/libvirt/qemu/nvram/ovmf_VARS.fd': No such file or directory


Expected results:
After deleting the cloned guest "ovmf-clone", the original guest "ovmf" can still boot successfully.

Additional info:
I guess the root cause of this issue is that cloning "ovmf-clone" does not generate a new nvram file such as /var/lib/libvirt/qemu/nvram/ovmf-clone_VARS.fd; the guests "ovmf-clone" and "ovmf" share the same file "/var/lib/libvirt/qemu/nvram/ovmf_VARS.fd", so when guest "ovmf-clone" is deleted, this file is deleted along with it.
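
Before deleting the clone, both domains can be checked for the same <nvram> path; afterwards, the original guest can be made bootable again by recreating the VARS file from the firmware template. This is only a hedged illustration: the template path is an assumption based on the default RHEL OVMF package layout, and restoring from it loses the guest's saved UEFI variables.

# virsh dumpxml ovmf | grep '<nvram>'
# virsh dumpxml ovmf-clone | grep '<nvram>'
# cp /usr/share/OVMF/OVMF_VARS.fd /var/lib/libvirt/qemu/nvram/ovmf_VARS.fd
# chown 107:107 /var/lib/libvirt/qemu/nvram/ovmf_VARS.fd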

Comment 4 XiaoQing 2015-07-30 10:15:14 UTC
Misoperation.
Sorry.

Comment 6 Andrea Bolognani 2016-05-25 12:30:21 UTC
Unlike x86_64 guests, aarch64 guests always boot using UEFI.
So this bug will affect aarch64 in a big way.

Comment 8 Pavel Hrdina 2017-03-07 12:34:47 UTC
Upstream commit:

commit 5e2b63c1ffa453f8664a2993ee6d8bec18294d50
Author: Pavel Hrdina <phrdina>
Date:   Mon Mar 6 09:43:10 2017 +0100

    virt-clone: add support to clone nvram VARS
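
For reference, a command-line equivalent of the GUI clone after this change would be something like the sketch below; --original, --name and --auto-clone are standard virt-clone options, and the nvram VARS file is expected to be copied and renamed automatically when the source domain defines an <nvram> element (this is a hedged illustration, not a statement of the exact implementation).

# virt-clone --original ovmf --name ovmf-clone --auto-clone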

Comment 9 Cole Robinson 2017-03-07 16:02:14 UTC
*** Bug 1372313 has been marked as a duplicate of this bug. ***

Comment 11 zhoujunqin 2017-03-17 09:43:01 UTC
Trying to verify this bug with the new builds:
virt-install-1.4.1-1.el7.noarch
virt-manager-1.4.1-1.el7.noarch
virt-manager-common-1.4.1-1.el7.noarch
libvirt-3.1.0-2.el7.x86_64

Steps:
1. Prepare a UEFI guest in virt-manager with the following configuration:

# virsh dumpxml rhel7.3-uefi
  <domain type='kvm'>
  <name>rhel7.3-uefi</name>
...
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.4.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>

...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/rhel7.3-2.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
...

2. Confirm that no pool "nvram" exists yet.
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 boot-scratch         active     yes       
 default              active     yes       
 Downloads            active     yes       
 root                 active     yes  

3. Launch virt-manager ->select guest->right click->Clone-->"Clone virtual machine" pop up-->click "Clone" button.
New guest name: rhel7.3-uefi-clone

4. After the clone process finishes, check:
4.1 Check the new guest's XML configuration for nvram.
# virsh dumpxml rhel7.3-uefi-clone 
...
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.4.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>

Result: 
New nvram file generated while cloning: rhel7.3-uefi-clone_VARS.fd
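
For completeness, the new VARS file can also be confirmed directly on disk; the directory below is taken from the <nvram> path in the domain XML above:

# ls -l /var/lib/libvirt/qemu/nvram/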

4.2 Start new cloned guest: rhel7.3-uefi-clone
Result: 
Guest starts successfully.

4.3 Check the newly created dir pool "nvram".
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 boot-scratch         active     yes       
 default              active     yes       
 Downloads            active     yes       
 nvram                active     yes   ->newly added      
 root                 active     yes  

# virsh pool-dumpxml nvram
<pool type='dir'>
  <name>nvram</name>
  <uuid>86ed9e85-f321-4586-8c59-d1581c8e2595</uuid>
  <capacity unit='bytes'>211243999232</capacity>
  <allocation unit='bytes'>87902429184</allocation>
  <available unit='bytes'>123341570048</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/qemu/nvram</path>
    <permissions>
      <mode>0755</mode>
      <owner>107</owner>
      <group>107</group>
      <label>system_u:object_r:qemu_var_run_t:s0</label>
    </permissions>
  </target>
</pool>

# virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------
 rhel7.3-uefi-clone_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd
 rhel7.3-uefi_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd

5. Delete the newly cloned guest "rhel7.3-uefi-clone", then refresh pool "nvram".
Result: 
The guest is deleted successfully, and the volume "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd" is deleted along with it.

6. Start original guest "rhel7.3-uefi" again.
Result: 
Guest "rhel7.3-uefi" successfully.

7. Stop the original guest "rhel7.3-uefi", delete the nvram file "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd" manually, then start the guest again.
 # virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------

Result: 
I. Guest starts successfully.
II. A new file "/var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd" is generated when the guest starts.

# virsh vol-list --pool nvram
 Name                 Path                                    
------------------------------------------------------------------------------
 rhel7.3-uefi_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd
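
The regeneration seen in step 7 presumably comes from libvirt's loader-to-template mapping for nvram files. A hedged sketch of the corresponding setting in /etc/libvirt/qemu.conf is shown below; the exact loader/template pairs depend on the installed OVMF build, so the paths are an assumption:

nvram = [
   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd"
]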

8. Stop the original guest "rhel7.3-uefi", then stop and delete pool "nvram", and clone again.

Result:
I. An error window pops up:
Error creating virtual machine clone 'rhel7.3-uefi-clone': Could not define storage pool: operation failed: Storage source conflict with pool: 'nvram'
II. Then I checked that the pool "nvram" still exists even though the clone process did not start successfully.
# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 boot-scratch         active     yes       
 default              active     yes       
 Downloads            active     yes       
 nvram                active     yes       
 root                 active     yes   

Hi Pavel, 
Please take a look at the verification steps above; I have two questions to confirm with you.
I. If the nvram file can be generated automatically when the guest starts, it seems there is no point in generating a new file for the cloned guest.
II. At which point is the pool "nvram" created? Given the error reported in step 8, is it correct that the pool is still created? Thanks.

Comment 12 Pavel Hrdina 2017-03-22 13:10:13 UTC
(In reply to zhoujunqin from comment #11)
> [...]
> I. If the nvram file can be generated automatically when the guest starts,
> it seems there is no point in generating a new file for the cloned guest.

Yes, the nvram file can be generated if it doesn't exist; however, it also stores data used by UEFI (such as the boot entries), and it is desirable to clone that data along with the rest of the guest.
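
As a rough manual equivalent of the new behaviour (it may not match the implementation exactly; paths are taken from the verification steps above), cloning the VARS file by hand would look like the following, followed by pointing the clone's <nvram> element at the new path:

# cp /var/lib/libvirt/qemu/nvram/rhel7.3-uefi_VARS.fd /var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd
# chown 107:107 /var/lib/libvirt/qemu/nvram/rhel7.3-uefi-clone_VARS.fd

This preserves the original guest's UEFI variables (boot entries, Secure Boot state) instead of starting the clone from a blank template.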

> II. At which point is the pool "nvram" created? Given the error reported in
> step 8, is it correct that the pool is still created?

This is a different issue: virt-manager doesn't refresh its storage pool list and tries to create that storage pool a second time under a different name. A new bug should be created for this issue.

Comment 13 zhoujunqin 2017-03-23 03:26:31 UTC
> > I. If the nvram file can be generated automatically when the guest starts,
> > it seems there is no point in generating a new file for the cloned guest.
> 
> Yes, the nvram file can be generated if it doesn't exist; however, it also
> stores data used by UEFI (such as the boot entries), and it is desirable to
> clone that data along with the rest of the guest.

OK, got it, so I think the bug itself has been fixed.

> > II. At which point is the pool "nvram" created? Given the error reported
> > in step 8, is it correct that the pool is still created?
> 
> This is a different issue: virt-manager doesn't refresh its storage pool
> list and tries to create that storage pool a second time under a different
> name. A new bug should be created for this issue.

Filed a new Bug 1435064 to track it, thanks.

Based on the above comments, I move this bug from ON_QA to VERIFIED.

Comment 14 errata-xmlrpc 2017-08-01 21:02:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2072