Bug 1793263

Summary: Libvirtd will not pre-create images on the target host during migration
Product: Red Hat Enterprise Linux Advanced Virtualization
Component: libvirt
Version: 8.2
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: gaojianan <jgao>
Assignee: Peter Krempa <pkrempa>
QA Contact: gaojianan <jgao>
CC: dyuan, hhan, jdenemar, jgao, lmen, xuzhang, yafu
Target Milestone: rc
Target Release: 8.0
Hardware: x86_64
OS: Linux
Fixed In Version: libvirt-6.0.0-4.el8
Last Closed: 2020-05-05 09:55:54 UTC
Type: Bug
Attachments: Libvirtd log during migration

Description gaojianan 2020-01-21 02:50:00 UTC
Created attachment 1654117 [details]
Libvirtd log during migration

Description of problem:
Libvirtd does not pre-create the disk images on the target host during storage migration (--copy-storage-all), so the migration fails.

Version-Release number of selected component (if applicable):
libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64
qemu-kvm-4.2.0-6.module+el8.2.0+5453+31b2b136.x86_64

How reproducible:
100%

Steps to Reproduce:
1. The same storage pool is defined on both the source and target hosts:
# virsh pool-list
 Name        State    Autostart
---------------------------------
 default     active   no

# virsh pool-dumpxml default 
<pool type='dir'>
  <name>default</name>
  <uuid>7d03d4d4-e5f6-4939-9dd2-7485f6615cb1</uuid>
  <capacity unit='bytes'>53660876800</capacity>
  <allocation unit='bytes'>29820514304</allocation>
  <available unit='bytes'>23840362496</available>
  <source>
  </source>
  <target>
    <path>/var/lib/libvirt/images</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:virt_image_t:s0</label>
    </permissions>
  </target>
</pool>

2. Show the disk image of the guest:
# virsh domblklist demo
 Target   Source
-----------------------------------------------------------------------------
 vdb      /var/lib/libvirt/images/RHEL-8.2.0-20191219.0-x86_64-ovmf.qcow2.2

3. Perform the storage migration:
# virsh migrate demo --live --verbose qemu+ssh://10.16.200.75/system  --copy-storage-all
root@10.16.200.75's password: 
error: Cannot access storage file '/var/lib/libvirt/images/RHEL-8.2.0-20191219.0-x86_64-ovmf.qcow2.2': No such file or directory
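As a workaround until the fix is available, the destination image can be pre-created manually on the target host before migrating. A minimal sketch, assuming a qcow2 image; the virtual size (10G here is a placeholder) must match the source image as reported by 'qemu-img info' on the source host:

(on the source host, note the format and virtual size)
# qemu-img info /var/lib/libvirt/images/RHEL-8.2.0-20191219.0-x86_64-ovmf.qcow2.2

(on the target host, create a matching empty image; 10G is a placeholder)
# qemu-img create -f qcow2 /var/lib/libvirt/images/RHEL-8.2.0-20191219.0-x86_64-ovmf.qcow2.2 10G

With the file present on the target, --copy-storage-all can then populate it during migration.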

Actual results:
As shown in step 3, the migration fails because the storage file does not exist on the target host.

Expected results:
The migration succeeds, with libvirt pre-creating the image in the target pool.
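To confirm the pre-creation, the volume should appear in the target host's pool; a sketch of such a check, run on the destination (refreshing the pool so libvirt picks up the new file):

# virsh pool-refresh default
# virsh vol-list default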

Additional info:
The full libvirtd log from the migration is in the attachment.

Comment 1 Peter Krempa 2020-01-30 14:09:52 UTC
The issue is that, when gathering disk data at the destination, the migration code still queries the '-drive' frontend, which no longer exists in -blockdev configurations. The disk capacity needs to be supplied based on the node name instead.
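For illustration, with -blockdev the per-node data (including the image's virtual size) is available through QMP by node name rather than through the legacy 'query-block' frontend listing. A minimal sketch using the 'demo' guest from the report; 'query-named-block-nodes' is a standard QMP command, and filtering the reply is left to the reader:

# virsh qemu-monitor-command demo --pretty '{"execute": "query-named-block-nodes"}'

Each entry in the reply carries a "node-name" and an "image" section whose "virtual-size" corresponds to the capacity the migration cookie needs.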

Comment 3 Peter Krempa 2020-02-04 14:18:00 UTC
Fixed upstream by the following commits:

b9e87908db qemuMigrationCookieAddNBD: Fix filling of 'capacity' when blockdev is used
d409411213 qemuMigrationCookieAddNBD: Remove 'ret' variable and 'cleanup' label
45eefb2c78 qemuMigrationCookieAddNBD: Use virHashNew and automatic freeing of virHashTablePtr
464345e153 qemuMigrationCookieAddNBD: Move monitor call out of the loop
8efeeb59a6 qemuMigrationCookieAddNBD: Use glib memory allocators
3093822d1d qemuMigrationCookieNBD: Extract embedded struct
bdff9d4513 qemuMigrationCookieAddNBD: Exit early if there are no disks
6eab924daa Remove checking of return value of virHashNew
2a5ea0a0c1 conf: domain: Remove checking of return value of virHashCreateFull
50f7483a0d util: hash: Use g_new0 for allocating hash internals

Comment 6 gaojianan 2020-02-26 01:53:42 UTC
The verification steps are the same as in:
https://bugzilla.redhat.com/show_bug.cgi?id=1790733#c8

Verified version:
libvirt-6.0.0-5.virtcov.el8.x86_64
qemu-kvm-4.2.0-10.module+el8.2.0+5740+c3dff59e.x86_64
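With the fixed build, re-running step 3 from the description is expected to succeed; a sketch of the expected outcome (progress output illustrative):

# virsh migrate demo --live --verbose qemu+ssh://10.16.200.75/system --copy-storage-all
root@10.16.200.75's password: 
Migration: [100 %]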

Comment 8 errata-xmlrpc 2020-05-05 09:55:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017