Created attachment 1295699 [details]
vdsm.log

Description of problem:
Failed to import a guest whose disk is not listed in any storage pool from a KVM/Xen source in RHV 4.1.

Version-Release number of selected component (if applicable):
rhv: 4.1.3-0.1.el7
vdsm-4.19.21-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a guest whose disk is not listed in any storage pool:

# virsh dumpxml avocado-vt-vm1
....
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/root/RHEL-7.3-x86_64-latest.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
....

2. Try to import this guest into RHV 4.1 from the KVM host: open the Virtual Machines tab in RHV 4.1 -> click the Import button -> choose KVM as the source -> enter the URL qemu+tcp://ip/system and the username/password -> guests are loaded successfully -> select guest "avocado-vt-vm1" to import.

3. The import fails (see attached screenshot), with this error in vdsm.log:

....
2017-07-10 11:37:03,391+0800 ERROR (jsonrpc/7) [root] Error getting disk size (v2v:1089)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in _get_disk_info
    vol = conn.storageVolLookupByPath(disk['alias'])
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4555, in storageVolLookupByPath
    if ret is None: raise libvirtError('virStorageVolLookupByPath() failed', conn=self)
libvirtError: Storage volume not found: no storage vol with matching path '/root/RHEL-7.3-x86_64-latest.qcow2'
2017-07-10 11:37:03,393+0800 WARN (jsonrpc/7) [root] Cannot add VM avocado-vt-vm1 due to disk storage error (v2v:1020)
....
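The traceback shows vdsm calling conn.storageVolLookupByPath() on the disk path and failing the whole import when no pool contains it. A minimal sketch of the obvious mitigation (this is a hypothetical illustration, not the actual vdsm patch; the function name and return shape are assumptions):

```python
import os

def get_disk_info(conn, path):
    """Look up capacity/allocation for a guest disk image.

    Hypothetical sketch: try the libvirt storage-pool volume lookup
    first, and if the image is not part of any pool -- the situation
    that triggers this bug -- fall back to stat'ing the file instead
    of aborting the whole import.
    """
    try:
        vol = conn.storageVolLookupByPath(path)
        # virStorageVolInfo is a (type, capacity, allocation) tuple
        _, capacity, allocation = vol.info()
        return {'capacity': capacity, 'allocation': allocation}
    except Exception:
        # Image lives outside any storage pool; use the filesystem.
        st = os.stat(path)
        return {'capacity': st.st_size,
                'allocation': st.st_blocks * 512}
```

Note that a plain stat only works for local file-backed disks reachable from the host running vdsm; block devices and remote sources would need different handling.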
Actual results:
As described above.

Expected results:
A guest whose disk is not listed in any storage pool can be imported from a KVM/Xen source in RHV 4.1 successfully.

Additional info:
1. Converting this guest with virt-v2v on a v2v conversion server works: after the conversion finishes, the guest can be imported from the export domain to the data domain on RHV 4.1, so the problem is in vdsm.

# virt-v2v avocado-vt-vm1 -o rhv -os 10.73.131.93:/home/nfs_export
[   0.0] Opening the source -i libvirt avocado-vt-vm1
[   0.0] Creating an overlay to protect the source from being modified
[   0.4] Initializing the target -o rhv -os 10.73.131.93:/home/nfs_export
[   0.7] Opening the overlay
[   6.1] Inspecting the overlay
[  13.8] Checking for sufficient free disk space in the guest
[  13.8] Estimating space required on target for each disk
[  13.8] Converting Red Hat Enterprise Linux Server 7.3 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  52.2] Mapping filesystem data to avoid copying unused and blank areas
[  52.4] Closing the overlay
[  52.7] Checking if the guest needs BIOS or UEFI to boot
[  52.7] Assigning disks to buses
[  52.7] Copying disk 1/1 to /tmp/v2v.Zzc4KD/c9cfeba7-73f8-428a-aa77-9a2a1acf0063/images/c8eb039e-3007-4e08-9580-c49da8b73d55/f76d16ea-5e66-4987-a496-8f378b127986 (qcow2)
    (100.00/100%)
[ 152.4] Creating output metadata
[ 152.6] Finishing off
Created attachment 1295700 [details] screenshot
*** This bug has been marked as a duplicate of bug 1469077 ***
Reopening. This problem is different from bug 1469077. We effectively rely on the disk image being in some storage pool, and that is wrong. Libvirt does not mandate this; in fact, using libvirt without any storage pools defined at all is a perfectly valid use case.
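Since libvirt does not require pools, the disk paths are always available directly in the domain XML (as in the dumpxml output in comment 0), with no pool lookup needed. A small illustrative sketch of extracting them (the function name is an assumption, not vdsm code; the sample XML is abridged from comment 0):

```python
import xml.etree.ElementTree as ET

# Abridged from the virsh dumpxml output in comment 0.
DOMAIN_XML = """<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/root/RHEL-7.3-x86_64-latest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>"""

def disk_paths(domain_xml):
    """Collect file paths of all file-backed disks from a domain XML,
    without consulting any storage pool."""
    root = ET.fromstring(domain_xml)
    paths = []
    for disk in root.findall("./devices/disk[@device='disk']"):
        src = disk.find('source')
        if src is not None and 'file' in src.attrib:
            paths.append(src.attrib['file'])
    return paths
```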
(In reply to mxie from comment #0) > Expected results: > Could import guest whose disk is not listed in storage pool from kvm/xen > source at rhv4.1 successfully

Note there is no way to do that on Xen. We still produce an error when such a VM is imported from Xen.
Verification build:
ovirt-engine-4.2.0-0.0.master.20171029154613.git19686f3.el7.centos
vdsm-4.20.4-12.git11d6d3d.el7.centos.x86_64
libvirt-client-3.2.0-14.el7_4.3.x86_64
qemu-kvm-common-ev-2.9.0-16.el7_4.8.1.x86_64
virt-v2v-1.36.3-6.el7_4.3.x86_64

Verification scenario:

KVM:
1. Create a KVM VM.
2. Create a new KVM storage pool, but do not create it under /var/lib/libvirt/images.
3. Copy the VM image from step 1 to the folder created in step 2.
4. Run the KVM VM and verify it runs properly.
5. Power off the KVM VM and import it.
6. Verify the VM is imported successfully. Run the VM and verify it is running.

Xen:
1. Repeat steps 1-4 from the KVM scenario.
2. Try to import the VM.
3. The import fails; check vdsm.log and verify the following ERROR is logged:
ERROR (jsonrpc/1) [root] Disk has to be in storage pool (v2v:1159)
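Step 2 of the verification scenario (a pool outside /var/lib/libvirt/images) can be set up with virsh roughly as follows. This is a sketch for reference only; the pool name, target path, and image name are placeholders, not values from the verification run, and the commands need a running libvirtd:

```shell
# Define a directory-backed pool outside the default images location
# (pool name and path are hypothetical; adjust to your environment).
virsh pool-define-as v2vtest dir --target /home/v2vtest-images
virsh pool-build v2vtest
virsh pool-start v2vtest
virsh pool-autostart v2vtest

# Copy the guest image into the new pool and make libvirt see it.
cp /var/lib/libvirt/images/guest.qcow2 /home/v2vtest-images/
virsh pool-refresh v2vtest
virsh vol-list v2vtest
```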
This bugzilla is included in the oVirt 4.2.0 release, published on Dec 20th 2017. Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.