Bug 1176598
| Field | Value |
|---|---|
| Summary | virt-v2v -o vdsm write ovf to specify domain |
| Product | Red Hat Enterprise Linux 7 |
| Reporter | Shahar Havivi <shavivi> |
| Component | libguestfs |
| Assignee | Shahar Havivi <shavivi> |
| Status | CLOSED ERRATA |
| QA Contact | Virtualization Bugs <virt-bugs> |
| Severity | unspecified |
| Priority | medium |
| Version | 7.1 |
| CC | ahadas, juzhou, mbooth, michal.skrivanek, mzhan, ptoscano, rjones, shavivi, sherold, tzheng, xiaodwan |
| Target Milestone | rc |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | V2V |
| Fixed In Version | libguestfs-1.28.1-1.24.el7 |
| Doc Type | Bug Fix |
| Cloned As | 1250014 (view as bug list) |
| Last Closed | 2015-11-19 06:58:58 UTC |
| Type | Bug |
| Bug Blocks | 1154205, 1172230, 1205796, 1250014, 1250960 |
| Attachments | engine.log (attachment 1024512) |
Description

Shahar Havivi
2014-12-22 14:02:45 UTC

---

Richard W.M. Jones (comment #1):

What does the data domain look like in that case? I looked at one on my
machine and it used <domain-id>/master/vms/<vmid>.

---

(In reply to Richard W.M. Jones from comment #1)
> What does the data domain look like in that case? I looked at
> one on my machine and it used <domain-id>/master/vms/<vmid>.

That may be an old domain; I think the new format no longer has the vms
directory (it does keep an empty vms directory for backward compatibility).

---

This has been committed upstream as 14d11916faf68afb672d1626348931f2b90afd08.

---

Tried to verify this bug on the new build:

    libguestfs-1.28.1-1.33.el7.x86_64
    libvirt-1.2.15-2.el7.x86_64
    virt-v2v-1.28.1-1.33.el7.x86_64

Steps:

1. Check whether the "--vdsm-ovf-output" option has been added:

1.1 Check the virt-v2v manual page:

```
# man virt-v2v
...
--vdsm-ovf-output
...
· the OVF output directory (default current directory) (--vdsm-ovf-output).
```

1.2 Check the virt-v2v help output:

```
# virt-v2v --help
--vdsm-ovf-output    Output OVF file
```

Result: the new option has been added.

2. Use virt-v2v to convert a KVM guest to RHEV.

2.1 Check the RHEV Data Center path and mount it locally. (I mounted the NFS
path locally because of bug 1176591; I'm not sure whether my steps are right,
please also help me check, thanks.)

Path: 10.66.90.115:/vol/v2v_auto/nfs_export

```
# mount 10.66.90.115:/vol/v2v_auto /mnt
```

2.2 Create two new paths:

    images/12345678-1234-1234-1234-123456789001
    master/vms/12345678-1234-1234-1234-123456789003

2.3 Run the virt-v2v command:

```
# virt-v2v -o vdsm -of raw \
    --vdsm-image-uuid 12345678-1234-1234-1234-123456789001 \
    --vdsm-vol-uuid 12345678-1234-1234-1234-123456789002 \
    --vdsm-vm-uuid 12345678-1234-1234-1234-123456789003 \
    --vdsm-ovf-output /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/master/vms/12345678-1234-1234-1234-123456789003 \
    -os /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe \
    rhel6.6-juzhou-smartcard
[   0.0] Opening the source -i libvirt rhel6.6-juzhou-smartcard
[   0.0] Creating an overlay to protect the source from being modified
[   0.0] Opening the overlay
[   2.0] Initializing the target -o vdsm -os /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe --vdsm-image-uuid 12345678-1234-1234-1234-123456789001 --vdsm-vol-uuid 12345678-1234-1234-1234-123456789002 --vdsm-vm-uuid 12345678-1234-1234-1234-123456789003 --vdsm-ovf-output /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/master/vms/12345678-1234-1234-1234-123456789003
[   2.0] Inspecting the overlay
[  11.0] Checking for sufficient free disk space in the guest
[  11.0] Estimating space required on target for each disk
[  11.0] Converting Red Hat Enterprise Linux Server release 6.6 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  40.0] Mapping filesystem data to avoid copying unused and blank areas
[  41.0] Closing the overlay
[  41.0] Copying disk 1/1 to /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/images/12345678-1234-1234-1234-123456789001/12345678-1234-1234-1234-123456789002 (raw) (100.00/100%)
[ 105.0] Creating output metadata
[ 105.0] Finishing off
```

Result: the conversion finished without error.

```
# ll /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/images/12345678-1234-1234-1234-123456789001/
total 3453956
-rw-r--r--. 1 nobody nobody 7516192768 May 12 10:41 12345678-1234-1234-1234-123456789002
-rw-r--r--. 1 nobody nobody        295 May 12 10:39 12345678-1234-1234-1234-123456789002.meta
# ll /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/master/vms/12345678-1234-1234-1234-123456789003/
total 8
-rw-r--r--. 1 nobody nobody 4640 May 12 10:41 12345678-1234-1234-1234-123456789003.ovf
```
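As a quick sanity check of the metadata before importing, one can dump the volume's .meta file and look for the volume UUID in the generated OVF. This is a sketch; the assumption is that the OVF references the disk it describes by its volume UUID:

```
# Dump the volume metadata virt-v2v wrote next to the copied disk:
cat /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/images/12345678-1234-1234-1234-123456789001/12345678-1234-1234-1234-123456789002.meta

# Assumption: a consistent OVF mentions the volume UUID of its disk:
grep 12345678-1234-1234-1234-123456789002 \
    /mnt/nfs_export/8a94984a-a1e2-465d-83e7-a2f8165aaffe/master/vms/12345678-1234-1234-1234-123456789003/12345678-1234-1234-1234-123456789003.ovf
```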
2.4 Log in to rhevm and try to import the image.

Result: the import failed with the error:

    Cannot import VM. VM's Image does not exist.

I will attach engine.log. So rjones, please help me have a look, thanks.

Created attachment 1024512 [details]: engine.log

---
Your verification is correct, up to this point:
> 2.4 Login rhevm and try to import images.
>
> Result: Failed to import for error:
> Cannot import VM. VM's Image does not exist.
> I will attach engine.log.
Normally end users would use '-o rhev' to import an image, which then
appears in RHEV's Export Storage Domain (ESD). However, this bug is about
'-o vdsm', which is a private mode used by VDSM to import directly into
the RHEV Data Domain (not going via the ESD).
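For comparison, a typical end-user conversion through the ESD looks something like this (a sketch: the export-domain path /vol/v2v_auto/export_domain and the rhevm network name are hypothetical; the NFS server and guest name are reused from the verification steps above):

```
# Convert straight into an Export Storage Domain; RHEV-M then imports
# from the ESD and performs the database bookkeeping itself:
virt-v2v -i libvirt rhel6.6-juzhou-smartcard \
    -o rhev -os 10.66.90.115:/vol/v2v_auto/export_domain \
    --network rhevm
```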
When VDSM uses '-o vdsm' it also performs some updates to RHEV's database
so that the imported image just appears.
It's not possible for us to emulate those database updates. Without them,
the imported VM is not visible anywhere.
So your verification is correct, as far as it is possible for us to test
it, and I think you can mark the bug as VERIFIED.
Hi rjones, thanks for your quick reply. According to Comment 6 and Comment 8,
moving this bug from ON_QA to VERIFIED.

---

Since the problem described in this bug report should be resolved in a recent
advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow
the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2183.html