| Summary: | Importing VMs from VMware ova file fails with block storage domain and thin provisioned disk |
|---|---|
| Product: | Red Hat Enterprise Virtualization Manager |
| Reporter: | nijin ashok <nashok> |
| Component: | vdsm |
| Assignee: | Shahar Havivi <shavivi> |
| Status: | CLOSED ERRATA |
| QA Contact: | Nisim Simsolo <nsimsolo> |
| Severity: | high |
| Docs Contact: | |
| Priority: | urgent |
| Version: | 3.6.9 |
| CC: | bazulay, gchakkar, gklein, lsurette, mavital, mgoldboi, michal.skrivanek, nsimsolo, shavivi, srevivo, tjelinek, trichard, ycui, ykaul |
| Target Milestone: | ovirt-4.1.0-beta |
| Keywords: | ZStream |
| Target Release: | --- |
| Hardware: | All |
| OS: | Linux |
| Whiteboard: | |
| Fixed In Version: | |
| Doc Type: | Bug Fix |
| Doc Text: | Previously, Red Hat Virtualization could not import an OVA from VMware when the destination storage domain used block storage. Allocating a smaller size for the VM disk by reading the 'size' attribute of the OVF XML caused the import to fail for no good reason. Now, the import operation uses the 'physical size' attribute for disk allocation, so an OVA can be imported to a block storage domain. |
| Story Points: | --- |
| Clone Of: | |
| Clones: | 1409478 (view as bug list) |
| Environment: | |
| Last Closed: | 2017-04-25 00:53:06 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | Virt |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Bug Depends On: | 1332019 |
| Bug Blocks: | 1409478 |
| Attachments: | |
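The Doc Text row above summarizes the eventual fix: allocation switched from the OVF 'size' attribute to the 'physical size' attribute. A minimal sketch of that decision follows; the XML snippet, the attribute names, and the numbers are simplified stand-ins for illustration, not the real oVirt/VMware OVF schema or the actual engine code.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified Disk element. Real OVF uses namespaced
# attributes with different names; the values here are illustrative.
OVF = '<Disk size="312" physical_size="775" capacity_unit="MiB"/>'

def allocation_mib(disk_xml):
    """Pick the allocation size for a thin-provisioned disk on block
    storage: prefer the physical size (the space the converted image
    will actually occupy) over the sparse 'size' of the source, which
    underestimates the converted image and leads to ENOSPC."""
    disk = ET.fromstring(disk_xml)
    phys = disk.get("physical_size")
    return int(phys if phys is not None else disk.get("size"))

print(allocation_mib(OVF))
```

With only a `size` attribute present, the function falls back to it, matching the pre-fix behavior this bug reports.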
Created attachment 1207972 [details]
logs
This is problematic. As you have figured out yourself, the 'disk size' reported by qemu-img is not very informative. The size reported for the VMDK format tells you nothing: you cannot anticipate how large the resulting QCOW2 image will be; it can be a few hundred megabytes more, or it can be twice the size of the VMDK, depending on the disk content. In other words, we won't know the resulting size of the QCOW2 image until we actually try to do the conversion.

This is blocked by bug 1332019. Currently the planned release where we can consume that fix is 4.2. Please prioritize the dependent bug if you need a resolution sooner than that.

Tomas, maybe you can use virDomainBlockInfo to get the physical size of the file on disk? If we can rely on that, we can close bug 1332019.

I've just checked, and virDomainGetBlockInfo is not implemented for the ESX driver.

Verification builds:
rhevm-4.1.0.2-0.2.el7
libvirt-client-2.0.0-10.el7_3.4.x86_64
vdsm-4.19.2-2.el7ev.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64
sanlock-3.4.0-1.el7.x86_64
virt-v2v-1.32.7-3.el7_3.2.x86_64

What was the outcome of this bug? Please provide doc text in the format of:
Cause:
Consequence:
Fix:
Result:
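The comment above notes that the converted QCOW2 size is unknowable before conversion, so the only safe up-front allocation is the virtual size plus QCOW2 metadata overhead, rounded to the block domain's extent granularity. A minimal sketch of such a worst-case bound; this is my illustration, not VDSM's actual allocation code, and the 128 MiB extent size is an assumption matching oVirt block storage domains.

```python
def qcow2_allocation_upper_bound(virtual_size, cluster_size=64 * 1024,
                                 extent_size=128 * 1024 * 1024):
    """Worst-case block allocation for a qcow2 image of the given
    virtual size: full data payload plus L2-table overhead (8 bytes
    per 64 KiB cluster) plus a generous header/L1 allowance, rounded
    up to the storage domain's extent size.

    Sketch only -- not VDSM's real policy; the extent size is an
    assumption for illustration.
    """
    l2_entries = -(-virtual_size // cluster_size)   # ceil division
    l2_bytes = l2_entries * 8                       # 8 bytes per L2 entry
    header_and_l1 = 2 * cluster_size                # generous allowance
    total = virtual_size + l2_bytes + header_and_l1
    return -(-total // extent_size) * extent_size   # round up to extent

# The 7.0G (7516192768-byte) disk from this report needs 57 extents,
# i.e. ~7.13 GiB -- far above the 384 MiB LV that was created.
print(qcow2_allocation_upper_bound(7516192768))
```

This is exactly why preallocating (LV sized to virtual size) works in this bug while sizing the LV from the VMDK "disk size" does not.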
Description of problem:

Directly importing VMs from a VMware OVA does not work if the storage domain is block-based and the selected disk type is thin provisioned. The import fails with "No space left on device" during qemu-img convert. We create the LV with a size equal to the "disk size" of the VMDK file; however, the size needed during the conversion is larger than this. The import works if we select a preallocated target disk (the LV is created with a size equal to the virtual size) or a file-based storage domain (there is no LV backing the target image).

The virtual size of the VMDK file is 7.0G and the disk size is 312M:

qemu-img info koutuk-disk1.vmdk
image: koutuk-disk1.vmdk
file format: vmdk
virtual size: 7.0G (7516192768 bytes)
disk size: 312M

The v2v process is:

/usr/bin/virt-v2v -i ova /tmp/test.ova -o vdsm -of qcow2 -oa sparse --vdsm-vm-uuid 7b504c6c-f0f7-44cf-8d8e-f6400ab309eb --vdsm-ovf-output /var/run/vdsm/v2v --machine-readable -os /rhev/data-center/00000001-0001-0001-0001-0000000002a6/6f6248a6-5f75-449d-8387-424368289b5f --vdsm-image-uuid c1afe108-f17a-460d-bdc4-60d8fb34dc8a --vdsm-vol-uuid fbb84abc-45eb-4c53-937c-c190882c7be7

The created LV:

lvs | grep "fbb84abc-45eb-4c53-937c-c190882c7be7"
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  fbb84abc-45eb-4c53-937c-c190882c7be7 6f6248a6-5f75-449d-8387-424368289b5f -wi-a----- 384.00m

From the strace:

12:51:11.417841 write(2, "qemu-img:", 9) = 9 <0.000026>
12:51:11.417887 write(2, " ", 1) = 1 <0.000007>
12:51:11.417914 write(2, "error while writing sector 2076672: No space left on device", 59) = 59 <0.000007>
12:51:11.501417 write(2, "virt-v2v: error: qemu-img command failed, see earlier errors\n", 61) = 61 <0.000136>

If I manually run the same v2v command, it succeeds, as it creates a new image without being linked to the LV device.
[ 21.0] Initializing the target -o vdsm -os /rhev/data-center/00000001-0001-0001-0001-0000000002a6/6f6248a6-5f75-449d-8387-424368289b5f --vdsm-image-uuid c1afe108-f17a-460d-bdc4-60d8fb34dc8a --vdsm-vol-uuid fbb84abc-45eb-4c53-937c-c190882c7be7 --vdsm-vm-uuid 7b504c6c-f0f7-44cf-8d8e-f6400ab309eb --vdsm-ovf-output /var/run/vdsm/v2v
[ 90.0] Copying disk 1/1 to /rhev/data-center/00000001-0001-0001-0001-0000000002a6/6f6248a6-5f75-449d-8387-424368289b5f/images/c1afe108-f17a-460d-bdc4-60d8fb34dc8a/fbb84abc-45eb-4c53-937c-c190882c7be7 (qcow2) (100.00/100%)
[ 99.0] Creating output metadata
[ 99.0] Finishing off

The disk size of the image created by the manual run was 775M:

qemu-img info /rhev/data-center/00000001-0001-0001-0001-0000000002a6/6f6248a6-5f75-449d-8387-424368289b5f/images/c1afe108-f17a-460d-bdc4-60d8fb34dc8a/fbb84abc-45eb-4c53-937c-c190882c7be7
image: /rhev/data-center/00000001-0001-0001-0001-0000000002a6/6f6248a6-5f75-449d-8387-424368289b5f/images/c1afe108-f17a-460d-bdc4-60d8fb34dc8a/fbb84abc-45eb-4c53-937c-c190882c7be7
file format: qcow2
virtual size: 7.0G (7516192768 bytes)
disk size: 775M

We are incorrectly calculating the target image size while creating the target LV, which is what causes the issue.

I was not able to test this in RHV 4 because of bug 1378045.

Version-Release number of selected component (if applicable):
RHEV 3.6.9
vdsm-4.17.35-1.el7ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Import a VMware OVA disk from the RHEV GUI and select "thin provision" for the target disk.
2.
Import fails with the error:

Thread-24797::ERROR::2016-10-06 12:44:27,420::v2v::420::root::(_run) Job u'd10f14c5-1be8-4119-a27e-aea2683b115e' failed
Traceback (most recent call last):
  File "/usr/share/vdsm/v2v.py", line 415, in _run
    self._import()
  File "/usr/share/vdsm/v2v.py", line 444, in _import
    self._watch_process_output()
  File "/usr/share/vdsm/v2v.py", line 476, in _watch_process_output
    for event in parser.parse(self._proc.stdout):
  File "/usr/share/vdsm/v2v.py", line 613, in parse
    for chunk in self._iter_progress(stream):
  File "/usr/share/vdsm/v2v.py", line 634, in _iter_progress
    raise OutputParserError('copy-disk stream closed unexpectedly')

Actual results:
Importing a VMware OVA to RHEV does not work.

Expected results:
The import should work.

Additional info:
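The undersizing described in this report can be reproduced from the captured output alone: rounding the VMDK's 312M "disk size" up to 128 MiB extents yields exactly the 384.00m LV seen in the lvs output, which cannot hold the 775M qcow2 the conversion actually produces. A sketch of that arithmetic; the parsing helper and the 128 MiB extent size are my illustration, not vdsm's real code.

```python
import re

def parse_qemu_img_info(text):
    """Extract the virtual size (bytes) and the reported 'disk size'
    from human-readable `qemu-img info` output. Helper written for
    this illustration; vdsm's real parsing differs."""
    virtual = int(re.search(r"virtual size: \S+ \((\d+) bytes\)", text).group(1))
    m = re.search(r"disk size: (\d+(?:\.\d+)?)([KMGT])", text)
    units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
    disk = int(float(m.group(1)) * units[m.group(2)])
    return virtual, disk

# Output captured in this bug report.
INFO = """\
image: koutuk-disk1.vmdk
file format: vmdk
virtual size: 7.0G (7516192768 bytes)
disk size: 312M
"""

EXTENT = 128 * 1024**2                    # assumed oVirt block-domain extent
virtual, disk = parse_qemu_img_info(INFO)
lv_size = -(-disk // EXTENT) * EXTENT     # round 312M up to whole extents
assert lv_size == 384 * 1024**2           # the 384.00m LV from lvs
assert lv_size < 775 * 1024**2            # smaller than the converted qcow2
```

The second assertion is the bug in miniature: any allocation derived from the sparse source size can fall short of the converted image, so qemu-img convert hits ENOSPC.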