Bug 1746699
| Field | Value |
| --- | --- |
| Summary | Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid parameter: 'DiskType=1'" |
| Product | Red Hat Enterprise Virtualization Manager |
| Component | vdsm |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | urgent |
| Version | 4.3.4 |
| Target Milestone | ovirt-4.4.0 |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Unspecified |
| Keywords | Regression, ZStream |
| Reporter | mxie <mxie> |
| Assignee | shani <sleviim> |
| QA Contact | Nisim Simsolo <nsimsolo> |
| CC | aefrat, derez, emarcus, juzhou, lsurette, mtessun, mzhan, nsimsolo, nsoffer, paulds, pvilayat, rjones, sleviim, srevivo, tnisan, tzheng, xiaodwan, ycui, zili |
| Fixed In Version | rhv-4.4.0-29 |
| Doc Type | Bug Fix |
| Doc Text | Before this update, copying disks created by virt-v2v failed with an Invalid Parameter Exception: "Invalid parameter: 'DiskType=1'". With this release, copying disks succeeds. |
| Story Points | --- |
| Clones | 1748395, 1749234, 1750719 (view as bug list) |
| Last Closed | 2020-08-04 13:27:17 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | Virt |
| Cloudforms Team | --- |
| Bug Depends On | 1798425 |
| Bug Blocks | 1748395, 1749234, 1750719 |
| Attachments | vdsm.log, engine.log, DiskType=1.meta |
Created attachment 1609276 [details]: engine.log

Created attachment 1609278 [details]: DiskType=1.meta
(In reply to mxie from comment #0)
> 1. Convert a guest from VMware to RHV's export domain by virt-v2v
> # virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os

OK, the bug seems to be in virt-v2v then. Looking in v2v/create_ovf.ml:

```
497     let buf = Buffer.create 256 in
498     let bpf fs = bprintf buf fs in
499     bpf "DOMAIN=%s\n" sd_uuid; (* "Domain" as in Storage Domain *)
500     bpf "VOLTYPE=LEAF\n";
501     bpf "CTIME=%.0f\n" time;
502     bpf "MTIME=%.0f\n" time;
503     bpf "IMAGE=%s\n" image_uuid;
504     bpf "DISKTYPE=1\n";
505     bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
506     bpf "LEGALITY=LEGAL\n";
507     bpf "POOL_UUID=\n";
508     bpf "SIZE=%Ld\n" size_in_sectors;
509     bpf "FORMAT=%s\n" format_for_rhv;
510     bpf "TYPE=%s\n" output_alloc_for_rhv;
511     bpf "DESCRIPTION=%s\n" (String.replace generated_by "=" "_");
512     bpf "EOF\n";
```

It looks like the .meta file is generated by virt-v2v. This usage is not supported by RHV, since the .meta files are not part of the RHV API. This also means that rhv output does not support block storage (no .meta file) and will create incorrect .meta files when using storage format v5.

I think this bug should move to virt-v2v. RHV should not support corrupted metadata created by external tools bypassing the RHV API. Richard, what do you think?

This is the old -o rhv mode, which doesn't go via the RHV API at all. It's also a deprecated mode in virt-v2v. And AIUI the Export Storage Domain which it uses is also deprecated in RHV.

As for why this error has suddenly appeared, I'm not sure, but it has to be because of some change in RHV to do with handling of ESDs.

Of historical note, the DISKTYPE=1 was copied from the old Perl virt-v2v. I've no idea what that did since I didn't write it. That git repo is not actually online any longer, but the code was:

```
lib/Sys/VirtConvert/Connection/RHEVTarget.pm: print $meta "DISKTYPE=1\n";
```

Removing Keywords: Regression or TestBlocker, since these cause bugzilla scripts to spam the bug whenever it is edited, and that is not helpful.

(In reply to Richard W.M. Jones from comment #23)
> This is the old -o rhv mode, which doesn't go via the RHV API at all. It's
> also a deprecated mode in virt-v2v. And AIUI the Export Storage Domain which
> it uses is also deprecated in RHV.

I guess there is no point in fixing this code to use the correct value at this point.

> As for why this error has suddenly appeared, I'm not sure, but it has to be
> because of some change in RHV to do with handling of ESDs.

The error was exposed in 4.3 because we started to validate the disk type when creating new volumes. Older versions of vdsm wrote the value as-is to storage without any validation.

Since we have corrupted metadata files in existing export domains, I think we can work around this issue by accepting DISKTYPE=1 as well.

Tal, this can be fixed with a trivial patch, targeting 4.3.6.
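As a rough illustration of that workaround, the validation could simply tolerate the legacy numeric values. This is a hypothetical sketch, not the actual vdsm patch; the function name and the set of valid disk types are assumptions:

```python
# Hypothetical sketch of the proposed vdsm-side workaround; not the
# actual vdsm code. The set of valid disk types below is an assumption.

# Current string-valued disk types (assumed set, for illustration only).
VALID_DISK_TYPES = {"DATA", "ISOF", "MEMD", "MEMM", "OVFS"}

# Legacy numeric values found in old export-domain .meta files:
# "1" was written by virt-v2v's -o rhv mode (this bug), "2" is the
# value it should have used.
LEGACY_DISK_TYPES = {"1", "2"}

def validate_disk_type(value):
    """Raise on unknown disk types, but tolerate legacy numeric ones."""
    if value in VALID_DISK_TYPES or value in LEGACY_DISK_TYPES:
        return value
    raise ValueError("Invalid parameter: 'DiskType=%s'" % value)
```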
(In reply to Nir Soffer from comment #30)
> Since we have corrupted metadata files in existing export domains, I think
> we can work around this issue by accepting DISKTYPE=1 as well.

I should say that the way -o rhv works is that it copies the disks to the ESD, and you're supposed to import them into RHV soon afterwards. (This of course long predates RHV even having an API.)

So the disks shouldn't exist in the ESD for very long. It may therefore not be necessary to work around this in RHV.

My question is: what should the DISKTYPE field actually contain? Maybe we can put the proper data into the .meta file, or remove this field entirely?

(In reply to Richard W.M. Jones from comment #32)
> I should say that the way -o rhv works is that it copies the disks to
> the ESD, and you're supposed to import them into RHV soon afterwards.
> (This of course long predates RHV even having an API.)
>
> So the disks shouldn't exist in the ESD for very long. It may
> therefore not be necessary to work around this in RHV.

It depends on engine, and whether it deletes the exported vm right after the import, but based on reports from other users I suspect that the vms are not deleted.

> My question is: what should the DISKTYPE field actually contain? Maybe
> we can put the proper data into the .meta file, or remove this field
> entirely?

The correct value is "DISKTYPE=2", so this should fix the issue:

```
diff --git a/v2v/create_ovf.ml b/v2v/create_ovf.ml
index 91ff5198d..9aad5dd15 100644
--- a/v2v/create_ovf.ml
+++ b/v2v/create_ovf.ml
@@ -501,7 +501,7 @@ let create_meta_files output_alloc sd_uuid image_uuids overlays =
     bpf "CTIME=%.0f\n" time;
     bpf "MTIME=%.0f\n" time;
     bpf "IMAGE=%s\n" image_uuid;
-    bpf "DISKTYPE=1\n";
+    bpf "DISKTYPE=2\n";
     bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
     bpf "LEGALITY=LEGAL\n";
     bpf "POOL_UUID=\n";
```

But it will not help with existing images, or with an engine database containing the invalid value "1" for imported disks.

Thanks. Whether or not we also need a fix in RHV, this is now fixed in virt-v2v in commit fcfdbc9420b07e3003df38481afb9ccd22045e1a (virt-v2v >= 1.41.5).

Ming, can you verify the fix for this bug?
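For reference, a .meta file as the patched generator would emit it might look like the following. The domain and image UUIDs are taken from the reproduction log in this bug; the CTIME/MTIME, SIZE, and DESCRIPTION values are illustrative, and FORMAT/TYPE are shown for a sparse qcow2 disk:

```
DOMAIN=e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b
VOLTYPE=LEAF
CTIME=1566486000
MTIME=1566486000
IMAGE=c2b64a63-85ca-402f-a775-391849776152
DISKTYPE=2
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
POOL_UUID=
SIZE=25165824
FORMAT=COW
TYPE=SPARSE
DESCRIPTION=generated by virt-v2v
EOF
```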
Verified:
ovirt-engine-4.4.0-0.29.master.el8ev.noarch
virt-v2v-1.40.2-22.module+el8.2.0+6029+618ef2ec.x86_64
qemu-kvm-4.2.0-17.module+el8.2.0+6131+4e715f3b.x86_64
libvirt-daemon-6.0.0-16.module+el8.2.0+6131+4e715f3b.x86_64
vdsm-4.40.9-1.el8ev.x86_64

Verification scenario:
1. Convert a guest from VMware to RHV's export domain by virt-v2v
2. Import the guest from the export domain to a data domain
3. Run the VM

Expected results:
1. Conversion succeeds
2. Import succeeds, no errors observed in vdsm.log
3. VM is up

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246
Created attachment 1609275 [details]: vdsm.log

Description of problem:
Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid parameter: 'DiskType=1'"

Version-Release number of selected component (if applicable):
vdsm-4.30.27-1.el7ev.x86_64
RHV: 4.3.4.3-0.1.el7

Steps to Reproduce:
1. Convert a guest from VMware to RHV's export domain by virt-v2v:

```
# virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os 10.73.224.199:/home/p2v_export -of qcow2 -b ovirtmgmt
[   0.0] Opening the source -i ova esx6_7-rhel7.7-x86_64
[   8.7] Creating an overlay to protect the source from being modified
[   8.9] Opening the overlay
[  13.2] Inspecting the overlay
[  37.9] Checking for sufficient free disk space in the guest
[  37.9] Estimating space required on target for each disk
[  37.9] Converting Red Hat Enterprise Linux Server 7.7 Beta (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from the
virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can
happen if you are using the version of virtio-win which contains just
the virtio drivers.  In this case only virtio drivers can be installed
in the guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 184.2] Mapping filesystem data to avoid copying unused and blank areas
[ 184.9] Closing the overlay
[ 185.0] Assigning disks to buses
[ 185.0] Checking if the guest needs BIOS or UEFI to boot
[ 185.0] Initializing the target -o rhv -os 10.73.224.199:/home/p2v_export
[ 185.4] Copying disk 1/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/c2b64a63-85ca-402f-a775-391849776152/4344f61d-5a07-45ec-a3c6-e0b5041f9b8e (qcow2) (100.00/100%)
[ 438.6] Copying disk 2/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/0569bfe8-3857-4997-9c06-93248e809ab3/e8f2ad4d-adc4-4d10-bd46-92cd545e1b12 (qcow2) (100.00/100%)
[ 439.4] Creating output metadata
[ 439.5] Finishing off
```

2. Try to import the guest from the export domain to a data domain; the import fails with:

```
VDSM p2v command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u"Destination volume 4344f61d-5a07-45ec-a3c6-e0b5041f9b8e error: Invalid parameter: 'DiskType=1'",)
```

Additional info:
Can't reproduce the bug with vdsm-4.30.12-1.el7ev.x86_64
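Since the virt-v2v fix does not repair .meta files already sitting in an export domain, it may be useful to locate affected images before importing. A small illustrative helper, not part of vdsm or virt-v2v; the path handling and output format are assumptions:

```python
#!/usr/bin/env python
# Illustrative helper (hypothetical, not shipped with vdsm or virt-v2v):
# scan an export domain's directory tree for .meta files carrying the
# legacy DISKTYPE=1 value written by old virt-v2v -o rhv runs.
import os
import sys

def find_legacy_disktype(root):
    """Yield paths of .meta files containing the line DISKTYPE=1."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".meta"):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                for line in f:
                    if line.strip() == "DISKTYPE=1":
                        yield path
                        break

if __name__ == "__main__":
    # Example: python find_legacy_disktype.py /home/p2v_export
    for meta in find_legacy_disktype(sys.argv[1]):
        print(meta)
```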