Bug 1746699 - Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid parameter: 'DiskType=1'"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.3.4
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.4.0
Assignee: shani
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Depends On: 1798425
Blocks: 1748395 1749234 1750719
 
Reported: 2019-08-29 06:52 UTC by mxie@redhat.com
Modified: 2020-08-04 13:27 UTC
CC: 19 users

Fixed In Version: rhv-4.4.0-29
Doc Type: Bug Fix
Doc Text:
Before this update, copying disks created by virt-v2v failed with an Invalid Parameter Exception: "Invalid parameter: 'DiskType=1'". With this release, copying disks succeeds.
Clone Of:
: 1748395 1749234 1750719
Environment:
Last Closed: 2020-08-04 13:27:17 UTC
oVirt Team: Virt
Target Upstream Version:


Attachments
vdsm.log (133.12 KB, text/plain), 2019-08-29 06:52 UTC, mxie@redhat.com
engine.log (39.42 KB, text/plain), 2019-08-29 06:53 UTC, mxie@redhat.com
DiskType=1.meta (349 bytes, text/x-mpsub), 2019-08-29 06:59 UTC, mxie@redhat.com


Links
- Github libguestfs/libguestfs commit fcfdbc9420b07e3003df38481afb9ccd22045e1a (last updated 2020-09-17 13:25:06 UTC)
- Red Hat Bugzilla 1748395 (urgent, CLOSED): [downstream clone - 4.3.6] Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid paramete... (last updated 2021-02-22 00:41:40 UTC)
- Red Hat Product Errata RHEA-2020:3246 (last updated 2020-08-04 13:27:55 UTC)
- oVirt gerrit 103072 (MERGED): storage.constants: Support v2v data disk type (last updated 2021-01-19 13:43:10 UTC)

Description mxie@redhat.com 2019-08-29 06:52:45 UTC
Created attachment 1609275 [details]
vdsm.log

Description of problem:
Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid parameter: 'DiskType=1'"

Version-Release number of selected component (if applicable):
vdsm-4.30.27-1.el7ev.x86_64
RHV:4.3.4.3-0.1.el7

Steps to Reproduce:
1. Convert a guest from VMware to RHV's export domain with virt-v2v:
# virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os 10.73.224.199:/home/p2v_export -of qcow2 -b ovirtmgmt
[   0.0] Opening the source -i ova esx6_7-rhel7.7-x86_64
[   8.7] Creating an overlay to protect the source from being modified
[   8.9] Opening the overlay
[  13.2] Inspecting the overlay
[  37.9] Checking for sufficient free disk space in the guest
[  37.9] Estimating space required on target for each disk
[  37.9] Converting Red Hat Enterprise Linux Server 7.7 Beta (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 184.2] Mapping filesystem data to avoid copying unused and blank areas
[ 184.9] Closing the overlay
[ 185.0] Assigning disks to buses
[ 185.0] Checking if the guest needs BIOS or UEFI to boot
[ 185.0] Initializing the target -o rhv -os 10.73.224.199:/home/p2v_export
[ 185.4] Copying disk 1/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/c2b64a63-85ca-402f-a775-391849776152/4344f61d-5a07-45ec-a3c6-e0b5041f9b8e (qcow2)
    (100.00/100%)
[ 438.6] Copying disk 2/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/0569bfe8-3857-4997-9c06-93248e809ab3/e8f2ad4d-adc4-4d10-bd46-92cd545e1b12 (qcow2)
    (100.00/100%)
[ 439.4] Creating output metadata
[ 439.5] Finishing off


2. Try to import the guest from the export domain to the data domain; the import fails with the following error:

VDSM p2v command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u"Destination volume 4344f61d-5a07-45ec-a3c6-e0b5041f9b8e error: Invalid parameter: 'DiskType=1'",)

Additional info:
Can't reproduce the bug with vdsm-4.30.12-1.el7ev.x86_64

Comment 3 mxie@redhat.com 2019-08-29 06:53:13 UTC
Created attachment 1609276 [details]
engine.log

Comment 4 mxie@redhat.com 2019-08-29 06:59:04 UTC
Created attachment 1609278 [details]
DiskType=1.meta

Comment 20 Nir Soffer 2019-09-02 14:34:31 UTC
(In reply to mxie@redhat.com from comment #0)
...
> 1.Convert a guests from VMware to RHV's export domain by virt-v2v
> # virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os

OK, the bug seems to be in virt-v2v then.

Looking in v2v/create_ovf.ml

 497       let buf = Buffer.create 256 in
 498       let bpf fs = bprintf buf fs in
 499       bpf "DOMAIN=%s\n" sd_uuid; (* "Domain" as in Storage Domain *)
 500       bpf "VOLTYPE=LEAF\n";
 501       bpf "CTIME=%.0f\n" time;
 502       bpf "MTIME=%.0f\n" time;
 503       bpf "IMAGE=%s\n" image_uuid;
 504       bpf "DISKTYPE=1\n";
 505       bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
 506       bpf "LEGALITY=LEGAL\n";
 507       bpf "POOL_UUID=\n";
 508       bpf "SIZE=%Ld\n" size_in_sectors;
 509       bpf "FORMAT=%s\n" format_for_rhv;
 510       bpf "TYPE=%s\n" output_alloc_for_rhv;
 511       bpf "DESCRIPTION=%s\n" (String.replace generated_by "=" "_");
 512       bpf "EOF\n";

Looks like the .meta file is generated by virt-v2v. This usage is not supported
by RHV, since the .meta files are not part of the RHV API.
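The .meta format written above is a plain key=value text file terminated by an "EOF" marker line. As a rough illustration (not vdsm code; `parse_meta` is a hypothetical helper), it could be parsed like this:

```python
def parse_meta(text):
    """Parse a volume .meta file (KEY=VALUE lines) into a dict.

    Parsing stops at the "EOF" marker line, matching the format
    emitted by create_ovf.ml above.
    """
    meta = {}
    for line in text.splitlines():
        if line == "EOF":
            break
        key, _, value = line.partition("=")
        meta[key] = value
    return meta


# Sample modeled on the fields written by create_ovf.ml
sample = (
    "DOMAIN=e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b\n"
    "VOLTYPE=LEAF\n"
    "IMAGE=c2b64a63-85ca-402f-a775-391849776152\n"
    "DISKTYPE=1\n"
    "LEGALITY=LEGAL\n"
    "EOF\n"
)
meta = parse_meta(sample)
```

With the attached DiskType=1.meta, such a parser would see the DISKTYPE value "1" that vdsm's validation rejects.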

This also means that the rhv output does not support block storage (which has
no .meta files) and will create incorrect .meta files when using storage format v5.

I think this bug should move to virt-v2v. RHV should not support corrupted
metadata created by external tools that bypass the RHV API.

Richard, what do you think?

Comment 23 Richard W.M. Jones 2019-09-02 15:49:08 UTC
This is the old -o rhv mode, which doesn't go via the RHV API at all.  It's also
a deprecated mode in virt-v2v.  And AIUI the Export Storage Domain which it uses
is also deprecated in RHV.

As for why this error has suddenly appeared, I'm not sure, but it must be due
to some change in RHV's handling of ESDs.

Comment 26 Richard W.M. Jones 2019-09-02 15:52:01 UTC
Of historical note, the DISKTYPE=1 was copied from the old Perl virt-v2v.
I've no idea what that did since I didn't write it.

That git repo is not actually online any longer but the code was:

lib/Sys/VirtConvert/Connection/RHEVTarget.pm:    print $meta "DISKTYPE=1\n";

Comment 29 Nir Soffer 2019-09-02 15:59:25 UTC
Removing Keywords: Regression or TestBlocker, since they cause Bugzilla scripts
to spam the bug whenever it is edited, and this is not helpful.

Comment 30 Nir Soffer 2019-09-02 16:09:47 UTC
(In reply to Richard W.M. Jones from comment #23)
> This is the old -o rhv mode which doesn't do via the RHV API at all.  It's
> also a deprecated mode in virt-v2v.  And AIUI the Export Storage Domain
> which it uses is also deprecated in RHV.

I guess there is no point in fixing this code to use the correct value at
this point.

> As for why this error has suddenly appeared, I'm not sure why but it has
> to be because of some change in RHV to do with handling of ESDs.

The error was exposed in 4.3 since we started to validate the disk type 
when creating new volumes. Older versions of vdsm were writing the value
as is to storage without any validation.

Since we have corrupted metadata files in existing export domains, I think
we can work around this issue by also accepting DISKTYPE=1.
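A minimal sketch of that workaround (illustrative only, not the actual vdsm patch; the names are hypothetical) would map the legacy value to the valid one during validation instead of raising:

```python
# Illustrative sketch of the proposed workaround; not actual vdsm code.
# Per comment 33, "2" is the correct DISKTYPE value; old virt-v2v wrote "1".
VALID_DISK_TYPES = {"2"}        # hypothetical set of accepted values
LEGACY_ALIASES = {"1": "2"}     # accept the legacy virt-v2v value


def normalize_disktype(value):
    """Map legacy disk types to valid ones, else fail like vdsm's validation."""
    value = LEGACY_ALIASES.get(value, value)
    if value not in VALID_DISK_TYPES:
        raise ValueError("Invalid parameter: 'DiskType=%s'" % value)
    return value
```

The actual fix landed in oVirt gerrit 103072 ("storage.constants: Support v2v data disk type"); the sketch only shows the shape of the idea.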

Comment 31 Nir Soffer 2019-09-02 16:13:03 UTC
Tal, this can be fixed with a trivial patch, targeting to 4.3.6.

Comment 32 Richard W.M. Jones 2019-09-02 16:18:29 UTC
(In reply to Nir Soffer from comment #30)
> Since we have corrupted metadata files in existing export domains, I think
> we can workaround this issue by accepting also DISKTYPE=1.

I should say that the way -o rhv works is it copies the disks to
the ESD, and then you're supposed to soon afterwards import them
into RHV.  (This of course long predates RHV even having an API).

So the disks shouldn't exist in the ESD for very long.  It may
therefore not be necessary to work around this in RHV.

My question is what should the DISKTYPE field actually contain?  Maybe
we can put the proper data into the .meta file or remove this field
entirely?

Comment 33 Nir Soffer 2019-09-02 16:38:18 UTC
(In reply to Richard W.M. Jones from comment #32)
> (In reply to Nir Soffer from comment #30)
> > Since we have corrupted metadata files in existing export domains, I think
> > we can workaround this issue by accepting also DISKTYPE=1.
> 
> I should say that the way -o rhv works is it copies the disks to
> the ESD, and then you're supposed to soon afterwards import them
> into RHV.  (This of course long predates RHV even having an API).
> 
> So the disks shouldn't exist in the ESD for very long.  It may
> therefore not be necessary to work around this in RHV.

It depends on whether engine deletes the exported VM right after the import,
but based on reports from other users I suspect that the VMs are not deleted.
 
> My question is what should the DISKTYPE field actually contain?  Maybe
> we can put the proper data into the .meta file or remove this field
> entirely?

The correct value is "DISKTYPE=2", so this should fix the issue:

diff --git a/v2v/create_ovf.ml b/v2v/create_ovf.ml
index 91ff5198d..9aad5dd15 100644
--- a/v2v/create_ovf.ml
+++ b/v2v/create_ovf.ml
@@ -501,7 +501,7 @@ let create_meta_files output_alloc sd_uuid image_uuids overlays =
       bpf "CTIME=%.0f\n" time;
       bpf "MTIME=%.0f\n" time;
       bpf "IMAGE=%s\n" image_uuid;
-      bpf "DISKTYPE=1\n";
+      bpf "DISKTYPE=2\n";
       bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
       bpf "LEGALITY=LEGAL\n";
       bpf "POOL_UUID=\n";

But it will not help with existing images, or with engine databases containing
the invalid value "1" for imported disks.
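For images already sitting in an export domain, a one-off cleanup could rewrite the bad field before import. A hedged sketch (hypothetical helper names; assumes a file-based NFS export domain where .meta files are plain text, and is not a shipped tool):

```python
import pathlib


def fix_meta_text(text):
    """Rewrite the exact legacy line DISKTYPE=1 to DISKTYPE=2."""
    lines = []
    for line in text.splitlines():
        if line == "DISKTYPE=1":
            line = "DISKTYPE=2"
        lines.append(line)
    return "\n".join(lines) + "\n"


def fix_export_domain(root):
    """Apply the rewrite to every .meta file under the export domain path."""
    for meta in pathlib.Path(root).rglob("*.meta"):
        meta.write_text(fix_meta_text(meta.read_text()))
```

This only addresses the files on storage; disks already imported with the invalid value in the engine database would not be touched.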

Comment 34 Richard W.M. Jones 2019-09-02 20:22:08 UTC
Thanks.  Whether or not we also need a fix in RHV, this is now fixed in
virt-v2v in commit fcfdbc9420b07e3003df38481afb9ccd22045e1a (virt-v2v >= 1.41.5).

Comment 37 Nir Soffer 2019-09-03 16:29:11 UTC
Ming, can you verify the fix for this bug?

Comment 41 RHV bug bot 2019-10-22 17:25:51 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops@redhat.com

Comment 47 RHV bug bot 2019-12-13 13:16:58 UTC
WARN: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops@redhat.com

Comment 53 Nisim Simsolo 2020-04-07 13:15:49 UTC
Verified: 
ovirt-engine-4.4.0-0.29.master.el8ev.noarch
virt-v2v-1.40.2-22.module+el8.2.0+6029+618ef2ec.x86_64
qemu-kvm-4.2.0-17.module+el8.2.0+6131+4e715f3b.x86_64
libvirt-daemon-6.0.0-16.module+el8.2.0+6131+4e715f3b.x86_64
vdsm-4.40.9-1.el8ev.x86_64

Verification scenario:
1. Convert a guest from VMware to RHV's export domain by virt-v2v
2. Import guest from export domain to data domain
3. Run VM

Expected results:
1. Conversion succeeds.
2. Import succeeds, no errors observed in vdsm.log.
3. VM is up.

Comment 56 errata-xmlrpc 2020-08-04 13:27:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246

