Bug 1750719 - Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid parameter: 'DiskType=1'"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: x86_64
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1746699
Blocks:
 
Reported: 2019-09-10 10:43 UTC by Pino Toscano
Modified: 2020-03-31 19:55 UTC
CC List: 19 users

Fixed In Version: libguestfs-1.40.2-7.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1746699
Environment:
Last Closed: 2020-03-31 19:55:04 UTC
Target Upstream Version:
Embargoed:


Attachments
import successfully (17.32 KB, image/png), attached 2019-09-11 02:46 UTC by liuzi


Links
Red Hat Product Errata RHBA-2020:1082, last updated 2020-03-31 19:55:41 UTC

Description Pino Toscano 2019-09-10 10:43:53 UTC
+++ This bug was initially created as a clone of Bug #1746699 +++

Description of problem:
Can't import guest from export domain to data domain on rhv4.3 due to error "Invalid parameter: 'DiskType=1'"

Version-Release number of selected component (if applicable):
vdsm-4.30.27-1.el7ev.x86_64
RHV:4.3.4.3-0.1.el7

Steps to Reproduce:
1. Convert a guest from VMware to RHV's export domain with virt-v2v:
# virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os 10.73.224.199:/home/p2v_export -of qcow2 -b ovirtmgmt
[   0.0] Opening the source -i ova esx6_7-rhel7.7-x86_64
[   8.7] Creating an overlay to protect the source from being modified
[   8.9] Opening the overlay
[  13.2] Inspecting the overlay
[  37.9] Checking for sufficient free disk space in the guest
[  37.9] Estimating space required on target for each disk
[  37.9] Converting Red Hat Enterprise Linux Server 7.7 Beta (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 184.2] Mapping filesystem data to avoid copying unused and blank areas
[ 184.9] Closing the overlay
[ 185.0] Assigning disks to buses
[ 185.0] Checking if the guest needs BIOS or UEFI to boot
[ 185.0] Initializing the target -o rhv -os 10.73.224.199:/home/p2v_export
[ 185.4] Copying disk 1/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/c2b64a63-85ca-402f-a775-391849776152/4344f61d-5a07-45ec-a3c6-e0b5041f9b8e (qcow2)
    (100.00/100%)
[ 438.6] Copying disk 2/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/0569bfe8-3857-4997-9c06-93248e809ab3/e8f2ad4d-adc4-4d10-bd46-92cd545e1b12 (qcow2)
    (100.00/100%)
[ 439.4] Creating output metadata
[ 439.5] Finishing off


2. Try to import the guest from the export domain to the data domain; the import fails with the following error:

VDSM p2v command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u"Destination volume 4344f61d-5a07-45ec-a3c6-e0b5041f9b8e error: Invalid parameter: 'DiskType=1'",)

Additional info:
Can't reproduce the bug with vdsm-4.30.12-1.el7ev.x86_64

--- Additional comment from Nir Soffer on 2019-09-02 16:34:31 CEST ---

(In reply to mxie from comment #0)
...
> 1.Convert a guests from VMware to RHV's export domain by virt-v2v
> # virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os

OK, the bug seems to be in virt-v2v then.

Looking in v2v/create_ovf.ml

 497       let buf = Buffer.create 256 in
 498       let bpf fs = bprintf buf fs in
 499       bpf "DOMAIN=%s\n" sd_uuid; (* "Domain" as in Storage Domain *)
 500       bpf "VOLTYPE=LEAF\n";
 501       bpf "CTIME=%.0f\n" time;
 502       bpf "MTIME=%.0f\n" time;
 503       bpf "IMAGE=%s\n" image_uuid;
 504       bpf "DISKTYPE=1\n";
 505       bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
 506       bpf "LEGALITY=LEGAL\n";
 507       bpf "POOL_UUID=\n";
 508       bpf "SIZE=%Ld\n" size_in_sectors;
 509       bpf "FORMAT=%s\n" format_for_rhv;
 510       bpf "TYPE=%s\n" output_alloc_for_rhv;
 511       bpf "DESCRIPTION=%s\n" (String.replace generated_by "=" "_");
 512       bpf "EOF\n";

Looks like the .meta file is generated by virt-v2v. This usage is not supported
by RHV since the .meta files are not part of the RHV API.
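
For illustration, a .meta file produced by the code above would look roughly
like this (the values are illustrative: the UUIDs are borrowed from the copy
log in the description, the timestamps, size, and description are made up,
and DISKTYPE=1 is the line that vdsm 4.3 rejects):

DOMAIN=e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b
VOLTYPE=LEAF
CTIME=1567416000
MTIME=1567416000
IMAGE=c2b64a63-85ca-402f-a775-391849776152
DISKTYPE=1
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
POOL_UUID=
SIZE=41943040
FORMAT=COW
TYPE=SPARSE
DESCRIPTION=generated by virt-v2v
EOF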

This also means that the rhv output does not support block storage (no .meta
files) and will create incorrect .meta files when using storage format v5.

I think this bug should move to virt-v2v. RHV should not support corrupted
metadata created by external tools bypassing the RHV API.

Richard, what do you think?

--- Additional comment from Richard W.M. Jones on 2019-09-02 17:49:08 CEST ---

This is the old -o rhv mode which doesn't go via the RHV API at all.  It's also
a deprecated mode in virt-v2v.  And AIUI the Export Storage Domain which it uses
is also deprecated in RHV.

As for why this error has suddenly appeared, I'm not sure, but it must be
due to some change in RHV's handling of ESDs.

--- Additional comment from Richard W.M. Jones on 2019-09-02 17:52:01 CEST ---

Of historical note, the DISKTYPE=1 was copied from the old Perl virt-v2v.
I've no idea what that did since I didn't write it.

That git repo is not actually online any longer but the code was:

lib/Sys/VirtConvert/Connection/RHEVTarget.pm:    print $meta "DISKTYPE=1\n";

--- Additional comment from Nir Soffer on 2019-09-02 18:09:47 CEST ---

(In reply to Richard W.M. Jones from comment #23)
> This is the old -o rhv mode which doesn't go via the RHV API at all.  It's also
> a deprecated mode in virt-v2v.  And AIUI the Export Storage Domain which it uses
> is also deprecated in RHV.

I guess there is no point in fixing this code to use the correct value at
this point.

> As for why this error has suddenly appeared, I'm not sure why but it has
> to be because of some change in RHV to do with handling of ESDs.

The error was exposed in 4.3 since we started to validate the disk type 
when creating new volumes. Older versions of vdsm were writing the value
as is to storage without any validation.

Since we have corrupted metadata files in existing export domains, I think
we can work around this issue by also accepting DISKTYPE=1.
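
In pseudo-code, the tolerant check would look something like this minimal
sketch (illustrative Python only; the names are hypothetical, not vdsm's
actual API):

# Illustrative sketch of a tolerant DISKTYPE check; hypothetical names,
# not vdsm's real code.
VALID_DISK_TYPES = {"2"}    # the value vdsm 4.3 expects
LEGACY_DISK_TYPES = {"1"}   # written by old virt-v2v -o rhv

def validate_disk_type(value):
    # Accept the current value, and tolerate the legacy "1" so that
    # existing export domains can still be imported instead of failing
    # with "Invalid parameter: 'DiskType=1'".
    if value in VALID_DISK_TYPES:
        return value
    if value in LEGACY_DISK_TYPES:
        return "2"  # normalize the legacy value
    raise ValueError("Invalid parameter: 'DiskType=%s'" % value)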

--- Additional comment from Nir Soffer on 2019-09-02 18:13:03 CEST ---

Tal, this can be fixed with a trivial patch, targeting 4.3.6.

--- Additional comment from Richard W.M. Jones on 2019-09-02 18:18:29 CEST ---

(In reply to Nir Soffer from comment #30)
> Since we have corrupted metadata files in existing export domains, I think
> we can work around this issue by also accepting DISKTYPE=1.

I should say that the way -o rhv works is that it copies the disks to
the ESD, and then you're supposed to import them into RHV soon
afterwards.  (This of course long predates RHV even having an API.)

So the disks shouldn't exist in the ESD for very long.  It may
therefore not be necessary to work around this in RHV.

My question is what should the DISKTYPE field actually contain?  Maybe
we can put the proper data into the .meta file or remove this field
entirely?

--- Additional comment from Nir Soffer on 2019-09-02 18:38:18 CEST ---

(In reply to Richard W.M. Jones from comment #32)
> (In reply to Nir Soffer from comment #30)
> > Since we have corrupted metadata files in existing export domains, I think
> > we can work around this issue by also accepting DISKTYPE=1.
> 
> I should say that the way -o rhv works is that it copies the disks to
> the ESD, and then you're supposed to import them into RHV soon
> afterwards.  (This of course long predates RHV even having an API.)
> 
> So the disks shouldn't exist in the ESD for very long.  It may
> therefore not be necessary to work around this in RHV.

It depends on whether the engine deletes the exported VM right after the
import, but based on reports from other users I suspect that the VMs are
not deleted.
 
> My question is what should the DISKTYPE field actually contain?  Maybe
> we can put the proper data into the .meta file or remove this field
> entirely?

The correct value is "DISKTYPE=2", so this should fix the issue:

diff --git a/v2v/create_ovf.ml b/v2v/create_ovf.ml
index 91ff5198d..9aad5dd15 100644
--- a/v2v/create_ovf.ml
+++ b/v2v/create_ovf.ml
@@ -501,7 +501,7 @@ let create_meta_files output_alloc sd_uuid image_uuids overlays =
       bpf "CTIME=%.0f\n" time;
       bpf "MTIME=%.0f\n" time;
       bpf "IMAGE=%s\n" image_uuid;
-      bpf "DISKTYPE=1\n";
+      bpf "DISKTYPE=2\n";
       bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
       bpf "LEGALITY=LEGAL\n";
       bpf "POOL_UUID=\n";

But it will not help with existing images, or with an engine database
containing the invalid value "1" for imported disks.
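
For existing images on an NFS export domain, the stale value could in
principle be patched in place, along the lines of this illustrative sketch
(the mount path is hypothetical; back up the .meta files first):

# One-off fixup: rewrite the legacy DISKTYPE=1 written by old virt-v2v
# to DISKTYPE=2 in existing export-domain .meta files.
# The glob below is a hypothetical mount path; adjust to your setup.
import glob

for path in glob.glob("/mnt/p2v_export/*/images/*/*.meta"):
    with open(path) as f:
        lines = f.readlines()
    if "DISKTYPE=1\n" in lines:
        lines = ["DISKTYPE=2\n" if l == "DISKTYPE=1\n" else l
                 for l in lines]
        with open(path, "w") as f:
            f.writelines(lines)

Note this only touches the .meta files; it would not help with an engine
database that already contains the invalid value.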

--- Additional comment from Richard W.M. Jones on 2019-09-02 22:22:08 CEST ---

Thanks.  Whether or not we also need a fix in RHV, this is now fixed in
virt-v2v in commit fcfdbc9420b07e3003df38481afb9ccd22045e1a (virt-v2v >= 1.41.5).

Comment 3 liuzi 2019-09-11 02:46:07 UTC
Created attachment 1613877 [details]
import successfully

Comment 4 liuzi 2019-09-11 02:50:07 UTC
Verified the bug with builds:
ovirt-engine-4.3.6.5-0.1.el7.noarch
vdsm-4.30.30-1.el7ev.x86_64

Steps:
1. Convert a guest from Xen to RHV's export domain with virt-v2v:
# virt-v2v -ic xen+ssh://root.3.21 xen-hvm-rhel6.7-x86_64 -of raw  -o rhv -os 10.73.224.199:/home/p2v_export --password-file /tmp/passwd
[   0.0] Opening the source -i libvirt -ic xen+ssh://root.3.21 xen-hvm-rhel6.7-x86_64
[   0.8] Creating an overlay to protect the source from being modified
[   1.3] Opening the overlay
[   8.4] Inspecting the overlay
[  28.9] Checking for sufficient free disk space in the guest
[  28.9] Estimating space required on target for each disk
[  28.9] Converting Red Hat Enterprise Linux Server release 6.7 Beta (Santiago) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el6’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 113.9] Mapping filesystem data to avoid copying unused and blank areas
[ 114.3] Closing the overlay
[ 114.3] Assigning disks to buses
[ 114.3] Checking if the guest needs BIOS or UEFI to boot
[ 114.3] Initializing the target -o rhv -os 10.73.224.199:/home/p2v_export
[ 114.6] Copying disk 1/1 to /tmp/v2v.nnrxYa/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/159b057d-815a-47f5-a9f9-abba2c59c3ac/73b64bef-8d41-4abb-9dec-ebdfa9794d94 (raw)
    (100.00/100%)
[ 803.8] Creating output metadata
[ 803.9] Finishing off


2. Try to import the guest from the export domain to the data domain.

3. The guest is imported successfully, and after importing it boots normally. Please refer to the screenshot.

Result:
The guest can be imported from the export domain to the data domain, so the bug is moved from ON_QA to VERIFIED.

Comment 6 errata-xmlrpc 2020-03-31 19:55:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1082

