Bug 2069768 - Import of OVA fails if the user/group name contains spaces
Summary: Import of OVA fails if the user/group name contains spaces
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-v2v
Version: 9.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: zhoujunqin
URL:
Whiteboard:
Depends On: 2059287
Blocks:
 
Reported: 2022-03-29 16:11 UTC by Jiří Sléžka
Modified: 2022-11-15 10:23 UTC (History)
8 users (show)

Fixed In Version: virt-v2v-2.0.2-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:56:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
import-e38f63c5-d0f3-4996-b40d-d302e1864038-20220330T070833.log (1.55 KB, text/plain)
2022-03-30 11:54 UTC, zhoujunqin
no flags Details
Debug log virt_v2v_ova.log for test scenario-2 (1.52 MB, text/plain)
2022-04-08 14:26 UTC, zhoujunqin
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-117167 0 None None None 2022-03-29 16:13:26 UTC
Red Hat Product Errata RHSA-2022:7968 0 None None None 2022-11-15 09:56:20 UTC

Description Jiří Sléžka 2022-03-29 16:11:34 UTC
Description of problem:

Import of the ESET appliance into RHV 4.4.10.6-0.1.el8ev fails. However, I was able to untar the OVA file manually, convert the VMDK file to qcow2, and upload the disk to oVirt without a problem.
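
(For reference, a rough sketch of that manual path; the qcow2 output name here is arbitrary, and the converted disk can then be uploaded through the RHV Administration Portal's disk upload function:)

$ tar xf protect_appliance.ova
$ qemu-img convert -O qcow2 PROTECT_Appliance-disk1.vmdk PROTECT_Appliance-disk1.qcow2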

Version-Release number of selected component (if applicable):

RHV 4.4.10.6-0.1.el8ev
virt-v2v-1.42.0-16.module+el8.5.0+13900+a08c0464.x86_64

How reproducible:


Steps to Reproduce:
1. Download protect_appliance.ova file from https://www.eset.com/cz/firmy/stahnout/eset-protect/#virtual
2. Upload ova file to one of hosts and click Compute > Virtual Machines > Import in manager and try to import this file

Actual results:

It fails with an error (/var/log/vdsm/import/import-608ead26-2648-40dc-a93e-e61c4364185a-20220328T162016.log):

...
tar -tf '/vm/protect_appliance.ova'
tar -xf '/vm/protect_appliance.ova' -C '/var/tmp/ova.gaRMEH' 'PROTECT_Appliance.ovf' 'PROTECT_Appliance.mf'
ova: processing manifest file /var/tmp/ova.gaRMEH/PROTECT_Appliance.mf
tar xOf '/vm/protect_appliance.ova' 'PROTECT_Appliance-disk1.vmdk' | sha1sum
tar xOf '/vm/protect_appliance.ova' 'PROTECT_Appliance.ovf' | sha1sum
ova: testing if PROTECT_Appliance-disk1.vmdk exists in /vm/protect_appliance.ova
ova: file exists
tar tRvf '/vm/protect_appliance.ova'
{ "message": "file ‘PROTECT_Appliance-disk1.vmdk’ not found in the ova", timestamp": "2022-03-28T16:20:21.306701048+02:00", "type": "error" }
virt-v2v: error: file ‘PROTECT_Appliance-disk1.vmdk’ not found in the ova
...


Expected results:

Successful import

Additional info:

tar tRvf '/vm/protect_appliance.ova'

block 0: -rw-r--r-- eraautobuilds/Domain Users 33508 2021-11-04 18:48 PROTECT_Appliance.ovf
block 67: -rw-r--r-- eraautobuilds/Domain Users   147 2021-11-04 18:48 PROTECT_Appliance.mf
block 69: -rwxrwx--- Administrators/Domain Users 2414 2021-11-04 18:55 PROTECT_Appliance.cert
block 75: -rw-r--r-- eraautobuilds/Domain Users 2632183808 2021-11-04 18:48 PROTECT_Appliance-disk1.vmdk
block 5141060: ** Block of NULs **

Comment 1 zhoujunqin 2022-03-30 11:54:14 UTC
Created attachment 1869384 [details]
import-e38f63c5-d0f3-4996-b40d-d302e1864038-20220330T070833.log

Reproduced with the steps in Comment 0.

Package version:

virt-v2v-1.42.0-18.module+el8.6.0+14480+c0a3aa0f.x86_64
libvirt-8.0.0-5.module+el8.6.0+14480+c0a3aa0f.x86_64
qemu-kvm-6.2.0-10.module+el8.6.0+14540+5dcf03db.x86_64
vdsm-4.40.100.2-1.el8ev.x86_64

RHV Server - Software Version:4.4.10.7-0.4.el8ev

Import log - import-e38f63c5-d0f3-4996-b40d-d302e1864038-20220330T070833.log

Comment 2 zhoujunqin 2022-03-30 12:04:28 UTC
(In reply to zhoujunqin from comment #1)
> Created attachment 1869384 [details]
> import-e38f63c5-d0f3-4996-b40d-d302e1864038-20220330T070833.log
> 
> Reproduce with steps in Comment 0.
> 
> Package version:
> 
> virt-v2v-1.42.0-18.module+el8.6.0+14480+c0a3aa0f.x86_64
> libvirt-8.0.0-5.module+el8.6.0+14480+c0a3aa0f.x86_64
> qemu-kvm-6.2.0-10.module+el8.6.0+14540+5dcf03db.x86_64
> vdsm-4.40.100.2-1.el8ev.x86_64
> 
> RHV Server - Software Version:4.4.10.7-0.4.el8ev
> 
> Import log - import-e38f63c5-d0f3-4996-b40d-d302e1864038-20220330T070833.log

I can successfully import other OVA files, so I think this issue only affects this specific OVA file, thanks.

Comment 3 Richard W.M. Jones 2022-04-04 09:43:20 UTC
I downloaded the OVA (protect_appliance.ova) and unpacked it to see what it contains:

$ tar xvf protect_appliance.ova
PROTECT_Appliance.ovf
PROTECT_Appliance.mf
PROTECT_Appliance.cert
PROTECT_Appliance-disk1.vmdk
$ qemu-img convert PROTECT_Appliance-disk1.vmdk -O raw PROTECT_Appliance-disk1.raw
$ guestfish --ro -a PROTECT_Appliance-disk1.raw -i
Operating system: CentOS Linux release 7.9.2009 (Core)
/dev/centos_ba-eraappl-v/root mounted on /
/dev/sda1 mounted on /boot
/dev/centos_ba-eraappl-v/home mounted on /home

So this is a supported guest type and should be importable
(https://access.redhat.com/articles/1351473).

The actual error though is a new one, and a real one in virt-v2v ...

$ tar tRvf protect_appliance.ova
...
block 75: -rw-r--r-- eraautobuilds/Domain Users 2632183808 2021-11-04 17:48 PROTECT_Appliance-disk1.vmdk
                                   ^^^^^^^^^^^^

The problem is that the group name contains a space.  When we search the output
of this tar listing for the VMDK file, we are not expecting the space (which
appears to add an extra field to the output):

https://github.com/libguestfs/virt-v2v/blob/482e74bb56a693758032b7566d5915f9e5531688/input/OVA.ml#L403

So yes this is a bug in virt-v2v.
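
To make the field shift concrete, here is a minimal shell sketch (not
virt-v2v's actual OCaml code) that splits one line of the 'tar tRvf'
listing on whitespace, the way a naive parser would:

$ line='block 75: -rw-r--r-- eraautobuilds/Domain Users 2632183808 2021-11-04 17:48 PROTECT_Appliance-disk1.vmdk'
$ echo "$line" | awk '{ print "fields:", NF, "| 8th:", $8, "| last:", $NF }'
fields: 9 | 8th: 17:48 | last: PROTECT_Appliance-disk1.vmdk

With a space-free owner/group the same line has only 8 fields and the
8th field is the file name, which is why only this OVA trips the lookup.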

Here's a simple reproducer:

$ virt-v2v -i ova protect_appliance.ova -o null
[   0.0] Setting up the source: -i ova protect_appliance.ova
virt-v2v: error: file ‘PROTECT_Appliance-disk1.vmdk’ not found in the 
ova

Comment 4 Richard W.M. Jones 2022-04-04 10:16:20 UTC
Luckily this was fairly easy to fix, and I verified that after this
fix we are able to import your OVA:

https://listman.redhat.com/archives/libguestfs/2022-April/028553.html

Moving this to RHEL 9 because there's an easy workaround: simply unpack
the OVA, then repack it (with 'tar cf'). Assuming you don't use usernames
with spaces locally, you should end up with an importable OVA.
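
A minimal sketch of that workaround; as an extra safety net, GNU tar's
--owner and --group options (applied at creation time) guarantee
space-free names in the repacked archive regardless of the local account:

$ mkdir ova-tmp
$ tar xf protect_appliance.ova -C ova-tmp
$ tar --owner=root --group=root -cf protect_appliance-fixed.ova -C ova-tmp \
    PROTECT_Appliance.ovf PROTECT_Appliance.mf PROTECT_Appliance.cert PROTECT_Appliance-disk1.vmdk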

Comment 6 Richard W.M. Jones 2022-04-04 13:39:33 UTC
Built in C9S:
https://kojihub.stream.rdu2.redhat.com/koji/taskinfo?taskID=1056540

Comment 13 zhoujunqin 2022-04-08 13:25:30 UTC
Tested the workaround on RHEL-8.6 (registered as an RHV node) as described in Comment 4: PASS

1. Unpack files from the OVA file.
$ tar xvf protect_appliance.ova
PROTECT_Appliance.ovf
PROTECT_Appliance.mf
PROTECT_Appliance.cert
PROTECT_Appliance-disk1.vmdk

$ tar tRvf protect_appliance.ova
block 0: -rw-r--r-- eraautobuilds/Domain Users 33508 2021-11-04 13:48 PROTECT_Appliance.ovf
block 67: -rw-r--r-- eraautobuilds/Domain Users   147 2021-11-04 13:48 PROTECT_Appliance.mf
block 69: -rwxrwx--- Administrators/Domain Users 2414 2021-11-04 13:55 PROTECT_Appliance.cert
block 75: -rw-r--r-- eraautobuilds/Domain Users 2632183808 2021-11-04 13:48 PROTECT_Appliance-disk1.vmdk
block 5141060: ** Block of NULs **

The problem is that the group name (Domain Users) contains a space.
                                    ^^^^^^^^^^^^

2. Repack the files again (with 'tar cf') and ensure you don't use usernames with spaces locally.

$ tar cf test.ova *
$ tar tRvf test.ova
block 0: -rwxrwx--- juzhou/juzhou  2414 2021-11-04 13:55 PROTECT_Appliance.cert
block 6: -rw-r--r-- juzhou/juzhou 2632183808 2021-11-04 13:48 PROTECT_Appliance-disk1.vmdk
block 5140991: -rw-r--r-- juzhou/juzhou        147 2021-11-04 13:48 PROTECT_Appliance.mf
block 5140993: -rw-r--r-- juzhou/juzhou      33508 2021-11-04 13:48 PROTECT_Appliance.ovf

There is no space in the group name now.

3. Change the permissions of the new OVA file (test.ova)
# ll /home/test.ova 
-rwxrwxrwx. 1 vdsm kvm 2632232960 Apr  8 08:20 /home/test.ova

4. In RHV Webadmin, create a new VM by importing from the OVA file (/home/test.ova)

Test result: The OVA file can be imported to RHV successfully.

Comment 14 zhoujunqin 2022-04-08 14:21:50 UTC
For RHEL 9.0.

1. Reproduce with virt-v2v-2.0.1-1.el9.x86_64

# virt-v2v -i ova protect_appliance.ova -o null
[   1.2] Setting up the source: -i ova protect_appliance.ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
virt-v2v: error: file ‘PROTECT_Appliance-disk1.vmdk’ not found in the 
ova

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Test result: Failed to import OVA file.

2. Verified with virt-v2v-2.0.2-1.el9.x86_64

Test scenario-1: -o null

# virt-v2v -i ova protect_appliance.ova -o null
[   0.0] Setting up the source: -i ova protect_appliance.ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
[   4.5] Opening the source
[  11.5] Inspecting the source
[  16.2] Checking for sufficient free disk space in the guest
[  16.2] Converting CentOS Linux release 7.9.2009 (Core) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 107.5] Mapping filesystem data to avoid copying unused and blank areas
[ 109.4] Closing the overlay
[ 109.7] Assigning disks to buses
[ 109.7] Checking if the guest needs BIOS or UEFI to boot
[ 109.7] Setting up the destination: -o null
[ 110.8] Copying disk 1/1
█ 100% [****************************************]
[ 143.4] Creating output metadata
[ 143.4] Finishing off

Test result: The OVA file was imported successfully.

Test scenario-2: import the OVA file to RHV

# virt-v2v -i ova protect_appliance.ova -o rhv -os 10.73.196.77:/home/data --bridge ovirtmgmt -on protect_appliance -v -x |& tee>virt_v2v_ova.log
█ 100% [****************************************]

Test result:
2.1 The virt-v2v command finished successfully.
2.2 The VM 'protect_appliance' cannot be found in RHV Webadmin.
The debug log is called virt_v2v_ova.log.

@rjones, could you have a look at the test result of scenario-2, thanks?

Comment 15 zhoujunqin 2022-04-08 14:26:33 UTC
Created attachment 1871466 [details]
Debug log virt_v2v_ova.log for test scenario-2

Comment 16 zhoujunqin 2022-04-11 06:29:19 UTC
(In reply to zhoujunqin from comment #14)

> 
> Test scenario-2: import the OVA file to RHV
> 
> # virt-v2v -i ova protect_appliance.ova -o rhv -os 10.73.196.77:/home/data
> --bridge ovirtmgmt -on protect_appliance -v -x |& tee>virt_v2v_ova.log
> █ 100% [****************************************]
> 
> Test result:
> 2.1 virt-v2v command finished successfully.
> 2.2 Can't find the VM 'protect_appliance' on RHV Webadmin.
> The debug log called: virt_v2v_ova.log
> 
> @rjones, could you have me have a look at the test result of scenario-2,
> thanks?

Sorry for the confusion; the usage of '-o rhv' is not right here, because it expects an export storage domain as the target.
We already have an existing Bug 1953286 that tracks my test scenario-2 issue.
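
(For reference, a rough sketch of what the '-o rhv' form expects: -os
must point at an RHV Export Storage Domain; the NFS path below is
hypothetical.)

# virt-v2v -i ova protect_appliance.ova -o rhv -os nfs.example.com:/export_domain --bridge ovirtmgmt -on protect_appliance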

I retested scenario-2 with the '-o rhv-upload' option and the test passed, thanks.

# virt-v2v -i ova protect_appliance.ova -o rhv-upload -of qcow2 -oc https://10.73.196.73/ovirt-engine/api -op /home/juzhou/rhvpass -os data_nfs --bridge ovirtmgmt -oo rhv-direct=true
[   0.0] Setting up the source: -i ova protect_appliance.ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
[   4.5] Opening the source
[   8.8] Inspecting the source
[  13.6] Checking for sufficient free disk space in the guest
[  13.6] Converting CentOS Linux release 7.9.2009 (Core) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 104.2] Mapping filesystem data to avoid copying unused and blank areas
[ 106.2] Closing the overlay
[ 106.5] Assigning disks to buses
[ 106.5] Checking if the guest needs BIOS or UEFI to boot
[ 106.5] Setting up the destination: -o rhv-upload -oc https://10.73.196.73/ovirt-engine/api -os data_nfs
[ 118.2] Copying disk 1/1
█ 100% [****************************************]
[ 200.6] Creating output metadata
[ 212.7] Finishing off

Test result: Converting the OVA file to RHV succeeded, and the VM 'PROTECT_Appliance' boots up successfully (I can't do additional checkpoints since I don't know the user's password).

Comment 17 Richard W.M. Jones 2022-04-11 08:51:15 UTC
Yes that looks correct, thanks for testing.

Comment 18 zhoujunqin 2022-04-14 13:56:46 UTC
In order to easily cover the bug scenario in our automation job in the future, 
I created another OVA file.

The steps to create the OVA file are as follows:

1. On vSphere 6.7, navigate to a virtual machine or vApp and, from the Actions menu, select Template > Export OVF Template.

2. In the Name field, enter the name of the template, such as 'ova_bug2069768'.

3. Create a user whose name contains a space on the V2V conversion server.
# useradd space\ user --badname

4. Scp the OVF-related files to the new user's home directory and generate the OVA file.

$ ll ova_bug2069768/
total 4526964
-rw-r--r--. 1 space user space user 2317782016 Apr 14 09:12 ova_bug2069768-1.vmdk
-rw-r--r--. 1 space user space user       8684 Apr 14 09:12 ova_bug2069768-2.nvram
-rw-r--r--. 1 space user space user        286 Apr 14 09:12 ova_bug2069768.mf
-rw-r--r--. 1 space user space user       6526 Apr 14 09:12 ova_bug2069768.ovf


$ tar -cvf ova_bug2069768.ova ova_bug2069768-1.vmdk ova_bug2069768-2.nvram ova_bug2069768.ovf 

$ tar tRvf ova_bug2069768.ova
block 0: -rw-r--r-- space user/space user 2317782016 2022-04-14 09:12 ova_bug2069768-1.vmdk
block 4526919: -rw-r--r-- space user/space user       8684 2022-04-14 09:12 ova_bug2069768-2.nvram
block 4526937: -rw-r--r-- space user/space user       6526 2022-04-14 09:12 ova_bug2069768.ovf
block 4526951: ** Block of NULs **

There is a space in the user/group name.

*******************************************************************************************************
Test: Convert the OVA file to RHV.

Reproduce the bug with version: virt-v2v-2.0.1-1.el9.x86_64

# virt-v2v -i ova  /home/space\ user/ova_bug2069768/ova_bug2069768.ova -o rhv-upload -of qcow2 -oc https://$rhv_engine/ovirt-engine/api -op /home/rhvpass -os data_nfs --bridge ovirtmgmt -oo rhv-direct=true
[   0.0] Setting up the source: -i ova /home/space user/ova_bug2069768/ova_bug2069768.ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
virt-v2v: error: file ‘ova_bug2069768-1.vmdk’ not found in the ova

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]



Verify the bug with version: virt-v2v-2.0.3-1.el9.x86_64

# virt-v2v -i ova  /home/space\ user/ova_bug2069768/ova_bug2069768.ova -o rhv-upload -of qcow2 -oc https://10.73.196.73/ovirt-engine/api -op /home/rhvpass -os data_nfs --bridge ovirtmgmt -oo rhv-direct=true
[   0.7] Setting up the source: -i ova /home/space user/ova_bug2069768/ova_bug2069768.ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
[   1.8] Opening the source
[   6.9] Inspecting the source
[   9.2] Checking for sufficient free disk space in the guest
[   9.2] Converting Red Hat Enterprise Linux 9.0 Beta (Plow) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  59.7] Mapping filesystem data to avoid copying unused and blank areas
[  60.3] Closing the overlay
[  60.5] Assigning disks to buses
[  60.5] Checking if the guest needs BIOS or UEFI to boot
[  60.5] Setting up the destination: -o rhv-upload -oc https://$rhv_engine/ovirt-engine/api -os data_nfs
[  72.5] Copying disk 1/1
█ 100% [****************************************]
[ 119.4] Creating output metadata
[ 127.3] Finishing off

Test result:
I: Conversion finished successfully.
II: The VM in RHV webadmin passed all checkpoints.

So I am moving the bug from ON_QA to VERIFIED, thanks.

Comment 20 errata-xmlrpc 2022-11-15 09:56:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt-v2v security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7968

