Bug 1986386 - Improve error message when converting to RHV using a wrong data domain name that contains a special character
Summary: Improve error message when converting to RHV using a wrong data domain name that contains a special character
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-v2v
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Vera
URL:
Whiteboard: V2V_RHV_INT
Depends On:
Blocks:
 
Reported: 2021-07-27 12:29 UTC by zhoujunqin
Modified: 2023-11-07 09:27 UTC
CC List: 9 users

Fixed In Version: virt-v2v-2.3.4-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-07 08:28:57 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: none


Links
Red Hat Product Errata RHBA-2023:6376 (last updated 2023-11-07 08:29:15 UTC)

Description zhoujunqin 2021-07-27 12:29:49 UTC
Description of problem:
Improve the error message shown when converting a VM to RHV using a wrong data domain name that contains a special character.

Version-Release number of selected component (if applicable):
virt-v2v-1.42.0-14.module+el8.5.0+11846+77888a74.x86_64
libvirt-7.5.0-1.module+el8.5.0+11664+59f87560.x86_64
virtio-win-1.9.17-3.el8_4.noarch
qemu-kvm-6.0.0-25.module+el8.5.0+11890+8e7c3f51.x86_64

RHV - Software Version: 4.4.8.1-0.9.el8ev

How reproducible:
100%

Steps to Reproduce:
1. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v, using a wrong data domain name that contains a special character.

# virt-v2v -ic vpx://vsphere.local%5cAdministrator.198.169/data/10.73.199.217/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib/ -io  vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78  -o rhv-upload -of qcow2 -oc https://10.73.196.73/ovirt-engine/api/ -ip /home/passwd -op /home/rhvpasswd  -os "data3*" -n ovirtmgmt  esx7.0-rhel8.4-x86_64  -oo rhv-direct=false  -oo rhv-cluster=Default3
Traceback (most recent call last):
  File "/tmp/v2v.dpwDC4/rhv-upload-precheck.py", line 85, in <module>
    storage_domain = [sd for sd in storage_domains if sd.name == params['output_storage']][0]
IndexError: list index out of range
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


Actual results:
As in the description: virt-v2v fails with a raw IndexError traceback instead of a clear error message.

Expected results:
virt-v2v should report a clear error like the following:
...
virt-v2v should report the storage-domain error below before copying disks during the conversion:
Traceback (most recent call last):
  File "/var/tmp/v2v.SsuwgG/rhv-upload-precheck.py", line 77, in <module>
    (params['output_storage']))
RuntimeError: The storage domain ‘data*’ does not exist
virt-v2v: error: failed server prechecks, see earlier errors

Refer to https://bugzilla.redhat.com/show_bug.cgi?id=1612653#c35

Additional info:
virt-v2v debug log:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.198.169/data/10.73.199.217/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib/ -io  vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78  -o rhv-upload -of qcow2 -oc https://10.73.196.73/ovirt-engine/api/ -ip /home/passwd -op /home/rhvpasswd  -os "data3*" -n ovirtmgmt  esx7.0-rhel8.4-x86_64  -oo rhv-direct=false  -oo rhv-cluster=Default3 -v -x
virt-v2v: virt-v2v 1.42.0rhel=8,release=14.module+el8.5.0+11846+77888a74 (x86_64)
libvirt version: 7.5.0
/usr/libexec/platform-python '-c' 'import ovirtsdk4'
nbdkit --dump-config
nbdkit version: 1.24.0
nbdkit python '/tmp/v2v.HemFJN/rhv-upload-plugin.py' --dump-plugin >/dev/null
/usr/libexec/platform-python '/tmp/v2v.EOdffL/rhv-upload-precheck.py' '/tmp/v2v.EOdffL/params1.json'
Traceback (most recent call last):
  File "/tmp/v2v.EOdffL/rhv-upload-precheck.py", line 85, in <module>
    storage_domain = [sd for sd in storage_domains if sd.name == params['output_storage']][0]
IndexError: list index out of range
virt-v2v: error: failed server prechecks, see earlier errors
rm -rf '/tmp/v2v.Cpeb3L'
rm -rf '/tmp/v2v.IE8fcM'
rm -rf '/tmp/v2v.HemFJN'
rm -rf '/tmp/v2v.SXMUEK'
rm -rf '/tmp/v2v.EOdffL'
rm -rf '/tmp/rhvupload.P9lPvK'
rm -rf '/var/tmp/null.3txeFN'

Comment 1 Richard W.M. Jones 2021-07-27 12:35:49 UTC
Nir Soffer wrote:

> > Traceback (most recent call last):
> >   File "/tmp/v2v.bTPSGc/rhv-upload-precheck.py", line 85, in <module>
> >     storage_domain = [sd for sd in storage_domains if sd.name == params['output_storage']][0]
> > IndexError: list index out of range

This is not a proper way to check the response. We need to check this
case and fail with a good error message about the missing storage domain,
similar to how we handle a missing host:
https://github.com/libguestfs/virt-v2v/blob/0486dbe7348dc5835f4c06e6535c56ca2fe8f38c/v2v/rhv-upload-plugin.py#L434
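
For illustration, a minimal sketch of such a check in rhv-upload-precheck.py,
assuming the script's existing storage_domains search result and params dict
from the traceback above (a sketch, not the actual patch):

    # Instead of indexing the search result directly, which raises a
    # bare IndexError when nothing matches, check for an empty result
    # and fail with a descriptive message.
    matches = [sd for sd in storage_domains
               if sd.name == params['output_storage']]
    if not matches:
        raise RuntimeError("The storage domain ‘%s’ does not exist" %
                           params['output_storage'])
    storage_domain = matches[0]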

But the issue seems to be here:
https://github.com/libguestfs/virt-v2v/blob/0486dbe7348dc5835f4c06e6535c56ca2fe8f38c/v2v/rhv-upload-precheck.py#L62

We do a search using:

    search='storage.name=data*'

This treats data* as a glob pattern, so it will match any storage domain
starting with "data".

I don't know where search pattern syntax is documented, but there is example
code here showing how to search for storage domains:
https://gerrit.ovirt.org/c/ovirt-engine-sdk/+/115896/1/sdk/examples/list_storage_domains.py

Testing shows that "\*" disables the * metacharacter:

$ ./list_storage_domains.py -c engine-dev -s 'name=nfs*'
[
  {
    "name": "nfs-00",
    "id": "8ece2aae-5c72-4a5c-b23b-74bae65c88e1",
    "type": "data"
  },                                                                            
  ...
]

$ ./list_storage_domains.py -c engine-dev -s 'name=nfs\*'
[]

So we can probably fix the search using:

    glob.escape(params['output_storage'])
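
For illustration only, a sketch of building the search string with the
metacharacter neutralized; it is an assumption that the engine's search
syntax accepts the bracket escaping that glob.escape produces, rather than
only the backslash form tested above:

    import glob

    # glob.escape("data*") returns "data[*]", which neutralizes the
    # metacharacter in glob syntax. If the engine only understands
    # the backslash form ("name=nfs\*"), escape with a backslash
    # instead.
    search = 'storage.name=%s' % glob.escape(params['output_storage'])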

But a storage domain name can contain only "a-z A-Z 0-9 _ -", so we can
fail without calling the server if output_storage contains an invalid
character, as sketched below.
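
A minimal sketch of such a pre-flight check, assuming exactly the character
set quoted above (a sketch, not the actual patch):

    import re

    # Fail fast, before contacting the server, if the -os value
    # contains a character that can never appear in a storage
    # domain name.
    if not re.match(r'\A[-a-zA-Z0-9_]+\Z', params['output_storage']):
        raise RuntimeError(
            "The storage domain (-os) parameter ‘%s’ is not valid" %
            params['output_storage'])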

Finally, I don't know why we search for the data center when we validate
the storage domain name. Maybe this was copied from rhv-upload-plugin.py.
I think we only need to look up the storage domain itself when validating
the storage domain argument.

Comment 2 Eric Hadley 2021-09-08 16:50:13 UTC
Bulk update: moving RHEL-AV bugs to RHEL 9. If a fix is needed in RHEL 8, clone to the current RHEL 8 release.

Comment 4 Richard W.M. Jones 2023-01-26 12:32:26 UTC
Patch posted:
https://listman.redhat.com/archives/libguestfs/2023-January/030525.html
https://listman.redhat.com/archives/libguestfs/2023-January/030524.html

Moving to RHEL 9.3 because the fix is risky.

Comment 5 Richard W.M. Jones 2023-01-27 12:27:19 UTC
Second attempt:
https://listman.redhat.com/archives/libguestfs/2023-January/030529.html

Comment 7 Vera 2023-04-26 03:01:31 UTC
Verified with the versions:
libvirt-9.2.0-1.el9.x86_64
libguestfs-1.50.1-3.el9.x86_64
qemu-kvm-8.0.0-1.el9.x86_64
virt-v2v-2.3.4-1.el9.x86_64

Steps:
Convert the guest with a wrong storage domain name (-os).

# virt-v2v -ic vpx://root.212.149/data/10.73.212.36/?no_verify=1 -o rhv-upload -of raw -os nfs_data* -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /v2v-ops/rhvpasswd  -oo rhv-cafile=/v2v-ops/ca22.pem -oo rhv-cluster=Default3 esx8.0-opensuse42.3-x86_64  -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=D1:03:96:7E:11:3D:7C:4C:B6:50:28:1B:63:74:B5:40:5F:9D:9F:94 -ip /v2v-ops/esxpw
[   0.1] Setting up the source: -i libvirt -ic vpx://root.212.149/data/10.73.212.36/?no_verify=1 -it vddk esx8.0-opensuse42.3-x86_64
[   1.9] Opening the source
[   7.6] Inspecting the source
[  20.5] Checking for sufficient free disk space in the guest
[  20.5] Converting openSUSE Leap 42.3 to run on KVM
virt-v2v: The QEMU Guest Agent will be installed for this guest at first 
boot.
virt-v2v: This guest has virtio drivers installed.
[ 102.5] Mapping filesystem data to avoid copying unused and blank areas
[ 103.1] Closing the overlay
[ 103.4] Assigning disks to buses
[ 103.4] Checking if the guest needs BIOS or UEFI to boot
[ 103.4] Setting up the destination: -o rhv-upload -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data*
Traceback (most recent call last):
  File "/tmp/v2v.YvLUMO/rhv-upload-precheck.py", line 57, in <module>
    raise RuntimeError("The storage domain (-os) parameter ‘%s’ is not valid" %
RuntimeError: The storage domain (-os) parameter ‘nfs_data*’ is not valid
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


Moving to Verified:Tested.

Comment 10 Vera 2023-05-06 08:08:37 UTC
Verified again with the following versions:
qemu-kvm-8.0.0-1.el9.x86_64
libvirt-9.2.0-1.el9.x86_64
virt-v2v-2.3.4-1.el9.x86_64
libguestfs-1.50.1-4.el9.x86_64

# virt-v2v -ic vpx://root.212.149/data/cluster/10.73.212.36/?no_verify=1 -o rhv-upload -of raw -os nfs_data* -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /v2v-ops/rhvpasswd  -oo rhv-cafile=/v2v-ops/ca22.pem -oo rhv-cluster=Default3 esx8.0-rhel8.6-x86_64  -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=D1:03:96:7E:11:3D:7C:4C:B6:50:28:1B:63:74:B5:40:5F:9D:9F:94 -ip /v2v-ops/esxpw
[   0.0] Setting up the source: -i libvirt -ic vpx://root.212.149/data/cluster/10.73.212.36/?no_verify=1 -it vddk esx8.0-rhel8.6-x86_64
[   1.8] Opening the source
[  10.7] Inspecting the source
[  29.1] Checking for sufficient free disk space in the guest
[  29.1] Converting Red Hat Enterprise Linux 8.6 (Ootpa) to run on KVM
virt-v2v: The QEMU Guest Agent will be installed for this guest at first 
boot.
virt-v2v: This guest has virtio drivers installed.
[ 158.4] Mapping filesystem data to avoid copying unused and blank areas
[ 159.1] Closing the overlay
[ 159.4] Assigning disks to buses
[ 159.4] Checking if the guest needs BIOS or UEFI to boot
[ 159.4] Setting up the destination: -o rhv-upload -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data*
Traceback (most recent call last):
  File "/tmp/v2v.MzzAUy/rhv-upload-precheck.py", line 57, in <module>
    raise RuntimeError("The storage domain (-os) parameter ‘%s’ is not valid" %
RuntimeError: The storage domain (-os) parameter ‘nfs_data*’ is not valid
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Marking as Verified.

Comment 12 errata-xmlrpc 2023-11-07 08:28:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt-v2v bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6376

