Bug 2001323 - Libvirt cannot get disk info of the guest installed on vmware when disk Minor device number >15
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: mxie@redhat.com
URL:
Whiteboard:
Depends On: 1738392
Blocks:
Reported: 2021-09-05 14:14 UTC by John Ferlan
Modified: 2022-05-17 13:03 UTC
CC List: 16 users

Fixed In Version: libvirt-7.7.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1738392
Environment:
Last Closed: 2022-05-17 12:45:08 UTC
Type: Bug
Target Upstream Version: 7.7.0
Embargoed:


Attachments: none


Links
Red Hat Issue Tracker RHELPLAN-96253 (last updated 2021-09-05 14:15:37 UTC)
Red Hat Product Errata RHBA-2022:2390 (last updated 2022-05-17 12:45:33 UTC)

Description John Ferlan 2021-09-05 14:14:49 UTC
+++ This bug was initially created as a clone of Bug #1738392 +++

Description of problem:
Libvirt cannot get disk info for a guest installed on VMware when the disk's minor device number is >15

Version-Release number of selected component (if applicable):
libvirt-5.5.0-2.module+el8.1.0+3773+7dd501bf.x86_64
qemu-kvm-4.0.0-6.module+el8.1.0+3736+a2aefea3.x86_64

How reproducible:
100%

Steps to reproduce:
1. Prepare a guest installed on ESXi 6.7 and add a disk with minor device number 16 to the guest:
# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0    7G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0    6G  0 part
  ├─rhel-root 253:0    0  5.3G  0 lvm  /
  └─rhel-swap 253:1    0  716M  0 lvm  [SWAP]
sdb             8:16   0    1G  0 disk
sr0            11:0    1 1024M  0 rom 
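
(Context, not from the original report: the Linux sd driver gives each whole disk 16 minor numbers, one for the disk itself plus up to 15 partitions, so the second disk, sdb, starts at minor 16. That is the "minor device number >15" in the summary. A quick check from inside the guest:)

# cat /sys/class/block/sdb/dev
8:16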

2. Check the guest's XML info:
#  virsh -c vpx://root@10.73.73.141/data/10.73.75.219/?no_verify=1 dumpxml esx6.7-rhel7.7-x86_64
Enter root's password for 10.73.73.141:
<domain type='vmware' id='705' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>esx6.7-rhel7.7-x86_64</name>
  <uuid>422c0152-63ab-cd03-9650-4301ae77aefd</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1000</shares>
  </cputune>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <source file='[esx6.7] esx6.7-rhel7.7-x86_64/esx6.7-rhel7.7-x86_64-000004.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='vmpvscsi'/>
    <interface type='bridge'>
      <mac address='00:50:56:ac:3e:a1'/>
      <source bridge='VM Network'/>
      <model type='vmxnet3'/>
    </interface>
    <video>
      <model type='vmvga' vram='8192' primary='yes'/>
    </video>
  </devices>
  <vmware:datacenterpath>data</vmware:datacenterpath>
  <vmware:moref>vm-705</vmware:moref>
</domain>

Actual results:
Only the first disk (sda) appears in the dumped XML above; the second disk (sdb, minor number 16) is missing.

Expected results:
Libvirt can get disk info even when the disk's minor device number is >15.

Additional info:
1. The bug cannot be reproduced when the disk's minor number is <=15.
2. The bug can also be reproduced on a RHEL 7 host.
3. The bug can be reproduced on guests where a single SCSI controller carries more than 16 disks, i.e. unit numbers above 15 (see the illustrative .vmx excerpt below).
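
For illustration only (a hypothetical .vmx excerpt; file names assumed, not taken from the attachment): VMware keys each disk in the .vmx as scsiC:U, where C is the controller index and U the unit number on its bus. Unit 7 is reserved for the controller itself, and unit numbers above 15 are what libvirt's VMX parser could not handle before the fix.

# grep -E '^scsi0(:[0-9]+)?\.' esx6.7-rhel7.7-x86_64.vmx     (output illustrative)
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "esx6.7-rhel7.7-x86_64.vmdk"
scsi0:16.present = "TRUE"
scsi0:16.fileName = "esx6.7-rhel7.7-x86_64_2.vmdk"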

--- Additional comment from Richard W.M. Jones on 2019-08-07 09:23:22 UTC ---

Could you grab the .vmx file from the VMware instance?  (I would do it myself
but I couldn't guess the root password of the ESXi machine)

To do this you will have to go to https://10.73.75.219/folder and
enter the root password when requested.

Then navigate through the folders until you find the guest directory, and
there should be a .vmx file in there.

--- Additional comment from liuzi on 2019-08-07 10:08:04 UTC ---

(In reply to Richard W.M. Jones from comment #1)
> Could you grab the .vmx file from the VMware instance?  (I would do it myself
> but I couldn't guess the root password of the ESXi machine)
> 
> To do this you will have to go to https://10.73.75.219/folder and
> enter the root password when requested.
> 
> Then navigate through the folders until you find the guest directory, and
> there should be a .vmx file in there.

Hi Richard,
I added the .vmx file named “esx6.7-rhel7.7-x86_64.vmx” as an attachment when I filed the bug; please find it in the attachments. Thanks!

--- Additional comment from Richard W.M. Jones on 2019-08-07 10:19:04 UTC ---

Oops, sorry didn't see that :-/

I can confirm this bug happens with libvirt 5.5.0-2.fc31 in Fedora too, using:

$ virsh -c 'esx://root@10.73.72.61?no_verify=1' domxml-from-native vmware-vmx esx6.7-rhel7.7-x86_64.vmx
(root password: 123qweP)

--- Additional comment from RHEL Program Management on 2020-10-20 11:04:23 UTC ---

pm_ack is no longer used for this product. The flag has been reset.

See https://issues.redhat.com/browse/PTT-1821 for additional details or contact lmiksik if you have any questions.

--- Additional comment from RHEL Program Management on 2021-02-13 07:34:54 UTC ---

30-day auto-close warning: This bz has been open for an extended time without being approved for a release (a release+ or zstream+ flag). Please consider prioritizing the work appropriately to get it approved for a release, or close the bz. Otherwise, if it is still open on the “Stale date”, it will close automatically (CLOSED WONTFIX).

--- Additional comment from RHEL Program Management on 2021-03-15 07:38:13 UTC ---

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

--- Additional comment from Richard W.M. Jones on 2021-03-15 10:29:44 UTC ---

It's still a bug; what makes this "stale" bug process think otherwise?

--- Additional comment from Red Hat Bugzilla on 2021-07-01 12:44:51 UTC ---

remove performed by PnT Account Manager <pnt-expunge>

--- Additional comment from Michal Privoznik on 2021-08-03 18:42:10 UTC ---

Patches posted upstream:

https://listman.redhat.com/archives/libvir-list/2021-August/msg00038.html

--- Additional comment from Michal Privoznik on 2021-08-16 12:24:59 UTC ---

Merged upstream as:

32f7db0989 vmx: Support super wide SCSI bus
5c254bb541 conf: Store SCSI bus length in virDomainDef
48344c640f vmx: Drop needless check in virVMXParseDisk()
d628c5ded1 vmx: Rework disk def allocation
de1829059a vmx2xmltest: Add a test case
5e16038284 vmx: Fill virtualHW.version to ESX version mapping

v7.6.0-133-g32f7db0989
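
(Note, added for context: the trailing v7.6.0-133-g32f7db0989 means the series landed 133 commits after the v7.6.0 release, so 7.7.0 is the first upstream release containing it, matching "Fixed In Version: libvirt-7.7.0-1.el9" above. A quick check for whether a host already carries the fix, output illustrative:

# rpm -q libvirt-client
libvirt-client-7.7.0-1.el9.x86_64
)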

Comment 1 mxie@redhat.com 2021-09-13 11:43:55 UTC
Reproduced the bug with libvirt-client-7.6.0-2.el9.x86_64

Steps to reproduce:
1. Prepare a guest with more than 16 disks on a VMware ESXi host, then use virsh to dump the guest's libvirt XML.
Only disks with SCSI unit numbers 15 and lower appear in the dumped XML:

# virsh -c vpx://root@10.73.198.169/data/10.73.199.217/?no_verify=1 dumpxml Auto-esx7.0-rhel8.5-with-more-than-16disks
Enter root's password for 10.73.198.169: 
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>Auto-esx7.0-rhel8.5-with-more-than-16disks</name>
  <uuid>4203a96c-ea55-e026-04f3-b690e22ca349</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1000</shares>
  </cputune>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_2.vmdk'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_3.vmdk'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_4.vmdk'/>
      <target dev='sdd' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_5.vmdk'/>
      <target dev='sde' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='4'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_6.vmdk'/>
      <target dev='sdf' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='5'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_7.vmdk'/>
      <target dev='sdg' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='6'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_8.vmdk'/>
      <target dev='sdh' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='8'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_9.vmdk'/>
      <target dev='sdi' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='9'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_10.vmdk'/>
      <target dev='sdj' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='10'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_11.vmdk'/>
      <target dev='sdk' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='11'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_12.vmdk'/>
      <target dev='sdl' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='12'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_13.vmdk'/>
      <target dev='sdm' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='13'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_14.vmdk'/>
      <target dev='sdn' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='14'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_15.vmdk'/>
      <target dev='sdo' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='15'/>
    </disk>
    <controller type='scsi' index='0' model='vmpvscsi'/>
    <interface type='bridge'>
      <mac address='00:50:56:83:90:c1' type='generated'/>
      <source bridge='VM Network'/>
      <model type='vmxnet3'/>
    </interface>
    <video>
      <model type='vmvga' vram='8192' primary='yes'/>
    </video>
  </devices>
  <vmware:datacenterpath>data</vmware:datacenterpath>
  <vmware:moref>vm-6189</vmware:moref>
</domain>



Tested the fix with libvirt-client-7.7.0-1.el9.x86_64

Steps:
1. Prepare a guest with more than 16 disks on a VMware ESXi host, then use virsh to dump the guest's libvirt XML:
# virsh -c vpx://root@10.73.198.169/data/10.73.199.217/?no_verify=1 dumpxml Auto-esx7.0-rhel8.5-with-more-than-16disks
Enter root's password for 10.73.198.169: 
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>Auto-esx7.0-rhel8.5-with-more-than-16disks</name>
  <uuid>4203a96c-ea55-e026-04f3-b690e22ca349</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1000</shares>
  </cputune>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_2.vmdk'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_3.vmdk'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_4.vmdk'/>
      <target dev='sdd' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_5.vmdk'/>
      <target dev='sde' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='4'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_6.vmdk'/>
      <target dev='sdf' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='5'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_7.vmdk'/>
      <target dev='sdg' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='6'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_8.vmdk'/>
      <target dev='sdh' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='8'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_9.vmdk'/>
      <target dev='sdi' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='9'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_10.vmdk'/>
      <target dev='sdj' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='10'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_11.vmdk'/>
      <target dev='sdk' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='11'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_12.vmdk'/>
      <target dev='sdl' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='12'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_13.vmdk'/>
      <target dev='sdm' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='13'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_14.vmdk'/>
      <target dev='sdn' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='14'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_15.vmdk'/>
      <target dev='sdo' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='15'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_16.vmdk'/>
      <target dev='sdp' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='16'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_17.vmdk'/>
      <target dev='sdq' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='17'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_18.vmdk'/>
      <target dev='sdr' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='18'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_19.vmdk'/>
      <target dev='sds' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='19'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_20.vmdk'/>
      <target dev='sdt' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='20'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_21.vmdk'/>
      <target dev='sdu' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='21'/>
    </disk>
    <controller type='scsi' index='0' model='vmpvscsi'/>
    <interface type='bridge'>
      <mac address='00:50:56:83:90:c1' type='generated'/>
      <source bridge='VM Network'/>
      <model type='vmxnet3'/>
    </interface>
    <video>
      <model type='vmvga' vram='8192' primary='yes'/>
    </video>
  </devices>
  <vmware:datacenterpath>data</vmware:datacenterpath>
  <vmware:moref>vm-6189</vmware:moref>
</domain>

Result:
    Virsh can now dump all of the guest's disks (more than 16) from VMware.
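
A quick sanity check on the dump above (command illustrative; password prompt omitted): counting the <disk> elements confirms that all 21 disks made it into the XML.

# virsh -c vpx://root@10.73.198.169/data/10.73.199.217/?no_verify=1 dumpxml Auto-esx7.0-rhel8.5-with-more-than-16disks | grep -c '<disk '
21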

Comment 4 mxie@redhat.com 2021-10-21 11:26:49 UTC
Verified the bug with libvirt-client-7.8.0-1.el9.x86_64

Steps:
1. Prepare a guest with more than 16 disks on a VMware ESXi host, then use virsh to dump the guest's libvirt XML:
# virsh -c vpx://root@10.73.198.169/data/10.73.199.217/?no_verify=1 dumpxml Auto-esx7.0-rhel8.5-with-more-than-16disks
Enter root's password for 10.73.198.169: 
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>Auto-esx7.0-rhel8.5-with-more-than-16disks</name>
  ....
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_2.vmdk'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_3.vmdk'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    .....
    .....
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_20.vmdk'/>
      <target dev='sdt' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='20'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore1] Auto-esx7.0-rhel8.5-with-16disks/Auto-esx7.0-rhel8.5-with-16disks_21.vmdk'/>
      <target dev='sdu' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='21'/>
    </disk>
    .....

Result:
    Virsh can dump all of the guest's disks (more than 16) from VMware; moving the bug from ON_QA to VERIFIED.

Comment 6 errata-xmlrpc 2022-05-17 12:45:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390

