Bug 2027598 - There is no guest listed in export domain if use v2v to convert guest to rhv via -o rhv
Summary: There is no guest listed in export domain if use v2v to convert guest to rhv ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-v2v
Version: 9.0
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Assignee: Laszlo Ersek
QA Contact: Vera
URL:
Whiteboard:
Depends On: 2040609 2040610
Blocks:
 
Reported: 2021-11-30 08:34 UTC by mxie@redhat.com
Modified: 2022-05-17 13:43 UTC
CC List: 14 users

Fixed In Version: virt-v2v-1.45.97-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-17 13:41:56 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
guest-export-domain-after-v2v.png (57.72 KB, image/png), 2021-11-30 08:34 UTC, mxie@redhat.com


Links
Red Hat Bugzilla 2034240 (CLOSED): calling "get_disk_allocated" in "create_ovf" breaks the rhv-upload output plugin. Last updated 2023-09-09 15:03:19 UTC.
Red Hat Issue Tracker RHELPLAN-104233. Last updated 2021-11-30 08:36:01 UTC.
Red Hat Product Errata RHEA-2022:2566. Last updated 2022-05-17 13:42:10 UTC.

Description mxie@redhat.com 2021-11-30 08:34:09 UTC
Created attachment 1844133 [details]
guest-export-domain-after-v2v.png

Description of problem:
No guest is listed in the export domain after using virt-v2v to convert a guest to RHV via -o rhv.

Version-Release number of selected component (if applicable):
rhv4.4.8.3-0.10.el8ev
virt-v2v-1.45.91-1.el9.x86_64 
libguestfs-1.46.0-5.el9.x86_64
guestfs-tools-1.46.1-5.el9.x86_64    
nbdkit-1.28.2-2.el9.x86_64
libvirt-libs-7.9.0-1.el9.x86_64
qemu-img-6.1.0-7.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Convert a guest from VMware to RHV 4.4 via -o rhv with virt-v2v:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk6.5 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -ip /home/passwd   -o rhv -os 10.73.194.236:/home/nfs_export -b ovirtmgmt  esx6.7-rhel8.4-x86_64 -v -x |& ts  > virt-v2v-1.45.91-1-rhv-export.log
█ 100% [****************************************]

2. Check the guest on the RHV 4.4 web UI after the v2v conversion: there is no disk listed in the export domain; please check the screenshot "guest-export-domain-after-v2v".

3. SSH to the RHV node; the guest data exists in the NFS storage where the export domain is built:

# cat 3ce2db8d-3d2a-4817-beea-1e856bfeafc3/3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf |grep esx6.7-rhel8.4-x86_64
    <Name>esx6.7-rhel8.4-x86_64</Name>

# cat 3ce2db8d-3d2a-4817-beea-1e856bfeafc3/3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf |grep "Drive 1" -A 4
        <rasd:Caption>Drive 1</rasd:Caption>
        <rasd:InstanceId>e80a333c-cf4b-40f9-bc70-7a63778f26b1</rasd:InstanceId>
        <rasd:ResourceType>17</rasd:ResourceType>
        <Type>disk</Type>
        <rasd:HostResource>ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1</rasd:HostResource>

# cat ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1.meta 
DOMAIN=e2409bea-6f48-45e5-804e-6fa3d0660631
VOLTYPE=LEAF
CTIME=1638251371
MTIME=1638251371
IMAGE=ce8f5ddb-71e8-44a3-bd4c-37bed44bf767
DISKTYPE=2
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
POOL_UUID=
SIZE=25165824
FORMAT=RAW
TYPE=SPARSE
DESCRIPTION=generated by helper-v2v-output 1.45.91rhel_9,release_1.el9
EOF

# qemu-img info ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1
image: ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1
file format: raw
virtual size: 12 GiB (12884901888 bytes)
disk size: 1.77 GiB



Actual results:
As described above.

Expected results:
All checkpoints of the guest pass after the v2v conversion.

Additional info:

Comment 4 Laszlo Ersek 2021-11-30 14:30:28 UTC
As far as I understand the bug report, virt-v2v completes without errors, and on the target host / NFS storage, all artifacts exist that virt-v2v is supposed to create. So I think we should at least look at the logs of the receiving side (RHV); after all, from the data available thus far, the real disconnect appears to be between the artifacts on the target host / NFS storage, and the RHV WebUI.

Comment 7 mxie@redhat.com 2021-11-30 14:53:17 UTC
Actually, I didn't find any useful info or errors in vdsm.log or engine.log, but I will upload these logs anyway. Besides, I can't reproduce the bug with virt-v2v-1.45.3-3.el9.x86_64.

Comment 9 Richard W.M. Jones 2021-11-30 14:54:38 UTC
(In reply to Laszlo Ersek from comment #4)
> As far as I understand the bug report, virt-v2v completes without errors,
> and on the target host / NFS storage, all artifacts exist that virt-v2v is
> supposed to create. So I think we should at least look at the logs of the
> receiving side (RHV); after all, from the data available thus far, the real
> disconnect appears to be between the artifacts on the target host / NFS
> storage, and the RHV WebUI.

This method goes via the RHV Export Storage Domain (ESD).  The ESD feature
of RHV has been deprecated for a while.  Also we never really used the ESD
in a supported way - we reverse engineered how it worked so we could write
the guest in the same format directly in the ESD, which normally you're
not supposed to do.

I wonder though if either RHV ESD has bitrotted somehow, or if the format
of the ESD has been changed.

I agree in general this looks like a RHV problem of some kind, but maybe
one which has been created by virt-v2v's unorthodox use of the ESD.

Comment 13 Richard W.M. Jones 2021-12-01 09:56:00 UTC
mxie, is this a regression in virt-v2v 1.45.9x vs the previous version in RHEL 9
(1.45.3-3: https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1701152)

If it is then it's probably something I messed up when I was reorganizing the
code into output modules and TestBlocker would be justified.

If not then it's a change in RHV and there's nothing much we can do.

Comment 14 mxie@redhat.com 2021-12-01 10:09:50 UTC
(In reply to Richard W.M. Jones from comment #13)
> mxie, is this a regression in virt-v2v 1.45.9x vs the previous version in
> RHEL 9
> (1.45.3-3:
> https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1701152)
> 
> If it is then it's probably something I messed up when I was reorganizing the
> code into output modules and TestBlocker would be justified.
> 
> If not then it's a change in RHV and there's nothing much we can do.

Yes, I think it's a regression, because I can't reproduce the bug with virt-v2v-1.45.3-3, as mentioned in comment 7.

Comment 15 Richard W.M. Jones 2021-12-01 10:52:18 UTC
Sorry I missed your earlier comment.  Yes, this is a bug in virt-v2v in that case,
probably introduced when I refactored the code into modules.

I'm not sure what TestBlocker implies here, but I'll try to fix this as soon as I can.

Comment 16 Laszlo Ersek 2021-12-01 10:57:30 UTC
I've carefully read through the following (great!) manuals:
- https://libguestfs.org/virt-v2v.1.html
- https://libguestfs.org/virt-v2v-output-rhv.1.html

(1) If the legacy rhv output method is deprecated ("The Export Storage
Domain was deprecated in oVirt 4, and so we expect that this method will
stop working at some point in the future."), then why does QE keep
regression-testing it? (Consequently, why analyze the issue on the
developer side, once the bug has been reported?) What's wrong with just
abandoning "rhv" and using "rhv-upload" consistently instead?

(2) We have four log files here:
- virt-v2v-1.45.91-1-rhv-export.log
- vdsm.log
- virt-v2v-1.45.3-3-o-rhv.log
- engine.log

(2.1) I think "vdsm.log" is not even expected to contain anything
useful, because: "If you use virt-v2v from the RHV-M user interface, then
behind the scenes the import is managed by VDSM using the -o vdsm output
mode (which end users should not try to use directly)". We don't use
RHV-M to start virt-v2v here, and accordingly, the output mode is "rhv",
not "vdsm", so I think "vdsm.log" is irrelevant.

(2.2) On the other hand, "engine.log" *should* be relevant. ("Diagnosing
these failures is infuriatingly difficult as the UI generally hides the
true reason for the failure.") However, looking at "engine.log", I see
*absolutely nothing* related to Ming's attempt (through the WebUI) to
*scan* the Export Storage Domain for newly converted (= importable) disks
and OVF files. The log only mentions logins, logouts, authentication
stuff, and "not removing session <whatever>, session has running
commands for user <whatever>". What those running commands might
actually be, the log file is real shy about. This log file is *utterly
useless*.

(2.3) Because Ming states that "virt-v2v-1.45.3-3" does not encounter
the issue, I made an attempt to compare the OVF files generated by both
virt-v2v versions. For simplifying the comparison, I've replaced the
following strings in the corresponding OVFs:

- 1.45.3-3:
  - 31183beb-976a-4dee-89f5-5b379b376bd2         -> DISK_DIR
  - 3c46d6b9-8260-4413-94dd-5ae7864c34ca         -> DISK_FILE
  - virt-v2v 1.45.3rhel=9,release=3.el9          -> TOOL
  - 1d5067a2-1ba0-4c74-8dba-fa64622ec7f8         -> SNAPSHOT_ID
  - e5d59941-054e-4be4-bf8d-c40e093f13c8         -> OS_SECTION_ID
  - v2v-1.45.3-3-esx7.0-win2019-x86_64           -> GUEST_NAME
  - 2021/11/30 14:34:42                          -> CREATION_DATE

- 1.45.91-1:
  - ce8f5ddb-71e8-44a3-bd4c-37bed44bf767          -> DISK_DIR
  - e80a333c-cf4b-40f9-bc70-7a63778f26b1          -> DISK_FILE
  - helper-v2v-output 1.45.91rhel=9,release=1.el9 -> TOOL
  - 95e6c366-f24e-49e7-a19a-90cd6c29a735          -> SNAPSHOT_ID
  - 3ce2db8d-3d2a-4817-beea-1e856bfeafc3          -> OS_SECTION_ID
  - esx6.7-rhel8.4-x86_64                         -> GUEST_NAME
  - 2021/11/30 05:56:11                          -> CREATION_DATE

The following is the list of differences:

(2.3.1) ovf:Envelope/References/File

Version 1.45.3-3 generated an attribute

  ovf:size='10703609856'

but version 1.45.91-1 did not.

(2.3.2) ovf:Envelope/Section[@xsi:type='ovf:DiskSection_Type']/Disk

Different disks were converted by 1.45.3-3 and 1.45.91-1, apparent from
these attribute differences:

-     ovf:size='20'
-     ovf:capacity='21474836480'
+     ovf:size='12'
+     ovf:capacity='12884901888'

Still in this element, *only* version 1.45.3-3 generated the following
attribute:

  ovf:actual_size='10'

(2.3.3) ovf:Envelope/Content/Section[@xsi:type='ovf:OperatingSystemSection_Type']

It's now obvious that the two virt-v2v versions were used to convert
distinct guests:

-      <Info>Windows Server 2019 Standard</Info>
-      <Description>windows_2016x64</Description>
+      <Info>Red Hat Enterprise Linux 8.4 (Ootpa)</Info>
+      <Description>rhel_8x64</Description>

(2.3.4) ovf:Envelope/Content/Section[@xsi:type='ovf:VirtualHardwareSection_Type']/Item[1]

*Only* version 1.45.3-3 generated the following element:

  <rasd:threads_per_cpu>1</rasd:threads_per_cpu>

(2.3.5) ovf:Envelope/Content/Section[@xsi:type='ovf:VirtualHardwareSection_Type']/Item[4]

Different video devices:

-        <rasd:Device>qxl</rasd:Device>
+        <Device>vga</Device>

This difference comes from
<https://bugzilla.redhat.com/show_bug.cgi?id=1961107#c31>.

(2.3.6) ovf:Envelope/Content/Section[@xsi:type='ovf:VirtualHardwareSection_Type']/Item[8]

Only version 1.45.91-1 generated the following element:

  <rasd:MACAddress>00:50:56:ac:d0:17</rasd:MACAddress>

(I've ignored all UUID differences in the <rasd:InstanceId> elements,
within the various <Item> elements.)


I don't know if these differences justify RHV to disregard the
importable guest. "engine.log" certainly doesn't indicate anything like
that.

What would perhaps help (with further comparison) is a recursive
directory listing of the Export Storage Domain directory
("10.73.194.236:/home/nfs_export"), after both conversions complete.

Comment 19 Richard W.M. Jones 2021-12-01 11:06:44 UTC
(In reply to Laszlo Ersek from comment #16)
> I've carefully read through the following (great!) manuals:
> - https://libguestfs.org/virt-v2v.1.html
> - https://libguestfs.org/virt-v2v-output-rhv.1.html
> 
> (1) If the legacy rhv output method is deprecated ("The Export Storage
> Domain was deprecated in oVirt 4, and so we expect that this method will
> stop working at some point in the future."), then why does QE keep
> regression-testing it? (Consequently, why analyze the issue on the
> developer side, once the bug has been reported?) What's wrong with just
> abandoning "rhv" and using "rhv-upload" consistently instead?

We still have customers who use this.  It's occasionally useful
for ad hoc imports where the RHV UI method messes up.  In fact we
recommended this to a customer quite recently.  It's a bit marginal
now (-o rhv-upload is arguably as easy to use).  Obviously when
the ESD does actually go away we'll have to remove it.

Anyway if I messed up the conversion to modules then I can fix this.

> (2) We have four log files here:
> - virt-v2v-1.45.91-1-rhv-export.log
> - vdsm.log
> - virt-v2v-1.45.3-3-o-rhv.log
> - engine.log
> 
> (2.1) I think "vdsm.log" is not even expected to contain anything
> useful, because: "If you use virt-v2v from the RHV-M user interface, then
> behind the scenes the import is managed by VDSM using the -o vdsm output
> mode (which end users should not try to use directly)". We don't use
> RHV-M to start virt-v2v here, and accordingly, the output mode is "rhv",
> not "vdsm", so I think "vdsm.log" is irrelevant.

Right, vdsm.log isn't relevant to this because it's likely
to be a bug in virt-v2v itself, but we didn't know that before.

> (2.2) On the other hand, "engine.log" *should* be relevant. ("Diagnosing
> these failures is infuriatingly difficult as the UI generally hides the
> true reason for the failure.") However, looking at "engine.log", I see
> *absolutely nothing* related to Ming's attempt (through the WebUI) to
> *scan* the Export Storage Domain for newly converted (= importable) disks
> and OVF files. The log only mentions logins, logouts, authentication
> stuff, and "not removing session <whatever>, session has running
> commands for user <whatever>". What those running commands might
> actually be, the log file is real shy about. This log file is *utterly
> useless*.

I'm not really sure how the ESD actually works.  But it's plausible
that if we're putting the files in the wrong location that the oVirt
engine wouldn't "see" them and there wouldn't be anything to see in
engine.log.

> (2.3) Because Ming states that "virt-v2v-1.45.3-3" does not encounter
> the issue, I made an attempt to compare the OVF files generated by both
> virt-v2v versions. For simplifying the comparison, I've replaced the
> following strings in the corresponding OVFs:
> 
> - 1.45.3-3:
>   - 31183beb-976a-4dee-89f5-5b379b376bd2         -> DISK_DIR
>   - 3c46d6b9-8260-4413-94dd-5ae7864c34ca         -> DISK_FILE
>   - virt-v2v 1.45.3rhel=9,release=3.el9          -> TOOL
>   - 1d5067a2-1ba0-4c74-8dba-fa64622ec7f8         -> SNAPSHOT_ID
>   - e5d59941-054e-4be4-bf8d-c40e093f13c8         -> OS_SECTION_ID
>   - v2v-1.45.3-3-esx7.0-win2019-x86_64           -> GUEST_NAME
>   - 2021/11/30 14:34:42                          -> CREATION_DATE
> 
> - 1.45.91-1:
>   - ce8f5ddb-71e8-44a3-bd4c-37bed44bf767          -> DISK_DIR
>   - e80a333c-cf4b-40f9-bc70-7a63778f26b1          -> DISK_FILE
>   - helper-v2v-output 1.45.91rhel=9,release=1.el9 -> TOOL
>   - 95e6c366-f24e-49e7-a19a-90cd6c29a735          -> SNAPSHOT_ID
>   - 3ce2db8d-3d2a-4817-beea-1e856bfeafc3          -> OS_SECTION_ID
>   - esx6.7-rhel8.4-x86_64                         -> GUEST_NAME
>   - 2021/11/30 05:56:11                          -> CREATION_DATE
> 
> The following is the list of differences:
> 
> (2.3.1) ovf:Envelope/References/File
> 
> Version 1.45.3-3 generated an attribute
> 
>   ovf:size='10703609856'
> 
> but version 1.45.91-1 did not.
> 
> (2.3.2) ovf:Envelope/Section[@xsi:type='ovf:DiskSection_Type']/Disk
> 
> Different disks were converted by 1.45.3-3 and 1.45.91-1, apparent from
> these attribute differences:
> 
> -     ovf:size='20'
> -     ovf:capacity='21474836480'
> +     ovf:size='12'
> +     ovf:capacity='12884901888'
> 
> Still in this element, *only* version 1.45.3-3 generated the following
> attribute:
> 
>   ovf:actual_size='10'

I think I remember dropping this when doing the conversion to modules, on
the basis that it's hard for the new virt-v2v to get this (it's no longer
generated as a side-effect of other stuff we do), and anyway, why would
oVirt need it?  Maybe it does need it ...

> (2.3.3)
> ovf:Envelope/Content/Section[@xsi:type='ovf:OperatingSystemSection_Type']
> 
> It's now obvious that the two virt-v2v versions were used to convert
> distinct guests:
> 
> -      <Info>Windows Server 2019 Standard</Info>
> -      <Description>windows_2016x64</Description>
> +      <Info>Red Hat Enterprise Linux 8.4 (Ootpa)</Info>
> +      <Description>rhel_8x64</Description>
> 
> (2.3.4)
> ovf:Envelope/Content/Section[@xsi:type='ovf:VirtualHardwareSection_Type']/
> Item[1]
> 
> *Only* version 1.45.3-3 generated the following element:
> 
>   <rasd:threads_per_cpu>1</rasd:threads_per_cpu>
> 
> (2.3.5)
> ovf:Envelope/Content/Section[@xsi:type='ovf:VirtualHardwareSection_Type']/
> Item[4]
> 
> Different video devices:
> 
> -        <rasd:Device>qxl</rasd:Device>
> +        <Device>vga</Device>
> 
> This difference comes from
> <https://bugzilla.redhat.com/show_bug.cgi?id=1961107#c31>.
> 
> (2.3.6)
> ovf:Envelope/Content/Section[@xsi:type='ovf:VirtualHardwareSection_Type']/
> Item[8]
> 
> Only version 1.45.91-1 generated the following element:
> 
>   <rasd:MACAddress>00:50:56:ac:d0:17</rasd:MACAddress>
> 
> (I've ignored all UUID differences in the <rasd:InstanceId> elements,
> within the various <Item> elements.)
> 
> 
> I don't know if these differences justify RHV to disregard the
> importable guest. "engine.log" certainly doesn't indicate anything like
> that.
> 
> What would perhaps help (with further comparison) is a recursive
> directory listing of the Export Storage Domain directory
> ("10.73.194.236:/home/nfs_export"), after both conversions complete.

I think a more likely explanation is we're creating the wrong directory
structure.  Anyway, will need investigation later.

Comment 20 Laszlo Ersek 2021-12-02 14:24:24 UTC
Taking this, after discussion with Rich.

Ming Xie, could you please run the following *set* of commands, on the Export Storage Domain directory, in both the successful and the failing case?

  find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
  find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --

We should then diff those listings against each other. Thanks!

Comment 21 Laszlo Ersek 2021-12-02 14:25:24 UTC
Ming Xie,

can you please also make sure that the loglevel of ovirt-engine is set to DEBUG? I think we should be seeing more messages in engine.log, but the log level could be too strict.

https://www.ovirt.org/develop/developer-guide/engine/engine-development-environment.html

Thanks
Laszlo

Comment 22 Laszlo Ersek 2021-12-02 14:28:27 UTC
Found another resource related to the log level: https://access.redhat.com/solutions/435333

Comment 23 Laszlo Ersek 2021-12-02 14:36:43 UTC
In the ovirt-engine repository, I've found two relevant locations:

getEntitiesFromStorageOvfDisk() [backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/StorageHandlingCommandBase.java]
        // Initialize a new ArrayList with all the ovfDisks in the specified Storage Domain,
        // so the entities can be removed from the list every time we register the latest OVF disk and we can keep the
        // ovfDisks cache list updated.

getOvfEntities() [backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/OvfUtils.java]

These functions emit many log messages, yet we see none of them in "engine.log". Something is wrong with "engine.log": maybe it was captured at the wrong time, or the log level setting is incorrect.
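
A grep along these lines should show immediately whether those code paths logged anything at all (a sketch; /var/log/ovirt-engine/engine.log is the usual location on the RHV-M host, adjust if yours differs):

# grep -E 'OvfUtils|StorageHandlingCommandBase|getOvfEntities' /var/log/ovirt-engine/engine.log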

Can I please get access to the RHEV-M server that is supposed to import the converted domain from the ESD? Thanks.

Comment 24 mxie@redhat.com 2021-12-02 16:52:23 UTC
(In reply to Laszlo Ersek from comment #20)
> Taking this, after discussion with Rich.
> 
> Ming Xie, could you please run the following *set* of commands, on the
> Export Storage Domain directory, in both the successful and the failing case?
> 
>   find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
>   find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --
> 
> We should then diff those listings against each other. Thanks!

For failed case:
# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
drwxr-xr-x. 2 36 36   54 Nov 30 13:56 .
-rw-r--r--. 1 36 36 5716 Nov 30 13:56 ./3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf

# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --
drwxr-xr-x. 2 36 36 system_u:object_r:user_home_t:s0   54 Nov 30 13:56 .
-rw-r--r--. 1 36 36 system_u:object_r:user_home_t:s0 5716 Nov 30 13:56 ./3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf

# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
drwxr-xr-x. 2 36 36          99 Nov 30 13:49 .
-rw-rw-rw-. 1 36 36 12884901888 Nov 30 13:56 ./e80a333c-cf4b-40f9-bc70-7a63778f26b1
-rw-r--r--. 1 36 36         326 Nov 30 13:49 ./e80a333c-cf4b-40f9-bc70-7a63778f26b1.meta

# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --
drwxr-xr-x. 2 36 36 system_u:object_r:user_home_t:s0          99 Nov 30 13:49 .
-rw-rw-rw-. 1 36 36 system_u:object_r:user_home_t:s0 12884901888 Nov 30 13:56 ./e80a333c-cf4b-40f9-bc70-7a63778f26b1
-rw-r--r--. 1 36 36 system_u:object_r:user_home_t:s0         326 Nov 30 13:49 ./e80a333c-cf4b-40f9-bc70-7a63778f26b1.meta



For passed case:

# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
drwxr-xr-x. 2 36 36   54 Nov 30 23:05 .
-rw-r--r--. 1 36 36 5745 Nov 30 23:05 ./8dbe8c4f-ab58-4dae-b905-51104769107c.ovf


# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --
drwxr-xr-x. 2 36 36 system_u:object_r:user_home_t:s0   54 Nov 30 23:05 .
-rw-r--r--. 1 36 36 system_u:object_r:user_home_t:s0 5745 Nov 30 23:05 ./8dbe8c4f-ab58-4dae-b905-51104769107c.ovf

# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
drwxr-xr-x. 2 36 36          99 Nov 30 23:01 .
-rw-rw-rw-. 1 36 36 21474836480 Nov 30 23:05 ./cece3021-c891-4afd-97bc-40784ddc1aeb
-rw-r--r--. 1 36 36         316 Nov 30 23:01 ./cece3021-c891-4afd-97bc-40784ddc1aeb.meta

# find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --
drwxr-xr-x. 2 36 36 system_u:object_r:user_home_t:s0          99 Nov 30 23:01 .
-rw-rw-rw-. 1 36 36 system_u:object_r:user_home_t:s0 21474836480 Nov 30 23:05 ./cece3021-c891-4afd-97bc-40784ddc1aeb
-rw-r--r--. 1 36 36 system_u:object_r:user_home_t:s0         316 Nov 30 23:01 ./cece3021-c891-4afd-97bc-40784ddc1aeb.meta



By the way, virt-v2v-1.45.3-3 will generate "<rasd:threads_per_cpu>1</rasd:threads_per_cpu>" in the OVF file but virt-v2v-1.45.91-1 will not

Comment 25 mxie@redhat.com 2021-12-02 16:53:56 UTC
> Can I please get access to the RHEV-M server that is supposed to import the
> converted domain from the ESD? Thanks.

Hi Laszlo, I will send the info of the RHV environment by mail.

Comment 26 Laszlo Ersek 2021-12-03 13:10:09 UTC
(In reply to mxie from comment #24)
> (In reply to Laszlo Ersek from comment #20)
> > Taking this, after discussion with Rich.
> > 
> > Ming Xie, could you please run the following *set* of commands, on the
> > Export Storage Domain directory, in both the successful and the failing case?
> > 
> >   find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U --
> >   find $ESD -print0 | sort -z | xargs -0 -r -- ls -l -n -d -U -Z --
> > 
> > We should then diff those listings against each other. Thanks!
> 
> For failed case:
> [...]
> 
> For passed case:
> [...]

Thank you.

The outputs are structurally identical. I replaced the GUIDs again with the strings "OS_SECTION_ID" and "DISK_FILE", and then used "git diff --word-diff" to compare both outputs. The files differ only in modtimes and sizes; again, structurally the outputs are identical.
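
For reference, the comparison can be reproduced along these lines (a sketch only; "failing.listing" and "passing.listing" are hypothetical files holding the two listings from comment 24, and the GUIDs are the ones seen there):

# sed -e 's/3ce2db8d-3d2a-4817-beea-1e856bfeafc3/OS_SECTION_ID/g' -e 's/e80a333c-cf4b-40f9-bc70-7a63778f26b1/DISK_FILE/g' failing.listing > failing.norm
# sed -e 's/8dbe8c4f-ab58-4dae-b905-51104769107c/OS_SECTION_ID/g' -e 's/cece3021-c891-4afd-97bc-40784ddc1aeb/DISK_FILE/g' passing.listing > passing.norm
# git diff --no-index --word-diff failing.norm passing.norm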

Therefore, the regression must be in the files' *contents*, i.e. in "OS_SECTION_ID.ovf" and/or "DISK_FILE.meta".

> By the way, virt-v2v-1.45.3-3 will generate "<rasd:threads_per_cpu>1</rasd:threads_per_cpu>" in the OVF file but virt-v2v-1.45.91-1 will not

Yes, please see bullet (2.3.4) in comment 16.

Thanks!

Comment 27 Laszlo Ersek 2021-12-03 15:00:11 UTC
After kludging the log setup of ovirt-engine into submission, here's the
problem (rewrapped manually for readability):

2021-12-03 22:18:42,561+08 ERROR [org.ovirt.engine.core.utils.ovf.OvfManager]
  (default task-1) [0f1b2344-47e0-4f2d-9651-2e920510973a]
  Error parsing OVF due to OVF error:
  [Empty Name]: cannot read '//*/Section/Disk' with value: null
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

(The rest of the relevant log entries follows below, but first the
analysis:)

As I pointed out in comment 16 bullet (2.3.2), modular virt-v2v does not
generate the "ovf:actual_size" attribute for the <Disk> element.

The error message itself seems to come from somewhere in
"backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfVmReader.java";
it is formatted in the importVm() method in
"backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/utils/ovf/OvfManager.java".

(analysis ends, rest of relevant log follows)

2021-12-03 22:18:42,561+08 DEBUG [org.ovirt.engine.core.utils.ovf.OvfManager]
  (default task-1) [0f1b2344-47e0-4f2d-9651-2e920510973a]
  Error parsing OVF

<?xml version='1.0' encoding='utf-8'?>
<ovf:Envelope
 xmlns:rasd='http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData'
 xmlns:vssd='http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData'
 xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
 xmlns:ovf='http://schemas.dmtf.org/ovf/envelope/1/'
 xmlns:ovirt='http://www.ovirt.org/ovf' ovf:version='0.9'
 >
  <!-- generated by helper-v2v-output 1.45.91rhel=9,release=1.el9 -->
  <References>
    <File
     ovf:href='ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1'
     ovf:id='e80a333c-cf4b-40f9-bc70-7a63778f26b1'
     ovf:description='generated by helper-v2v-output 1.45.91rhel=9,release=1.el9'
     />
  </References>
  <Section xsi:type='ovf:NetworkSection_Type'>
    <Info>List of networks</Info>
    <Network ovf:name='ovirtmgmt'/>
  </Section>
  <Section xsi:type='ovf:DiskSection_Type'>
    <Info>List of Virtual Disks</Info>
    <Disk ovf:diskId='e80a333c-cf4b-40f9-bc70-7a63778f26b1'
     ovf:size='12' ovf:capacity='12884901888'
     ovf:fileRef='ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1'
     ovf:parentRef=''
     ovf:vm_snapshot_id='95e6c366-f24e-49e7-a19a-90cd6c29a735'
     ovf:volume-format='RAW' ovf:volume-type='Sparse'
     ovf:format='http://en.wikipedia.org/wiki/Byte'
     ovf:disk-interface='VirtIO' ovf:disk-type='System'
     ovf:boot='True'
     />
  </Section>
  <Content ovf:id='out' xsi:type='ovf:VirtualSystem_Type'>
    <Name>esx6.7-rhel8.4-x86_64</Name>
    <TemplateId>00000000-0000-0000-0000-000000000000</TemplateId>
    <TemplateName>Blank</TemplateName>
    <Description>generated by helper-v2v-output 1.45.91rhel=9,release=1.el9</Description>
    <Domain/>
    <CreationDate>2021/11/30 05:56:11</CreationDate>
    <IsInitilized>True</IsInitilized>
    <IsAutoSuspend>False</IsAutoSuspend>
    <TimeZone/>
    <IsStateless>False</IsStateless>
    <VmType>1</VmType>
    <DefaultDisplayType>1</DefaultDisplayType>
    <BiosType>1</BiosType>
    <Origin>1</Origin>
    <Section ovf:id='3ce2db8d-3d2a-4817-beea-1e856bfeafc3'
     ovf:required='false' xsi:type='ovf:OperatingSystemSection_Type'>
      <Info>Red Hat Enterprise Linux 8.4 (Ootpa)</Info>
      <Description>rhel_8x64</Description>
    </Section>
    <Section xsi:type='ovf:VirtualHardwareSection_Type'>
      <Info>1 CPU, 2048 Memory</Info>
      <Item>
        <rasd:Caption>1 virtual cpu</rasd:Caption>
        <rasd:Description>Number of virtual CPU</rasd:Description>
        <rasd:InstanceId>1</rasd:InstanceId>
        <rasd:ResourceType>3</rasd:ResourceType>
        <rasd:num_of_sockets>1</rasd:num_of_sockets>
        <rasd:cpu_per_socket>1</rasd:cpu_per_socket>
      </Item>
      <Item>
        <rasd:Caption>2048 MB of memory</rasd:Caption>
        <rasd:Description>Memory Size</rasd:Description>
        <rasd:InstanceId>2</rasd:InstanceId>
        <rasd:ResourceType>4</rasd:ResourceType>
        <rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits>
        <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
      </Item>
      <Item>
        <rasd:Caption>USB Controller</rasd:Caption>
        <rasd:InstanceId>3</rasd:InstanceId>
        <rasd:ResourceType>23</rasd:ResourceType>
        <rasd:UsbPolicy>Disabled</rasd:UsbPolicy>
      </Item>
      <Item>
        <rasd:Caption>Graphical Controller</rasd:Caption>
        <rasd:InstanceId>69b0e10c-7027-411f-a73f-f8085193cd13</rasd:InstanceId>
        <rasd:ResourceType>20</rasd:ResourceType>
        <Type>video</Type>
        <rasd:VirtualQuantity>1</rasd:VirtualQuantity>
        <Device>vga</Device>
      </Item>
      <Item>
        <rasd:Caption>RNG Device</rasd:Caption>
        <rasd:InstanceId>a4b58d91-c748-4c3d-bddc-7e0a81e89e8a</rasd:InstanceId>
        <rasd:ResourceType>0</rasd:ResourceType>
        <Type>rng</Type>
        <Device>virtio</Device>
        <SpecParams>
          <source>urandom</source>
        </SpecParams>
      </Item>
      <Item>
        <rasd:Caption>Memory Ballooning Device</rasd:Caption>
        <rasd:InstanceId>8d976781-a525-4a2b-a346-2997d71d7f98</rasd:InstanceId>
        <rasd:ResourceType>0</rasd:ResourceType>
        <Type>balloon</Type>
        <Device>memballoon</Device>
        <SpecParams>
          <model>virtio</model>
        </SpecParams>
      </Item>
      <Item>
        <rasd:Caption>Drive 1</rasd:Caption>
        <rasd:InstanceId>e80a333c-cf4b-40f9-bc70-7a63778f26b1</rasd:InstanceId>
        <rasd:ResourceType>17</rasd:ResourceType>
        <Type>disk</Type>
        <rasd:HostResource>ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1</rasd:HostResource>
        <rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent>
        <rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template>
        <rasd:ApplicationList/>
        <rasd:StorageId>e2409bea-6f48-45e5-804e-6fa3d0660631</rasd:StorageId>
        <rasd:StoragePoolId>00000000-0000-0000-0000-000000000000</rasd:StoragePoolId>
        <rasd:CreationDate>2021/11/30 05:56:11</rasd:CreationDate>
        <rasd:LastModified>2021/11/30 05:56:11</rasd:LastModified>
        <rasd:last_modified_date>2021/11/30 05:56:11</rasd:last_modified_date>
        <BootOrder>1</BootOrder>
      </Item>
      <Item>
        <rasd:InstanceId>db1374a6-c35d-4135-90ce-3330fd6e7ff7</rasd:InstanceId>
        <rasd:Caption>Ethernet adapter on ovirtmgmt</rasd:Caption>
        <rasd:ResourceType>10</rasd:ResourceType>
        <rasd:ResourceSubType>3</rasd:ResourceSubType>
        <Type>interface</Type>
        <rasd:Connection>ovirtmgmt</rasd:Connection>
        <rasd:Name>eth0</rasd:Name>
        <rasd:MACAddress>00:50:56:ac:d0:17</rasd:MACAddress>
      </Item>
    </Section>
  </Content>
</ovf:Envelope>

Comment 28 Laszlo Ersek 2021-12-04 11:23:07 UTC
(In reply to Laszlo Ersek from comment #27)
> 2021-12-03 22:18:42,561+08 ERROR [org.ovirt.engine.core.utils.ovf.OvfManager]
>   (default task-1) [0f1b2344-47e0-4f2d-9651-2e920510973a]
>   Error parsing OVF due to OVF error:
>   [Empty Name]: cannot read '//*/Section/Disk' with value: null
>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> As I pointed out in comment 16 bullet (2.3.2), modular virt-v2v does not
> generate the "ovf:actual_size" attribute for the <Disk> element.
> 
> The error message itself seems to come from somewhere in
> "backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfVmReader.java";
> it is formatted in the importVm() method in
> "backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/utils/ovf/OvfManager.java".

Thankfully, the XPath expression in the error message is greppable. It comes from the buildDisk() method in "OvfOvirtReader.java". This method then calls readDisk() -- after locating the disk image referenced by the @ovf:diskId attribute in the "image stream" -- and in readDisk(), we have:

        if (!StringUtils.isEmpty(node.attributes.get("ovf:actual_size").getValue())) {
            image.setActualSizeInBytes(
                    convertGigabyteToBytes(Long.parseLong(node.attributes.get("ovf:actual_size").getValue())));
        }

Note the call

  node.attributes.get("ovf:actual_size").getValue()

Other call sites of node.attributes.get() in the same file indicate that node.attributes.get() is capable of returning "null". For example, in buildFileReference(), we have:

            if (node.attributes.get("ovf:disk_storage_type") != null) {
                String diskStorageType = node.attributes.get("ovf:disk_storage_type").getValue();

and

                        if (node.attributes.get("ovf:cinder_volume_type") != null) {
                            String cinderVolumeType = node.attributes.get("ovf:cinder_volume_type").getValue();

implying that getValue() should only be called once we know that the attribute exists (IOW, get() does not return "null").

This nullity check is not performed on node.attributes.get("ovf:actual_size"); ovirt-engine assumes that the attribute always exists, only maybe with an empty string value.

So this is why we get (basically) a null pointer dereference, due to the absence of "ovf:actual_size" in the OVF. Effectively, "ovf:actual_size" is a mandatory attribute.

FWIW, ovirt-engine seems to be prepared to deal with an *empty* (but still existent) "ovf:actual_size" attribute, so perhaps we can use that as a workaround?
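
As a quick check on the ESD, a grep along these lines (using the OVF path from the description) prints nothing for an affected file, because the attribute is missing entirely rather than merely empty:

# grep -o "ovf:actual_size='[^']*'" 3ce2db8d-3d2a-4817-beea-1e856bfeafc3/3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf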

Comment 29 Richard W.M. Jones 2021-12-04 11:57:09 UTC
Nice bit of detective work there.  Can we fix this properly as discussed
in email? (except maybe we can try a hack like ovf:actual_size=1 just to verify
the diagnosis)

Comment 30 Laszlo Ersek 2021-12-06 10:59:00 UTC
(In reply to Richard W.M. Jones from comment #29)
> Nice bit of detective work there.  Can we fix this properly as discussed
> in email?

The OCaml code you sent me (thanks!) is not trivial :). I'd like to understand what ovirt-engine needs the "actual size" for in the first place. From the ovirt-engine code, the intent seems to be to tolerate the absence of this *information*, except the implementation is incomplete (it only accepts an empty attribute, not a missing one, for "information absent").

> (except maybe we can try a hack like ovf:actual_size=1 just to verify the diagnosis)

Yes, that's what I'd like to do: I plan to hack the OVF file(s) in the ESD directly, and retry an import on the WebUI. Perhaps I'll see further error messages that way.
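
A minimal sketch of that hack, assuming the <Disk> element sits on a single line in the stored OVF and the engine is managed by systemd, using the paths from this bug: inject an empty attribute, then restart ovirt-engine so it rescans the ESD.

# sed -i "s/ovf:boot='True'/ovf:boot='True' ovf:actual_size=''/" 3ce2db8d-3d2a-4817-beea-1e856bfeafc3/3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf
# systemctl restart ovirt-engine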

Comment 31 Laszlo Ersek 2021-12-06 13:19:02 UTC
Confirmed. After adding the following attribute:

<Disk
 ovf:diskId='e80a333c-cf4b-40f9-bc70-7a63778f26b1'
 ovf:size='12'
 ovf:capacity='12884901888'
 ovf:fileRef='ce8f5ddb-71e8-44a3-bd4c-37bed44bf767/e80a333c-cf4b-40f9-bc70-7a63778f26b1'
 ovf:parentRef=''
 ovf:vm_snapshot_id='95e6c366-f24e-49e7-a19a-90cd6c29a735'
 ovf:volume-format='RAW'
 ovf:volume-type='Sparse'
 ovf:format='http://en.wikipedia.org/wiki/Byte'
 ovf:disk-interface='VirtIO'
 ovf:disk-type='System'
 ovf:boot='True'
 ovf:actual_size=''      <--------- this one
 />

to "3ce2db8d-3d2a-4817-beea-1e856bfeafc3.ovf", and restarting
ovirt-engine, the domain ("esx6.7-rhel8.4-x86_64") appeared in the
"Storage | Domains | nfs_export | VM Import" view, in the WebUI.

Comment 32 Laszlo Ersek 2021-12-06 13:35:40 UTC
Of interest: the (missing) nullity check I referenced in comment 28 had actually existed until commit 1082d9dec289 ("core: undo recent generalization of ovf processing", 2017-08-09); that commit regressed it.

Furthermore, the parsing of "ovf:actual_size" goes back to almost-initial commit f669bc8004f0 ("Introducing oVirt Engine", 2011-10-27) in the ovirt-engine repository.

The OVF spec (DSP0243, v2.1.1) does not define this attribute. Given that the "ovf" namespace prefix is used to refer to the namespace 'http://schemas.dmtf.org/ovf/envelope/1/', it's really strange that ovirt-engine has (apparently) invented a new attribute in the standard namespace.

Comment 33 Richard W.M. Jones 2021-12-06 13:39:40 UTC
(In reply to Laszlo Ersek from comment #32)
> The OVF spec (DSP0243, v2.1.1) does not define this attribute. Given that
> the "ovf" namespace prefix is used to refer to the namespace
> 'http://schemas.dmtf.org/ovf/envelope/1/', it's really strange that
> ovirt-engine has (apparently) invented a new attribute in the standard
> namespace.

OVF is best thought of as a plot by VMware to pretend that their
software conforms to standards, rather than a standard in the normal sense.
In particular OVF/OVAs are not interoperable between hypervisors and
most hypervisors spin off in different directions from the standard
whenever convenient.  oVirt even has two different OVF flavours.

Comment 34 Laszlo Ersek 2021-12-08 13:27:46 UTC
[virt-v2v PATCH 0/3] lib/create_ovf: populate "actual size" attributes again
Message-Id: <20211208122050.7067-1-lersek>
https://listman.redhat.com/archives/libguestfs/2021-December/msg00096.html

Comment 35 Laszlo Ersek 2021-12-10 11:36:28 UTC
[virt-v2v PATCH v2 0/3] lib/create_ovf: populate "actual size" attributes again
Message-Id: <20211210113537.10907-1-lersek>
https://listman.redhat.com/archives/libguestfs/2021-December/msg00127.html

Comment 36 Laszlo Ersek 2021-12-10 13:30:29 UTC
(In reply to Laszlo Ersek from comment #35)
> [virt-v2v PATCH v2 0/3] lib/create_ovf: populate "actual size" attributes again
> Message-Id: <20211210113537.10907-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2021-December/msg00127.html

Merged upstream as commit range 9bb0e7f1d229..a2a4f7a09996.

Comment 40 Vera 2022-01-07 09:22:12 UTC
Pre-Verified with the pkg version:
libvirt-libs-7.10.0-1.el9.x86_64
guestfs-tools-1.46.1-6.el9.x86_64
qemu-img-6.2.0-1.el9.x86_64
libguestfs-1.46.1-2.el9.x86_64
virt-v2v-1.45.95-3.el9.x86_64
nbdkit-1.28.4-1.el9.x86_64

rhv-4.4.8.3-0.10.el8ev

Steps:
1. Convert a guest from VMware to RHV 4.4 via -o rhv with virt-v2v:

# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/root/vddk_libdir/latest -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -ip /v2v-ops/esxpw   -o rhv -os 10.73.224.195:/home/nfs_export -b ovirtmgmt  esx6.7-rhel8.4-x86_64
[   1.8] Opening the source
[   6.5] Inspecting the source
[  12.5] Checking for sufficient free disk space in the guest
[  12.5] Converting Red Hat Enterprise Linux 8.4 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  48.1] Mapping filesystem data to avoid copying unused and blank areas
[  49.4] Closing the overlay
[  49.6] Assigning disks to buses
[  49.6] Checking if the guest needs BIOS or UEFI to boot
[  51.1] Copying disk 1/1
█ 100% [****************************************]
[ 317.4] Creating output metadata


The process hangs at the end "Creating output metadata".


Modifying the status to ASSIGNED.

Comment 41 Laszlo Ersek 2022-01-07 10:50:24 UTC
Hello Vera,

My expectation is that the code is actually correct, and nothing hangs indefinitely. Instead, what I expect happens is that the block usage calculation takes very long.

The fix for this RHBZ rests on the "NBD.block_status" API. The OCaml code calls "NBD.block_status" in a chunked manner, once per about 2GiB disk size. If you have a 10-20 GiB guest (which I assume from the guest name in comment 40 containing "rhel8.4"), then that's 5-10 calls.

In turn, if the source is VDDK, then the "NBD.block_status" API boils down to the QueryAllocatedBlocks VDDK API. That API is known to be catastrophically slow.

So here are my requests:

(1) Can you please re-run the same test with "-v -x" passed to virt-v2v, and attach the output to the BZ?

(
I remember that nbdkit *itself* can collect statistics, but I don't know if we can thread the necessary nbdkit parameters through the virt-v2v command line!

... Ah wait, in "input/nbdkit_vddk.ml", I see:

  (* Enable VDDK stats. *)
  Nbdkit.add_debug_flag cmd "vddk.stats" "1";

so, with "-v -x" passed to virt-v2v, nbdkit should automatically log the stats for us.
)


(2) Can you please run a similar test (using a similar guest disk image size), but import the guest from libvirt, not VMWare? This would test the same code (for this RHBZ), but against a different nbdkit back-end. It would remove the QueryAllocatedBlocks API from the equation.

Thanks!

Comment 42 Richard W.M. Jones 2022-01-07 11:11:15 UTC
(In reply to Laszlo Ersek from comment #41)
> ... Ah wait, in "input/nbdkit_vddk.ml", I see:
> 
>   (* Enable VDDK stats. *)
>   Nbdkit.add_debug_flag cmd "vddk.stats" "1";
> 
> so, with "-v -x" passed to virt-v2v, nbdkit should automatically log the
> stats for us.
> )

Yes, you should get per-API VDDK stats whenever available, if you have a
new enough virt-v2v.  Note this feature requires nbdkit >= 1.26.5-1.el9.
Older versions of nbdkit will ignore the flag.  With the stats it should
be very obvious if QueryAllocatedBlocks is to blame or not.

Comment 46 Vera 2022-01-11 12:17:02 UTC
Hi, Laszlo

Please check attachment 1850072 [details] for details of the same test with "-v -x" passed to virt-v2v. The process still hangs after the conversion.

Thanks.

Comment 47 Richard W.M. Jones 2022-01-11 12:29:44 UTC
Interesting, it hangs creating the output metadata, just after
connecting to the output pipeline presumably in order to get
extent information, ie. this code:

https://github.com/libguestfs/virt-v2v/blob/2e8a991c1916f40f60fce69ec3e6b63c5558d93a/lib/utils.ml#L180

but before it sends any request to nbdkit.

I have a suggestion here: If the virt-v2v -v (verbose) flag is set
then we should enable verbose debugging in libnbd too.  I will
send a couple of patches.
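
In the meantime, libnbd's own debug output can already be forced on for a single run via its environment variable; a sketch (command abbreviated, full options as in comment 40):

# LIBNBD_DEBUG=1 virt-v2v -ic vpx://... -o rhv -os 10.73.224.195:/home/nfs_export -b ovirtmgmt esx6.7-rhel8.4-x86_64 -v -x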

Comment 49 Vera 2022-01-11 13:12:19 UTC
I also tried converting the guest from a source other than VMware.

It also hangs.

# virt-v2v -ic qemu:///system -o rhv -os 10.73.224.195:/home/nfs_export -b ovirtmgmt -of raw --mac 52:54:00:5d:3c:2f:network:default esx6.7-rhel8.3-x86_64 -v -x|& tee > /2027598-fromlibvirt.log
█ 100% [****************************************]


Please check attachment 1850077 [details] for details.

Thanks.

Comment 50 Richard W.M. Jones 2022-01-11 13:51:32 UTC
Some patches which enhance the debugging:
https://listman.redhat.com/archives/libguestfs/2022-January/msg00073.html

> I also tried converting the guest from a source other than VMware.

Yes, this bug looks like it is only related to the output side (-o rhv).

Comment 51 Laszlo Ersek 2022-01-12 12:36:14 UTC
(In reply to Richard W.M. Jones from comment #47)
> Interesting, it hangs creating the output metadata, just after
> connecting to the output pipeline presumably in order to get
> extent information, ie. this code:
> 
> https://github.com/libguestfs/virt-v2v/blob/
> 2e8a991c1916f40f60fce69ec3e6b63c5558d93a/lib/utils.ml#L180
> 
> but before it sends any request to nbdkit.
> 
> I have a suggestion here: If the virt-v2v -v (verbose) flag is set
> then we should enable verbose debugging in libnbd too.  I will
> send a couple of patches.

https://listman.redhat.com/archives/libguestfs/2022-January/msg00073.html

Comment 53 Laszlo Ersek 2022-01-12 13:11:38 UTC
(In reply to Richard W.M. Jones from comment #47)
> Interesting, it hangs creating the output metadata, just after
> connecting to the output pipeline presumably in order to get
> extent information, ie. this code:
> 
> https://github.com/libguestfs/virt-v2v/blob/
> 2e8a991c1916f40f60fce69ec3e6b63c5558d93a/lib/utils.ml#L180
> 
> but before it sends any request to nbdkit.

In both comments 46 and 49, the virt-v2v log shows the nbdkit server command line, for the output disk, as follows:

LANG=C 'nbdkit' '--exit-with-parent' '--foreground' '--pidfile' '/tmp/v2vnbdkit.Nyb65K/nbdkit2.pid' '--unix' '/tmp/v2v.kqkM81/out0' '--threads' '16' '--selinux-label' 'system_u:object_r:svirt_socket_t:s0' '-D' 'nbdkit.backend.datapath=0' '--verbose' 'file' 'file=/tmp/v2v.OEnPAm/c33f36c4-e98b-46af-bf26-4c152bd6a3f1/images/b9e04aa1-3fad-48ba-a731-25131b10d1da/c049a830-8811-41c6-bee1-86d01b87999c' 'cache=none'

LANG=C 'nbdkit' '--exit-with-parent' '--foreground' '--pidfile' '/tmp/v2vnbdkit.gb7rkH/nbdkit2.pid' '--unix' '/tmp/v2v.Qdge9a/out0' '--threads' '16' '--selinux-label' 'system_u:object_r:svirt_socket_t:s0' '-D' 'nbdkit.backend.datapath=0' '--verbose' 'file' 'file=/tmp/v2v.8nEYT8/c33f36c4-e98b-46af-bf26-4c152bd6a3f1/images/e4e46518-1aa9-463c-9b24-edf8ef259d73/acfad9fb-5c63-4f21-b420-088d215039b8' 'cache=none'

Where do the server-side log messages (of nbdkit) go?

Comment 54 Laszlo Ersek 2022-01-12 14:35:14 UTC
(In reply to Laszlo Ersek from comment #51)
> (In reply to Richard W.M. Jones from comment #47)
> > Interesting, it hangs creating the output metadata, just after
> > connecting to the output pipeline presumably in order to get
> > extent information, ie. this code:
> > 
> > https://github.com/libguestfs/virt-v2v/blob/2e8a991c1916f40f60fce69ec3e6b63c5558d93a/lib/utils.ml#L180
> > 
> > but before it sends any request to nbdkit.
> > 
> > I have a suggestion here: If the virt-v2v -v (verbose) flag is set
> > then we should enable verbose debugging in libnbd too.  I will
> > send a couple of patches.
> 
> https://listman.redhat.com/archives/libguestfs/2022-January/msg00073.html

Merged upstream as commit range 2e8a991c1916..4578887821d8.

Comment 57 Laszlo Ersek 2022-01-13 07:34:18 UTC
(In reply to Laszlo Ersek from comment #41)

> The fix for this RHBZ rests on the "NBD.block_status" API. The OCaml
> code calls "NBD.block_status" in a chunked manner, once per about 2GiB
> disk size. If you have a 10-20 GiB guest (which I assume from the
> guest name in comment 40 containing "rhel8.4"), then that's 5-10
> calls.
>
> In turn, if the source is VDDK, then the "NBD.block_status" API boils
> down to the QueryAllocatedBlocks VDDK API. That API is known to be
> catastrophically slow.

(In reply to Richard W.M. Jones from comment #50)
> Yes, this bug looks like it is only related to the output side (-o
> rhv).

Let me apologize for this: I managed to totally confuse myself. The
"Utils.get_disk_allocated" function, from commit 27c056cdc6aa, calls
NBD.block_status on an *output* disk. Whether the input is VDDK or not
is 100% inconsequential. So QueryAllocatedBlocks is a red herring here;
it was wrong of me to mention it.

Comment 58 Laszlo Ersek 2022-01-13 09:01:33 UTC
So... nbdkit is operating on an NFS-exported 12GB file in this case. A glitch (or slowness) with NFS would be consistent with the symptom.

Comment 61 Laszlo Ersek 2022-01-13 15:23:53 UTC
(In reply to Vera from comment #40)
> Pre-Verified with the pkg version:
> libvirt-libs-7.10.0-1.el9.x86_64
> guestfs-tools-1.46.1-6.el9.x86_64
> qemu-img-6.2.0-1.el9.x86_64
> libguestfs-1.46.1-2.el9.x86_64
> virt-v2v-1.45.95-3.el9.x86_64
> nbdkit-1.28.4-1.el9.x86_64
> 
> rhv-4.4.8.3-0.10.el8ev
> 
> Steps:
> 1. Convert a guest from VMware to RHV 4.4 via -o rhv with virt-v2v:
> 
> # virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it
> vddk -io vddk-libdir=/root/vddk_libdir/latest -io
> vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA 
> -ip /v2v-ops/esxpw   -o rhv -os 10.73.224.195:/home/nfs_export -b ovirtmgmt 
> esx6.7-rhel8.4-x86_64
> [   1.8] Opening the source
> [   6.5] Inspecting the source
> [  12.5] Checking for sufficient free disk space in the guest
> [  12.5] Converting Red Hat Enterprise Linux 8.4 (Ootpa) to run on KVM
> virt-v2v: This guest has virtio drivers installed.
> [  48.1] Mapping filesystem data to avoid copying unused and blank areas
> [  49.4] Closing the overlay
> [  49.6] Assigning disks to buses
> [  49.6] Checking if the guest needs BIOS or UEFI to boot
> [  51.1] Copying disk 1/1
> █ 100% [****************************************]
> [ 317.4] Creating output metadata
> 
> 
> The process hangs at the end "Creating output metadata".

The process is not hung. I've not yet built a virt-v2v binary with
Rich's debug patches, but I can already describe some behavior.

I've attached gdb to the virt-v2v process in this apparently hung state,
and what I see happening is the chunked block status fetching, from
commit 27c056cdc6aa, in get_disk_allocated, not advancing. I have not
actually tracked the OCaml-level function "get_disk_allocated", but the
underlying libnbd API:

nbd_block_status is repeatedly entered with count=2147483648 (2GiB --
the chunk size of "get_disk_allocated") and offset=701497344
(0x29D00000). There is no progress. The NBD server returns 2186 uint32_t
entries in the "base:allocation" metacontext, and those look like this
(please excuse the huge dump below):

0x56384ecb8d94: 0x00004006      0x03000000      0x00600000      0x00000000
0x56384ecb8da4: 0x00400000      0x03000000      0x00100000      0x00000000
0x56384ecb8db4: 0x00400000      0x03000000      0x00c00000      0x00000000
0x56384ecb8dc4: 0x0050fe0f      0x03000000      0x00300000      0x00000000
0x56384ecb8dd4: 0x00d00f00      0x03000000      0x00100000      0x00000000
0x56384ecb8de4: 0x00f0ff4c      0x03000000      0x00600000      0x00000000
0x56384ecb8df4: 0x00400000      0x03000000      0x00203e01      0x00000000
0x56384ecb8e04: 0x00300000      0x03000000      0x00b0cf04      0x00000000
0x56384ecb8e14: 0x00100000      0x03000000      0x00700300      0x00000000
0x56384ecb8e24: 0x00100000      0x03000000      0x00c01200      0x00000000
0x56384ecb8e34: 0x00f00000      0x03000000      0x00505000      0x00000000
0x56384ecb8e44: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8e54: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8e64: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8e74: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8e84: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8e94: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8ea4: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8eb4: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8ec4: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8ed4: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8ee4: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8ef4: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8f04: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8f14: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8f24: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8f34: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8f44: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8f54: 0x00200000      0x03000000      0x00900000      0x00000000
0x56384ecb8f64: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8f74: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8f84: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8f94: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8fa4: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8fb4: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8fc4: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb8fd4: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb8fe4: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb8ff4: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb9004: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb9014: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb9024: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb9034: 0x00200000      0x03000000      0x00300000      0x00000000
0x56384ecb9044: 0x00100000      0x03000000      0x00500000      0x00000000
0x56384ecb9054: 0x00100000      0x03000000      0x00400000      0x00000000
0x56384ecb9064: 0x00300000      0x03000000      0x00300000      0x00000000
0x56384ecb9074: 0x00a00000      0x03000000      0x00900300      0x00000000
0x56384ecb9084: 0x00300000      0x03000000      0x00502c00      0x00000000
0x56384ecb9094: 0x00100000      0x03000000      0x00100000      0x00000000
0x56384ecb90a4: 0x00200000      0x03000000      0x00400800      0x00000000
0x56384ecb90b4: 0x00200000      0x03000000      0x00100000      0x00000000
0x56384ecb90c4: 0x00500000      0x03000000      0x00100000      0x00000000
0x56384ecb90d4: 0x00400000      0x03000000      0x00600000      0x00000000
0x56384ecb90e4: 0x00100000      0x03000000      0x00700d00      0x00000000
0x56384ecb90f4: 0x00100000      0x03000000      0x00200200      0x00000000
0x56384ecb9104: 0x00200000      0x03000000      0x00001500      0x00000000
0x56384ecb9114: 0x00900c00      0x03000000      0x00400000      0x00000000
0x56384ecb9124: 0x00c00100      0x03000000      0x00600000      0x00000000
0x56384ecb9134: 0x00100000      0x03000000      0x00100000      0x00000000
0x56384ecb9144: 0x00200000      0x03000000      0x00d00100      0x00000000
0x56384ecb9154: 0x00700000      0x03000000      0x00600300      0x00000000
0x56384ecb9164: 0x00100000      0x03000000      0x00e00400      0x00000000
0x56384ecb9174: 0x00300000      0x03000000      0x00e00000      0x00000000
0x56384ecb9184: 0x00100000      0x03000000      0x00800000      0x00000000
0x56384ecb9194: 0x00100000      0x03000000      0x00d00000      0x00000000
0x56384ecb91a4: 0x00200000      0x03000000      0x00500100      0x00000000
0x56384ecb91b4: 0x00200000      0x03000000      0x00500100      0x00000000
0x56384ecb91c4: 0x00200000      0x03000000      0x00600100      0x00000000
0x56384ecb91d4: 0x00200000      0x03000000      0x00500100      0x00000000
0x56384ecb91e4: 0x00200000      0x03000000      0x00500100      0x00000000
0x56384ecb91f4: 0x00200000      0x03000000      0x00500100      0x00000000
0x56384ecb9204: 0x00200000      0x03000000      0x00500100      0x00000000
0x56384ecb9214: 0x00200000      0x03000000      0x00600100      0x00000000
0x56384ecb9224: 0x00200000      0x03000000      0x00600100      0x00000000
0x56384ecb9234: 0x00200000      0x03000000      0x00700100      0x00000000
0x56384ecb9244: 0x00200000      0x03000000      0x00800100      0x00000000
0x56384ecb9254: 0x00200000      0x03000000      0x00800100      0x00000000
0x56384ecb9264: 0x00200000      0x03000000      0x00800100      0x00000000
0x56384ecb9274: 0x00200000      0x03000000      0x00f01100      0x00000000
0x56384ecb9284: 0x00000100      0x03000000      0x00500e00      0x00000000
0x56384ecb9294: 0x00300100      0x03000000      0x00500e00      0x00000000
0x56384ecb92a4: 0x00300100      0x03000000      0x00900d00      0x00000000
0x56384ecb92b4: 0x00000100      0x03000000      0x00100f00      0x00000000
0x56384ecb92c4: 0x00300100      0x03000000      0x00100f00      0x00000000
0x56384ecb92d4: 0x00300100      0x03000000      0x00a00d00      0x00000000
0x56384ecb92e4: 0x00a00000      0x03000000      0x00200f00      0x00000000
0x56384ecb92f4: 0x00c00000      0x03000000      0x00300f00      0x00000000
0x56384ecb9304: 0x00c00000      0x03000000      0x00c00d00      0x00000000
0x56384ecb9314: 0x00900000      0x03000000      0x00400f00      0x00000000
0x56384ecb9324: 0x00c00000      0x03000000      0x00100000      0x00000000
0x56384ecb9334: 0x00100000      0x03000000      0x00400f00      0x00000000
0x56384ecb9344: 0x00c00000      0x03000000      0x00100000      0x00000000
0x56384ecb9354: 0x00100000      0x03000000      0x00900200      0x00000000
0x56384ecb9364: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb9374: 0x00200000      0x03000000      0x00600200      0x00000000
0x56384ecb9384: 0x00200000      0x03000000      0x00200000      0x00000000
0x56384ecb9394: 0x00300000      0x03000000      0x00300200      0x00000000
0x56384ecb93a4: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb93b4: 0x00500000      0x03000000      0x00800200      0x00000000
0x56384ecb93c4: 0x00300000      0x03000000      0x00000200      0x00000000
0x56384ecb93d4: 0x00300000      0x03000000      0x00200900      0x00000000
0x56384ecb93e4: 0x00200000      0x03000000      0x00200000      0x00000000
0x56384ecb93f4: 0x00300000      0x03000000      0x00300200      0x00000000
0x56384ecb9404: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb9414: 0x00500000      0x03000000      0x00400500      0x00000000
0x56384ecb9424: 0x00900000      0x03000000      0x00f00000      0x00000000
0x56384ecb9434: 0x00000200      0x03000000      0x00200400      0x00000000
0x56384ecb9444: 0x00e00000      0x03000000      0x00d00600      0x00000000
0x56384ecb9454: 0x00900000      0x03000000      0x00c00000      0x00000000
0x56384ecb9464: 0x00300000      0x03000000      0x00600000      0x00000000
0x56384ecb9474: 0x00300000      0x03000000      0x00000100      0x00000000
0x56384ecb9484: 0x00900000      0x03000000      0x00a00000      0x00000000
0x56384ecb9494: 0x00700000      0x03000000      0x00800000      0x00000000
0x56384ecb94a4: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb94b4: 0x00200000      0x03000000      0x00600200      0x00000000
0x56384ecb94c4: 0x00200000      0x03000000      0x00200000      0x00000000
0x56384ecb94d4: 0x00300000      0x03000000      0x00300200      0x00000000
0x56384ecb94e4: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb94f4: 0x00500000      0x03000000      0x00400500      0x00000000
0x56384ecb9504: 0x00900000      0x03000000      0x00e00200      0x00000000
0x56384ecb9514: 0x00900000      0x03000000      0x00200100      0x00000000
0x56384ecb9524: 0x00000200      0x03000000      0x00d00300      0x00000000
0x56384ecb9534: 0x00000200      0x03000000      0x00000400      0x00000000
0x56384ecb9544: 0x00e00000      0x03000000      0x00d00600      0x00000000
0x56384ecb9554: 0x00900000      0x03000000      0x00c00000      0x00000000
0x56384ecb9564: 0x00300000      0x03000000      0x00600000      0x00000000
0x56384ecb9574: 0x00300000      0x03000000      0x00000100      0x00000000
0x56384ecb9584: 0x00900000      0x03000000      0x00a00000      0x00000000
0x56384ecb9594: 0x00700000      0x03000000      0x00f00200      0x00000000
0x56384ecb95a4: 0x00800000      0x03000000      0x00c00000      0x00000000
0x56384ecb95b4: 0x00200000      0x03000000      0x00700000      0x00000000
0x56384ecb95c4: 0x00200000      0x03000000      0x00000100      0x00000000
0x56384ecb95d4: 0x00800000      0x03000000      0x00a00000      0x00000000
0x56384ecb95e4: 0x00800000      0x03000000      0x00b00000      0x00000000
0x56384ecb95f4: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb9604: 0x00200000      0x03000000      0x00600200      0x00000000
0x56384ecb9614: 0x00200000      0x03000000      0x00200000      0x00000000
0x56384ecb9624: 0x00300000      0x03000000      0x00300200      0x00000000
0x56384ecb9634: 0x00100000      0x03000000      0x00300000      0x00000000
0x56384ecb9644: 0x00500000      0x03000000      0x00900200      0x00000000
0x56384ecb9654: 0x00300000      0x03000000      0x00000200      0x00000000
0x56384ecb9664: 0x00300000      0x03000000      0x0004d000      0x00000000
0x56384ecb9674: 0x00009000      0x00000003      0x0002d000      0x00000000
0x56384ecb9684: 0x00009000      0x00000003      0x00014000      0x00000000
0x56384ecb9694: 0x00001000      0x00000003      0x00003000      0x00000000
0x56384ecb96a4: 0x00002000      0x00000003      0x00025000      0x00000000
0x56384ecb96b4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecb96c4: 0x00003000      0x00000003      0x00023000      0x00000000
0x56384ecb96d4: 0x00001000      0x00000003      0x00003000      0x00000000
0x56384ecb96e4: 0x00005000      0x00000003      0x00027000      0x00000000
0x56384ecb96f4: 0x00002000      0x00000003      0x0001a000      0x00000000
0x56384ecb9704: 0x00003000      0x00000003      0x0001d000      0x00000000
0x56384ecb9714: 0x00003000      0x00000003      0x0001d000      0x00000000
0x56384ecb9724: 0x00003000      0x00000003      0x00076000      0x00000000
0x56384ecb9734: 0x00020000      0x00000003      0x00047000      0x00000000
0x56384ecb9744: 0x0000d000      0x00000003      0x0006d000      0x00000000
0x56384ecb9754: 0x00009000      0x00000003      0x0000f000      0x00000000
0x56384ecb9764: 0x00004000      0x00000003      0x00008000      0x00000000
0x56384ecb9774: 0x00004000      0x00000003      0x00010000      0x00000000
0x56384ecb9784: 0x00009000      0x00000003      0x00009000      0x00000000
0x56384ecb9794: 0x00008000      0x00000003      0x00031000      0x00000000
0x56384ecb97a4: 0x00008000      0x00000003      0x00010000      0x00000000
0x56384ecb97b4: 0x00003000      0x00000003      0x00007000      0x00000000
0x56384ecb97c4: 0x00003000      0x00000003      0x00010000      0x00000000
0x56384ecb97d4: 0x00008000      0x00000003      0x0000a000      0x00000000
0x56384ecb97e4: 0x00007000      0x00000003      0x00046000      0x00000000
0x56384ecb97f4: 0x00004000      0x00000003      0x00009000      0x00000000
0x56384ecb9804: 0x00004000      0x00000003      0x00014000      0x00000000
0x56384ecb9814: 0x00009000      0x00000003      0x0000b000      0x00000000
0x56384ecb9824: 0x00007000      0x00000003      0x00008000      0x00000000
0x56384ecb9834: 0x00002000      0x00000003      0x00235000      0x00000000
0x56384ecb9844: 0x00001000      0x00000003      0x0011a000      0x00000000
0x56384ecb9854: 0x0000c000      0x00000003      0x00002000      0x00000000
0x56384ecb9864: 0x00001000      0x00000003      0x00001000      0x00000000
0x56384ecb9874: 0x00001000      0x00000003      0x00064000      0x00000000
0x56384ecb9884: 0x00002000      0x00000003      0x0001a000      0x00000000
0x56384ecb9894: 0x00002000      0x00000003      0x0003e000      0x00000000
0x56384ecb98a4: 0x0002c000      0x00000003      0x00005000      0x00000000
0x56384ecb98b4: 0x00002000      0x00000003      0x00060000      0x00000000
0x56384ecb98c4: 0x00001000      0x00000003      0x0000d000      0x00000000
0x56384ecb98d4: 0x00048000      0x00000003      0x00001000      0x00000000
0x56384ecb98e4: 0x00006000      0x00000003      0x0000f000      0x00000000
0x56384ecb98f4: 0x00013000      0x00000003      0x00878000      0x00000000
0x56384ecb9904: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9914: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9924: 0x00005000      0x00000003      0x00003000      0x00000000
0x56384ecb9934: 0x000ff000      0x00000003      0x00019000      0x00000000
0x56384ecb9944: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9954: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9964: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9974: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9984: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9994: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb99a4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb99b4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb99c4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb99d4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb99e4: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecb99f4: 0x0000f000      0x00000003      0x001a0000      0x00000000
0x56384ecb9a04: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9a14: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9a24: 0x00005000      0x00000003      0x00003000      0x00000000
0x56384ecb9a34: 0x000ff000      0x00000003      0x00019000      0x00000000
0x56384ecb9a44: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9a54: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9a64: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9a74: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9a84: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9a94: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9aa4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9ab4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9ac4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9ad4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb9ae4: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecb9af4: 0x0000f000      0x00000003      0x001a0000      0x00000000
0x56384ecb9b04: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9b14: 0x00005000      0x00000003      0x00003000      0x00000000
0x56384ecb9b24: 0x000ff000      0x00000003      0x00019000      0x00000000
0x56384ecb9b34: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9b44: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9b54: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9b64: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9b74: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9b84: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9b94: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9ba4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9bb4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9bc4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb9bd4: 0x00001000      0x00000003      0x00009000      0x00000000
0x56384ecb9be4: 0x00010000      0x00000003      0x0019e000      0x00000000
0x56384ecb9bf4: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9c04: 0x00005000      0x00000003      0x00004000      0x00000000
0x56384ecb9c14: 0x000ff000      0x00000003      0x00018000      0x00000000
0x56384ecb9c24: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9c34: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9c44: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9c54: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9c64: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9c74: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9c84: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9c94: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9ca4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9cb4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb9cc4: 0x00001000      0x00000003      0x00009000      0x00000000
0x56384ecb9cd4: 0x00010000      0x00000003      0x001bb000      0x00000000
0x56384ecb9ce4: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9cf4: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9d04: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9d14: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9d24: 0x00005000      0x00000003      0x00006000      0x00000000
0x56384ecb9d34: 0x000ff000      0x00000003      0x00018000      0x00000000
0x56384ecb9d44: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9d54: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9d64: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9d74: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9d84: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9d94: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9da4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9db4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9dc4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9dd4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb9de4: 0x00001000      0x00000003      0x0000a000      0x00000000
0x56384ecb9df4: 0x0000f000      0x00000003      0x001a1000      0x00000000
0x56384ecb9e04: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9e14: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9e24: 0x00005000      0x00000003      0x00005000      0x00000000
0x56384ecb9e34: 0x000ff000      0x00000003      0x00017000      0x00000000
0x56384ecb9e44: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9e54: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9e64: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9e74: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9e84: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9e94: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9ea4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9eb4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9ec4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9ed4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb9ee4: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecb9ef4: 0x0000f000      0x00000003      0x001a0000      0x00000000
0x56384ecb9f04: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecb9f14: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecb9f24: 0x00005000      0x00000003      0x00005000      0x00000000
0x56384ecb9f34: 0x000ff000      0x00000003      0x00017000      0x00000000
0x56384ecb9f44: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9f54: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9f64: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9f74: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9f84: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9f94: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9fa4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9fb4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecb9fc4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecb9fd4: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecb9fe4: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecb9ff4: 0x0000f000      0x00000003      0x001a0000      0x00000000
0x56384ecba004: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecba014: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba024: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba034: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba044: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba054: 0x00005000      0x00000003      0x00007000      0x00000000
0x56384ecba064: 0x000ff000      0x00000003      0x00017000      0x00000000
0x56384ecba074: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba084: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba094: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba0a4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba0b4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba0c4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba0d4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba0e4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba0f4: 0x00002000      0x00000003      0x00002000      0x00000000
0x56384ecba104: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba114: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecba124: 0x00010000      0x00000003      0x001a1000      0x00000000
0x56384ecba134: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecba144: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba154: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba164: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba174: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba184: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba194: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba1a4: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba1b4: 0x00005000      0x00000003      0x00009000      0x00000000
0x56384ecba1c4: 0x000ff000      0x00000003      0x00018000      0x00000000
0x56384ecba1d4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba1e4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba1f4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba204: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba214: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba224: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba234: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba244: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba254: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba264: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecba274: 0x00001000      0x00000003      0x0000a000      0x00000000
0x56384ecba284: 0x0000f000      0x00000003      0x001a3000      0x00000000
0x56384ecba294: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecba2a4: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba2b4: 0x00005000      0x00000003      0x00005000      0x00000000
0x56384ecba2c4: 0x000ff000      0x00000003      0x00017000      0x00000000
0x56384ecba2d4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba2e4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba2f4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba304: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba314: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba324: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba334: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba344: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba354: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba364: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecba374: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecba384: 0x0000f000      0x00000003      0x001a0000      0x00000000
0x56384ecba394: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecba3a4: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba3b4: 0x00005000      0x00000003      0x00005000      0x00000000
0x56384ecba3c4: 0x000ff000      0x00000003      0x00017000      0x00000000
0x56384ecba3d4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba3e4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba3f4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba404: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba414: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba424: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba434: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba444: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba454: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba464: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecba474: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecba484: 0x0000f000      0x00000003      0x001a0000      0x00000000
0x56384ecba494: 0x00009000      0x00000003      0x00001000      0x00000000
0x56384ecba4a4: 0x00007000      0x00000003      0x00001000      0x00000000
0x56384ecba4b4: 0x00005000      0x00000003      0x00005000      0x00000000
0x56384ecba4c4: 0x000ff000      0x00000003      0x00017000      0x00000000
0x56384ecba4d4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba4e4: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba4f4: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba504: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba514: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba524: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba534: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba544: 0x00003000      0x00000003      0x00002000      0x00000000
0x56384ecba554: 0x00002000      0x00000003      0x00001000      0x00000000
0x56384ecba564: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecba574: 0x00002000      0x00000003      0x00009000      0x00000000
0x56384ecba584: 0x0000f000      0x00000003      0x00194000      0x00000000
0x56384ecba594: 0x00004000      0x00000003      0x00007000      0x00000000
0x56384ecba5a4: 0x00014000      0x00000003      0x00007000      0x00000000
0x56384ecba5b4: 0x00012000      0x00000003      0x0000b000      0x00000000
0x56384ecba5c4: 0x00005000      0x00000003      0x00002000      0x00000000
0x56384ecba5d4: 0x00001000      0x00000003      0x00021000      0x00000000
0x56384ecba5e4: 0x00001000      0x00000003      0x00020000      0x00000000
0x56384ecba5f4: 0x00001000      0x00000003      0x00029000      0x00000000
0x56384ecba604: 0x00003000      0x00000003      0x001db000      0x00000000
0x56384ecba614: 0x0003f000      0x00000003      0x000e7000      0x00000000
0x56384ecba624: 0x00001000      0x00000003      0x0067a000      0x00000000
0x56384ecba634: 0x00001000      0x00000003      0x0006e000      0x00000000
0x56384ecba644: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecba654: 0x00005000      0x00000003      0x00009000      0x00000000
0x56384ecba664: 0x00018000      0x00000003      0x0029a000      0x00000000
0x56384ecba674: 0x00001000      0x00000003      0x00003000      0x00000000
0x56384ecba684: 0x00005000      0x00000003      0x00009000      0x00000000
0x56384ecba694: 0x00010000      0x00000003      0x00001000      0x00000000
0x56384ecba6a4: 0x0000b000      0x00000003      0x00006000      0x00000000
0x56384ecba6b4: 0x00004000      0x00000003      0x00003000      0x00000000
0x56384ecba6c4: 0x0000e000      0x00000003      0x0000d000      0x00000000
0x56384ecba6d4: 0x00008000      0x00000003      0x001bb000      0x00000000
0x56384ecba6e4: 0x00001000      0x00000003      0x00174000      0x00000000
0x56384ecba6f4: 0x00007000      0x00000003      0x00109000      0x00000000
0x56384ecba704: 0x0000c000      0x00000003      0x00001000      0x00000000
0x56384ecba714: 0x00012000      0x00000003      0x00003000      0x00000000
0x56384ecba724: 0x00003000      0x00000003      0x00001000      0x00000000
0x56384ecba734: 0x00003000      0x00000003      0x002a4000      0x00000000
0x56384ecba744: 0x00001000      0x00000003      0x00038000      0x00000000
0x56384ecba754: 0x00007000      0x00000003      0x00069000      0x00000000
0x56384ecba764: 0x00001000      0x00000003      0x0003f000      0x00000000
0x56384ecba774: 0x00001000      0x00000003      0x00015000      0x00000000
0x56384ecba784: 0x00001000      0x00000003      0x00002000      0x00000000
0x56384ecba794: 0x00001000      0x00000003      0x00010000      0x00000000
0x56384ecba7a4: 0x00001000      0x00000003      0x0001d000      0x00000000
0x56384ecba7b4: 0x00001000      0x00000003      0x0001b000      0x00000000
0x56384ecba7c4: 0x00006000      0x00000003      0x0000d000      0x00000000
0x56384ecba7d4: 0x00001000      0x00000003      0x00012000      0x00000000
0x56384ecba7e4: 0x00001000      0x00000003      0x00012000      0x00000000
0x56384ecba7f4: 0x00001000      0x00000003      0x00012000      0x00000000
0x56384ecba804: 0x00001000      0x00000003      0x00015000      0x00000000
0x56384ecba814: 0x00001000      0x00000003      0x00002000      0x00000000
0x56384ecba824: 0x00001000      0x00000003      0x00014000      0x00000000
0x56384ecba834: 0x00001000      0x00000003      0x00015000      0x00000000
0x56384ecba844: 0x00001000      0x00000003      0x0010e000      0x00000000
0x56384ecba854: 0x00001000      0x00000003      0x00015000      0x00000000
0x56384ecba864: 0x00001000      0x00000003      0x00067000      0x00000000
0x56384ecba874: 0x00001000      0x00000003      0x00015000      0x00000000
0x56384ecba884: 0x00001000      0x00000003      0x00023000      0x00000000
0x56384ecba894: 0x00001000      0x00000003      0x00019000      0x00000000
0x56384ecba8a4: 0x00007000      0x00000003      0x00016000      0x00000000
0x56384ecba8b4: 0x00001000      0x00000003      0x0001c000      0x00000000
0x56384ecba8c4: 0x00007000      0x00000003      0x0003d000      0x00000000
0x56384ecba8d4: 0x00001000      0x00000003      0x00011000      0x00000000
0x56384ecba8e4: 0x00001000      0x00000003      0x00004000      0x00000000
0x56384ecba8f4: 0x00001000      0x00000003      0x00020000      0x00000000
0x56384ecba904: 0x00002000      0x00000003      0x0000f000      0x00000000
0x56384ecba914: 0x00001000      0x00000003      0x00001000      0x00000000
0x56384ecba924: 0x00001000      0x00000003      0x0001b000      0x00000000
0x56384ecba934: 0x00005000      0x00000003      0x00017000      0x00000000
0x56384ecba944: 0x00001000      0x00000003      0x0001e000      0x00000000
0x56384ecba954: 0x00005000      0x00000003      0x0003d000      0x00000000
0x56384ecba964: 0x00001000      0x00000003      0x00011000      0x00000000
0x56384ecba974: 0x00001000      0x00000003      0x00004000      0x00000000
0x56384ecba984: 0x00001000      0x00000003      0x0004f000      0x00000000
0x56384ecba994: 0x00001000      0x00000003      0x00011000      0x00000000
0x56384ecba9a4: 0x00001000      0x00000003      0x00004000      0x00000000
0x56384ecba9b4: 0x00001000      0x00000003      0x00054000      0x00000000
0x56384ecba9c4: 0x00001000      0x00000003      0x00011000      0x00000000
0x56384ecba9d4: 0x00001000      0x00000003      0x00004000      0x00000000
0x56384ecba9e4: 0x00001000      0x00000003      0x0004f000      0x00000000
0x56384ecba9f4: 0x00001000      0x00000003      0x00011000      0x00000000
0x56384ecbaa04: 0x00001000      0x00000003      0x00004000      0x00000000
0x56384ecbaa14: 0x00001000      0x00000003      0x00020000      0x00000000
0x56384ecbaa24: 0x00002000      0x00000003      0x00011000      0x00000000
0x56384ecbaa34: 0x00001000      0x00000003      0x00021000      0x00000000
0x56384ecbaa44: 0x00001000      0x00000003      0x00018000      0x00000000
0x56384ecbaa54: 0x00001000      0x00000003      0x00021000      0x00000000
0x56384ecbaa64: 0x00001000      0x00000003      0x00011000      0x00000000
0x56384ecbaa74: 0x00001000      0x00000003      0x00131000      0x00000000
0x56384ecbaa84: 0x00001000      0x00000003      0x00039000      0x00000000
0x56384ecbaa94: 0x00007000      0x00000003      0x00038000      0x00000000
0x56384ecbaaa4: 0x00001000      0x00000003      0x00002000      0x00000000
0x56384ecbaab4: 0x00008000      0x00000003      0x0003c000      0x00000000
0x56384ecbaac4: 0x00007000      0x00000003      0x0003c000      0x00000000
0x56384ecbaad4: 0x00008000      0x00000003      0x00038000      0x00000000
0x56384ecbaae4: 0x00008000      0x00000003      0x0003a000      0x00000000
0x56384ecbaaf4: 0x00006000      0x00000003      0x0003d000      0x00000000
0x56384ecbab04: 0x00007000      0x00000003      0x00039000      0x00000000
0x56384ecbab14: 0x00008000      0x00000003      0x00051000      0x00000000
0x56384ecbab24: 0x00007000      0x00000003      0x00051000      0x00000000
0x56384ecbab34: 0x00008000      0x00000003      0x0004e000      0x00000000
0x56384ecbab44: 0x00008000      0x00000003      0x00039000      0x00000000
0x56384ecbab54: 0x00001000      0x00000003      0x00003000      0x00000000
0x56384ecbab64: 0x00007000      0x00000003      0x00039000      0x00000000
0x56384ecbab74: 0x00007000      0x00000003      0x0003c000      0x00000000
0x56384ecbab84: 0x00007000      0x00000003      0x0003e000      0x00000000
0x56384ecbab94: 0x00007000      0x00000003      0x00037000      0x00000000
0x56384ecbaba4: 0x00001000      0x00000003      0x00003000      0x00000000
0x56384ecbabb4: 0x00007000      0x00000003      0x00053000      0x00000000
0x56384ecbabc4: 0x00008000      0x00000003      0x00053000      0x00000000
0x56384ecbabd4: 0x00007000      0x00000003      0x00052000      0x00000000
0x56384ecbabe4: 0x00006000      0x00000003      0x00037000      0x00000000
0x56384ecbabf4: 0x00001000      0x00000003      0x00003000      0x00000000
0x56384ecbac04: 0x00007000      0x00000003      0x0003d000      0x00000000
0x56384ecbac14: 0x00006000      0x00000003      0x00043000      0x00000000
0x56384ecbac24: 0x0000a000      0x00000003      0x00003000      0x00000000
0x56384ecbac34: 0x00002000      0x00000003      0x00013000      0x00000000
0x56384ecbac44: 0x00001000      0x00000003      0x0007d000      0x00000000
0x56384ecbac54: 0x0000a000      0x00000003      0x00004000      0x00000000
0x56384ecbac64: 0x00001000      0x00000003      0x0007b000      0x00000000
0x56384ecbac74: 0x00001000      0x00000003      0x00021000      0x00000000
0x56384ecbac84: 0x0000a000      0x00000003      0x00004000      0x00000000
0x56384ecbac94: 0x00002000      0x00000003      0x00015000      0x00000000
0x56384ecbaca4: 0x00001000      0x00000003      0x0007d000      0x00000000
0x56384ecbacb4: 0x0000b000      0x00000003      0x00015000      0x00000000
0x56384ecbacc4: 0x00002000      0x00000003      0x00014000      0x00000000
0x56384ecbacd4: 0x00003000      0x00000003      0x00079000      0x00000000
0x56384ecbace4: 0x00001000      0x00000003      0x00005000      0x00000000
0x56384ecbacf4: 0x00001000      0x00000003      0x000d6000      0x00000000
0x56384ecbad04: 0x00001000      0x00000003      0x00022000      0x00000000
0x56384ecbad14: 0x00001000      0x00000003      0x00daf000      0x00000000
0x56384ecbad24: 0x00001000      0x00000003      0x004de000      0x00000000
0x56384ecbad34: 0x00001000      0x00000003      0x01340000      0x00000000
0x56384ecbad44: 0x00001000      0x00000003      0x00564000      0x00000000
0x56384ecbad54: 0x00001000      0x00000003      0x001c5000      0x00000000
0x56384ecbad64: 0x00001000      0x00000003      0x00b05000      0x00000000
0x56384ecbad74: 0x0000f000      0x00000003      0x00162000      0x00000000
0x56384ecbad84: 0x0000f000      0x00000003      0x00125000      0x00000000
0x56384ecbad94: 0x0000f000      0x00000003      0x00055000      0x00000000
0x56384ecbada4: 0x0000f000      0x00000003      0x00361000      0x00000000
0x56384ecbadb4: 0x00001000      0x00000003      0x004fc000      0x00000000
0x56384ecbadc4: 0x0000d000      0x00000003      0x0000a000      0x00000000
0x56384ecbadd4: 0x00266000      0x00000003      0x01570000      0x00000000
0x56384ecbade4: 0x00001000      0x00000003      0x00091000      0x00000000
0x56384ecbadf4: 0x00001000      0x00000003      0x02834000      0x00000000
0x56384ecbae04: 0x00004000      0x00000003      0x0027f000      0x00000000
0x56384ecbae14: 0x00004000      0x00000003      0x00387000      0x00000000
0x56384ecbae24: 0x0000f000      0x00000003      0x000e7000      0x00000000
0x56384ecbae34: 0x0000f000      0x00000003      0x00096000      0x00000000
0x56384ecbae44: 0x00001000      0x00000003      0x0016c000      0x00000000
0x56384ecbae54: 0x00001000      0x00000003      0x00616000      0x00000000
0x56384ecbae64: 0x00001000      0x00000003      0x00248000      0x00000000
0x56384ecbae74: 0x00001000      0x00000003      0x0000b000      0x00000000
0x56384ecbae84: 0x00001000      0x00000003      0x00042000      0x00000000
0x56384ecbae94: 0x00001000      0x00000003      0x00002000      0x00000000
0x56384ecbaea4: 0x00002000      0x00000003      0x00005000      0x00000000
0x56384ecbaeb4: 0x00001000      0x00000003      0x00005000      0x00000000
0x56384ecbaec4: 0x00004000      0x00000003      0x00004000      0x00000000
0x56384ecbaed4: 0x00001000      0x00000003      0x00005000      0x00000000
0x56384ecbaee4: 0x00001000      0x00000003      0x0120f000      0x00000000
0x56384ecbaef4: 0x00001000      0x00000003      0x013c0000      0x00000000
0x56384ecbaf04: 0x00001000      0x00000003      0x0000b000      0x00000000
0x56384ecbaf14: 0x00001000      0x00000003      0x0006f000      0x00000000
0x56384ecbaf24: 0x00001000      0x00000003      0x00002000      0x00000000
0x56384ecbaf34: 0x00002000      0x00000003      0x00005000      0x00000000
0x56384ecbaf44: 0x00001000      0x00000003      0x00005000      0x00000000
0x56384ecbaf54: 0x00004000      0x00000003      0x00004000      0x00000000
0x56384ecbaf64: 0x00001000      0x00000003      0x00004000      0x00000000
0x56384ecbaf74: 0x00002000      0x00000003      0x005d7000      0x00000000
0x56384ecbaf84: 0x00001000      0x00000003      0x003c8000      0x00000000
0x56384ecbaf94: 0x00001000      0x00000003      0x00007000      0x00000000
0x56384ecbafa4: 0x008b3000      0x00000003      0x008c3000      0x00000000
0x56384ecbafb4: 0x81164000      0x00000003

The first pair of columns on each line is basically (length, flags=hole+zero (3));
the second pair is (length, flags=allocated (0)).

Here's what's interesting: at address 0x56384ecb9674 (entry 568
decimal), the representation of the "flags" entry switches from big
endian to little endian. I don't understand that at all. The
representations of the lengths (BE vs. LE) also seem inconsistent!

Without having a clear representation (BE vs. LE) I can't even sum these
lengths and deduce anything. I'll probably have to build Rich's debug
patches tomorrow.

Comment 62 Richard W.M. Jones 2022-01-13 15:38:32 UTC
> nbd_block_status is repeatedly entered with count=2147483648 (2GiB --
> the chunk size of "get_disk_allocated") and offset=701497344
> (0x29D00000). There is no progress.

To be clear, nbd_block_status is being called over and over with
the same count and offset?  Are 2186 extents returned for each
request or some smaller number?

Anyway, as it is likely that this is a bug in libnbd, I think we
should file a new BZ (better than changing the component of this bug,
which is already very lengthy).

Comment 63 Laszlo Ersek 2022-01-13 18:59:35 UTC
(In reply to Richard W.M. Jones from comment #62)
> > nbd_block_status is repeatedly entered with count=2147483648 (2GiB
> > -- the chunk size of "get_disk_allocated") and offset=701497344
> > (0x29D00000). There is no progress.
>
> To be clear, nbd_block_status is being called over and over with
> the same count and offset?

Yes, that is my impression. I set a breakpoint with gdb in
nbd_block_status (or in the _unlocked (IIRC) helper function called by
the former), and it would fire approx. once per second, if left to run.
And every time, the count and offset parameters would be the same.

> Are 2186 extents returned for each request or some smaller number?

2186 extents returned for the above-specified request, every time
(offset=701497344, count=2147483648).

I calculated the number of extents manually, by following the libnbd
internals -- byte-swapping the length (byte count), then subtracting 4,
and dividing by 4.
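Spelled out (a minimal sketch of that arithmetic; the helper below is only for
illustration):

(* Number of uint32 words in the "entries" array, given the byte-swapped
   payload length (in bytes) of the block-status reply: subtract the 4-byte
   metadata context ID, then divide by the 4-byte word size. *)
let nr_entries_of_payload_len len = (len - 4) / 4

(* Working backwards, a count of 2186 implies a payload of 2186 * 4 + 4 = 8748
   bytes; if those 2186 are raw array elements, they form 1093 (length, flags)
   pairs. *)
let () = assert (nr_entries_of_payload_len 8748 = 2186)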

Also I verified from the array's zeroth element (the meta context's ID)
that it was the "base:allocation" context.

> Anyway as it is likely that this is a bug in libnbd, I think we should
> have a new BZ filed (better than changing the component of this bug
> which is already very lengthy).

I agree.

So anyway, here's why I've logged back in this evening, briefly: during
dinner the following occurred to me.

<https://github.com/libguestfs/virt-v2v/commit/27c056cdc6aa>:

>            (* Get the list of extents, using a 2GiB chunk size as hint. *)
>            let size = NBD.get_size nbd
>            and allocated = ref 0_L
>            and fetch_offset = ref 0_L in
>            while !fetch_offset < size do
>              let remaining = size -^ !fetch_offset in
>              let fetch_size = min 0x8000_0000_L remaining in
>              NBD.block_status nbd fetch_size !fetch_offset
>                (fun ctx offset entries err ->
>                   assert (ctx = alloc_ctx);
>                   for i = 0 to Array.length entries / 2 - 1 do
>                     let len = Int64.of_int32 entries.(i * 2)
>                     and typ = entries.(i * 2 + 1) in
>                     if Int32.logand typ 1_l = 0_l then
>                       allocated := !allocated +^ len;
>                     fetch_offset := !fetch_offset +^ len
>                   done;
>                   0
>                )
>            done;
>            Some !allocated

<https://ocaml.org/api/Int32.html>:

> This module provides operations on the type int32 of signed 32-bit
> integers.

<https://ocaml.org/api/Int64.html>:

> This module provides operations on the type int64 of signed 64-bit
> integers.

<https://libguestfs.org/nbd_block_status.3.html>:

>    int (*callback) (void *user_data,
>                     const char *metacontext,
>                     uint64_t offset, uint32_t *entries,
>                     size_t nr_entries, int *error);

and

> The count parameter is a hint: the server may choose to return less
> status, or the final block may extend beyond the requested range.

and

> The entries array is an array of pairs of integers with the first
> entry in each pair being the length (in bytes) of the block and the
> second entry being a status/flags field which is specific to the
> metadata context. (The number of pairs passed to the function is
> nr_entries/2.)

So: regardless of how *small* a "count" we pass to nbd_block_status(),
the resultant "entries" array may terminate with a uint32_t (note:
unsigned!!!) chunk length that's >= 2GiB. Put differently, its most
significant bit may be set. I think the length 0x81164000 in comment 61
is actually an example of that (at that point the entries in the array
seem to be little-endian already).

Then, we do this in OCaml:

>                     let len = Int64.of_int32 entries.(i * 2)

"entries.(i * 2)" is negative in this case (!), and so is "len".

Which means that we move backwards with "fetch_offset".

Basically the OCaml "int32" type is unsuitable for representing the
elements of the "entries" array (from nbd_block_status).

It's late, so I'll only check tomorrow whether (a) this is what's happening,
and (b) we should fix this in the NBD OCaml bindings, or just work around it
in "Utils.get_disk_allocated". (We should be able to work around it easily:
if "len", of type "int64", is negative, add 0x1_0000_0000_L to it.)
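
A minimal OCaml sketch of both the failure mode and that workaround
(illustrative only, not the eventual fix):

(* 0x81164000 is the >= 2 GiB length visible at the very end of the dump in
   comment 61; as the OCaml int32 the binding hands us, it is already
   negative. *)
let len_raw : int32 = 0x81164000l

(* Sign-extended to int64, so "fetch_offset" would move backwards. *)
let len_bad = Int64.of_int32 len_raw        (* -2128986112L *)

(* The workaround described above: re-add 2^32 when the value went negative. *)
let len_fixed =
  if len_bad < 0L then Int64.add len_bad 0x1_0000_0000_L else len_bad
                                            (* 2165981184L = 0x81164000 *)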

Comment 64 Richard W.M. Jones 2022-01-13 19:05:38 UTC
Ugh, that's a problem in the bindings; we should be using int64 to
represent the entries.  That wasn't obvious at all.  However, it's
an easy fix in libnbd & we might as well fix this properly instead
of half-arsing it.

Comment 65 Laszlo Ersek 2022-01-14 06:54:34 UTC
(In reply to Laszlo Ersek from comment #63)

> It's late so I'll only check tomorrow if (a) this is what's happening,

Yes, it is:

> commit 99fd715b7dd0577d3b5c782a914a2d9b99d0807c
> Author: Laszlo Ersek <lersek>
> Date:   Fri Jan 14 06:07:59 2022 +0100
> 
>     lib/utils: get_disk_allocated: assert that NBD block length is positive
>     
>     It makes no sense for the NBD server to return a zero-length block, plus
>     it used to be a bug in the libnbd OCaml bindings to wrap 32-bit block
>     lengths with the MSB set to negative (signed) 32-bit integers (which would
>     be then widened to negative (signed) 64-bit integers).
>     
>     Any non-positive "len" value breaks the progression of "fetch_offset",
>     potentially leading to an infinite loop.
>     
>     Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2027598
>     Signed-off-by: Laszlo Ersek <lersek>
> 
> diff --git a/lib/utils.ml b/lib/utils.ml
> index 4c43a4b5161d..f599b0e32450 100644
> --- a/lib/utils.ml
> +++ b/lib/utils.ml
> @@ -197,6 +197,7 @@ let get_disk_allocated ~dir ~disknr =
>                    for i = 0 to Array.length entries / 2 - 1 do
>                      let len = Int64.of_int32 entries.(i * 2)
>                      and typ = entries.(i * 2 + 1) in
> +                    assert (len > 0_L);
>                      if Int32.logand typ 1_l = 0_l then
>                        allocated := !allocated +^ len;
>                      fetch_offset := !fetch_offset +^ len

Brew scratch build: task 42395203 (virt-v2v-1.45.96-1.rhbz2027598.el9).

Result:

# LIBGUESTFS_BACKEND=direct virt-v2v \
  -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 \
  -it vddk \
  -io vddk-libdir=/root/vddk_libdir/latest \
  -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  \
  -ip /v2v-ops/esxpw  \
  -o rhv \
  -os 10.73.224.195:/home/nfs_export \
  -b ovirtmgmt \
  esx6.7-rhel8.4-x86_64

[   1.8] Opening the source
[   7.7] Inspecting the source
[  13.9] Checking for sufficient free disk space in the guest
[  13.9] Converting Red Hat Enterprise Linux 8.4 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  49.0] Mapping filesystem data to avoid copying unused and blank areas
[  50.2] Closing the overlay
[  50.3] Assigning disks to buses
[  50.3] Checking if the guest needs BIOS or UEFI to boot
[  51.8] Copying disk 1/1
█ 100% [****************************************]
[ 501.1] Creating output metadata
libnbd: extent: uncaught OCaml exception: Assert_failure("utils.ml", 200, 20)
Aborted (core dumped)

Comment 66 Laszlo Ersek 2022-01-14 06:58:52 UTC
(In reply to Richard W.M. Jones from comment #64)
> Ugh, that's a problem in the bindings, we should be using int64 to
> represent the entries.  That wasn't obvious at all.  However it's
> an easy fix in libnbd & we might as well fix this properly instead
> of half-arsing it.

Yup, here's the bug (in the generated file "ocaml/NBD.ml" in libnbd):

> external block_status : ?flags:CMD_FLAG.t list -> t -> int64 -> int64 -> (string -> int64 -> int32 array -> int ref -> int) -> unit
>     = "nbd_internal_ocaml_nbd_block_status"                                                  ^^^^^

I would argue that this is a general problem in the generator (likely
rooted in the OCaml type system).

The *source* that drives this particular output is in file
"generator/API.ml":

> let extent_closure = {
>   cbname = "extent";
>   cbargs = [ CBString "metacontext";
>              CBUInt64 "offset";
>              CBArrayAndLen (UInt32 "entries",
>                             "nr_entries");
>              CBMutable (Int "error") ]
> }

and here we correctly use UInt32. However, at least the following
mappings are incorrect (unsafe):

> (* String representation of a single OCaml arg. *)
> and ocaml_arg_to_string = function
> [...]
>   | SizeT _ -> "int" (* OCaml int type is always sufficient for counting *)
> [...]
>   | UInt _ | UIntPtr _ -> "int"
>   | UInt32 _ -> "int32"
>   | UInt64 _ -> "int64"

From these, at least SizeT, UInt and UIntPtr are unlikely to occur on
the network (in wire formats); UInt32 and UInt64 seem like real problems
though.

We can update

>              CBArrayAndLen (UInt32 "entries",
>                             "nr_entries");

to UInt64 in "extent_closure" for now, but that will not solve the
general problem.

Comment 67 Laszlo Ersek 2022-01-14 07:28:27 UTC
So, this doesn't work:

> diff --git a/generator/API.ml b/generator/API.ml
> index cf2e7543f9b7..f287711c6f6c 100644
> --- a/generator/API.ml
> +++ b/generator/API.ml
> @@ -138,7 +138,7 @@
>    cbname = "extent";
>    cbargs = [ CBString "metacontext";
>               CBUInt64 "offset";
> -             CBArrayAndLen (UInt32 "entries",
> +             CBArrayAndLen (UInt64 "entries",
>                              "nr_entries");
>               CBMutable (Int "error") ]
>  }

Because (minimally):

> Fatal error: exception File "C.ml", line 241, characters 27-33: Assertion failed
> Raised at Utils.pr_wrap in file "utils.ml", line 164, characters 43-52
> Called from C.print_cbarg_list in file "C.ml", line 222, characters 4-73
> Called from C.print_closure_structs.(fun) in file "C.ml", line 277, characters 6-40
> Called from Stdlib__list.iter in file "list.ml", line 110, characters 12-15
> Called from C.print_closure_structs in file "C.ml", line 273, characters 2-407
> Called from C.generate_include_libnbd_h in file "C.ml", line 391, characters 2-26
> Called from Utils.output_to in file "utils.ml", line 256, characters 2-6
> Called from Generator in file "generator.ml", line 39, characters 2-58

In function "print_cbarg_list", file "generator/C.ml":

>       match cbarg with
>       | CBArrayAndLen (UInt32 n, len) ->
>          if types then pr "uint32_t *";
>          pr "%s, " n;
>          if types then pr "size_t ";
>          pr "%s" len
>       | CBArrayAndLen _ -> assert false

And anyway, if we changed the public API definition, *all* language
bindings would be affected by that, even those that are not affected by
the uint32_t -> int32_t conversion bug.

Alas, this is really a problem with the generator. Refer to function
extent_wrapper_locked() in the generated file "ocaml/nbd-c.c":

> /* Wrapper for extent callback. */
> static int
> extent_wrapper_locked (void *user_data, const char *metacontext,
>                        uint64_t offset, uint32_t *entries,
>                        size_t nr_entries, int *error)
> {
>   CAMLparam0 ();
>   CAMLlocal4 (metacontextv, offsetv, entriesv, errorv);
>   [...]
>   entriesv = nbd_internal_ocaml_alloc_int32_array (entries, nr_entries);
>   [...]
>   args[2] = entriesv;
>   [...]
> }

Note that the function takes "uint32_t *entries" correctly, but then
transforms it to an int32_t array using the
nbd_internal_ocaml_alloc_int32_array() helper, from "ocaml/helpers.c":

> value
> nbd_internal_ocaml_alloc_int32_array (uint32_t *a, size_t len)
> {
>   CAMLparam0 ();
>   CAMLlocal2 (v, rv);
>   size_t i;
>
>   rv = caml_alloc (len, 0);
>   for (i = 0; i < len; ++i) {
>     v = caml_copy_int32 (a[i]);
>     Store_field (rv, i, v);
>   }
>
>   CAMLreturn (rv);
> }

The caml_copy_int32() call is the actual bug: it takes a uint32_t, and
turns it into a signed 32-bit OCaml integer.

It's really sad that OCaml has no unsigned integer types.

I'll try to look into replacing nbd_internal_ocaml_alloc_int32_array()
with nbd_internal_ocaml_alloc_int64_from_uint32_array().

Comment 68 Richard W.M. Jones 2022-01-14 10:12:27 UTC
Right, the fix is to translate CBArrayAndLen (UInt32 ...) to an array
of int64, *only* in the OCaml bindings.  The other bindings are presumably
correct if they have a native 32 bit unsigned type.  We need a new bug for this,
but it's an easy fix.

BTW, about OCaml unsigned types: there are at least two libraries adding them,
but we're trying to minimize dependencies (especially for RHEL):
https://github.com/andrenth/ocaml-stdint
https://github.com/ocamllabs/ocaml-integers

Comment 69 Laszlo Ersek 2022-01-14 13:52:28 UTC
[Libguestfs] [v2v PATCH 0/2] lib/utils: get_disk_allocated: adopt the int64 element type for "entries"
https://listman.redhat.com/archives/libguestfs/2022-January/msg00094.html
Message-Id: <20220114135048.25243-1-lersek>
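
Roughly the shape the consumer takes once "entries" is an int64 array (a
simplified sketch of the direction of that series, summation only; see the
patches at the link for the real thing):

(* Sum the lengths of the allocated extents; bit 0 of each flags word is
   NBD_STATE_HOLE, so an extent counts as allocated when that bit is clear. *)
let sum_allocated (entries : int64 array) : int64 =
  let allocated = ref 0L in
  for i = 0 to Array.length entries / 2 - 1 do
    let len = entries.(i * 2)
    and typ = entries.(i * 2 + 1) in
    if Int64.logand typ 1L = 0L then
      allocated := Int64.add !allocated len
  done;
  !allocated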

Comment 72 Laszlo Ersek 2022-01-17 14:31:52 UTC
(In reply to Laszlo Ersek from comment #69)
> [Libguestfs] [v2v PATCH 0/2] lib/utils: get_disk_allocated: adopt the int64 element type for "entries"
> https://listman.redhat.com/archives/libguestfs/2022-January/msg00094.html
> Message-Id: <20220114135048.25243-1-lersek>

Upstream commit range 4578887821d8..a2afed32d8b1.

Comment 76 Vera 2022-01-20 12:37:10 UTC
Verified with the versions:
qemu-img-6.2.0-4.el9.x86_64
libvirt-libs-8.0.0-1.el9.x86_64
libguestfs-1.46.1-2.el9.x86_64
guestfs-tools-1.46.1-6.el9.x86_64
nbdkit-1.28.4-2.el9.x86_64
virt-v2v-1.45.97-1.el9.x86_64
rhv 4.4.8.3-0.10.el8ev

Steps:
1.Convert a guest from VMware to rhv4.4 via -o rhv by v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/root/vddk_libdir/latest -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -ip /v2v-ops/esxpw   -o rhv -os 10.73.224.195:/home/nfs_export -b ovirtmgmt  esx6.7-rhel8.4-x86_64
[   1.8] Opening the source
[   8.4] Inspecting the source
[  17.2] Checking for sufficient free disk space in the guest
[  17.2] Converting Red Hat Enterprise Linux 8.4 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  65.5] Mapping filesystem data to avoid copying unused and blank areas
[  66.0] Closing the overlay
[  66.3] Assigning disks to buses
[  66.3] Checking if the guest needs BIOS or UEFI to boot
[  68.2] Copying disk 1/1
█ 100% [****************************************]
[ 459.9] Creating output metadata
[ 467.2] Finishing off

2. Check that the guest is listed under "Storage --> Storage Domains --> nfs_export --> VM Import" on rhv4.4 after conversion.

3. Import the guest and check its status on rhv4.4.

Moving to Verified.

Comment 78 errata-xmlrpc 2022-05-17 13:41:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: virt-v2v), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:2566

