Bug 2089623 - Virt-v2v can't convert rhel8.6 guest from VMware on rhel8.6
Summary: Virt-v2v can't convert rhel8.6 guest from VMware on rhel8.6
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virt-v2v
Version: 8.6
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
: ---
Assignee: Richard W.M. Jones
QA Contact: Xiaodai Wang
URL:
Whiteboard:
Duplicates: 2089609 (view as bug list)
Depends On:
Blocks: 2093415
 
Reported: 2022-05-24 07:48 UTC by mxie@redhat.com
Modified: 2022-11-08 09:45 UTC
CC List: 10 users

Fixed In Version: virt-v2v-1.42.0-20.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2093415 (view as bug list)
Environment:
Last Closed: 2022-11-08 09:19:55 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-123123 0 None None None 2022-05-24 07:50:46 UTC
Red Hat Product Errata RHSA-2022:7472 0 None None None 2022-11-08 09:20:34 UTC

Description mxie@redhat.com 2022-05-24 07:48:47 UTC
Description of problem:
Virt-v2v can't convert rhel8.6 guest from VMware on rhel8.6

Version-Release number of selected component (if applicable):
virt-v2v-1.42.0-18.module+el8.6.0+14480+c0a3aa0f.x86_64
libguestfs-1.44.0-5.module+el8.6.0+14480+c0a3aa0f.x86_64
libvirt-libs-8.0.0-5.2.module+el8.6.0+15256+3a0914fe.x86_64
qemu-img-6.2.0-11.module+el8.6.0+14707+5aa4b42d.x86_64
nbdkit-1.24.0-4.module+el8.6.0+14480+c0a3aa0f.x86_64


How reproducible:
100%

Steps to Reproduce:
1.Convert a rhel8.6 guest from VMware via vddk7.0.3 by v2v
#  virt-v2v -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64 -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21 -ip /home/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64 -it vddk  -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21
[   1.6] Creating an overlay to protect the source from being modified
[   2.3] Opening the overlay
[   6.9] Inspecting the overlay
[  46.0] Checking for sufficient free disk space in the guest
[  46.0] Estimating space required on target for each disk
[  46.0] Converting Red Hat Enterprise Linux 8.6 (Ootpa) to run on KVM
virt-v2v: error: no installed kernel packages were found.

This probably indicates that virt-v2v was unable to inspect this guest 
properly.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


2.Convert a rhel8.6 guest from VMware via vddk6.7 by v2v

#  virt-v2v -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64 -it vddk -io vddk-libdir=/home/vddk6.7 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21 -ip /home/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64 -it vddk  -io vddk-libdir=/home/vddk6.7 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21
[   1.6] Creating an overlay to protect the source from being modified
[   2.2] Opening the overlay
[   9.5] Inspecting the overlay
[  48.5] Checking for sufficient free disk space in the guest
[  48.5] Estimating space required on target for each disk
[  48.5] Converting Red Hat Enterprise Linux 8.6 (Ootpa) to run on KVM
virt-v2v: error: no installed kernel packages were found.

This probably indicates that virt-v2v was unable to inspect this guest 
properly.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


3.Convert a rhel8.6 guest from VMware without vddk by v2v
virt-v2v -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64  -ip /home/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64

[   2.6] Creating an overlay to protect the source from being modified
[   3.6] Opening the overlay
[  35.0] Inspecting the overlay
[ 296.6] Checking for sufficient free disk space in the guest
[ 296.6] Estimating space required on target for each disk
[ 296.6] Converting Red Hat Enterprise Linux 8.6 (Ootpa) to run on KVM
virt-v2v: error: no installed kernel packages were found.

This probably indicates that virt-v2v was unable to inspect this guest 
properly.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


Actual results:
As described above.

Expected results:
Virt-v2v can convert rhel8.6 guest from VMware on rhel8.6

Additional info:
1.Virt-v2v can convert rhel8.5 guest from VMware on rhel8.6

# virt-v2v -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.5-x86_64 -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21 -ip /home/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.5-x86_64 -it vddk  -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21
[   1.6] Creating an overlay to protect the source from being modified
[   2.3] Opening the overlay
[   6.6] Inspecting the overlay
[  14.0] Checking for sufficient free disk space in the guest
[  14.0] Estimating space required on target for each disk
[  14.0] Converting Red Hat Enterprise Linux 8.5 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  87.2] Mapping filesystem data to avoid copying unused and blank areas
[  87.7] Closing the overlay
[  87.9] Assigning disks to buses
[  87.9] Checking if the guest needs BIOS or UEFI to boot
[  87.9] Initializing the target -o libvirt -os default
[  88.0] Copying disk 1/1 to /var/lib/libvirt/images/esx7.0-rhel8.5-x86_64-sda (raw)
    (60.22/100%)


2.Can't reproduce the bug on rhel9.1
virt-v2v-2.0.5-1.el9.x86_64
libguestfs-1.48.2-2.el9.x86_64
guestfs-tools-1.48.1-1.el9.x86_64
nbdkit-server-1.30.5-1.el9.x86_64
libnbd-1.12.2-1.el9.x86_64

Comment 2 Richard W.M. Jones 2022-05-24 08:03:42 UTC
This looks like a probable duplicate of bug 2089609.

The cause of this bug is:

error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in 
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in 
librpm returned 0 installed packages

which looks similar to bug 2038786.

Which version of "rpm" is installed on the host?  (I want to check
that it isn't bug 1965147)

Comment 3 Richard W.M. Jones 2022-05-24 08:05:45 UTC
*** Bug 2089609 has been marked as a duplicate of this bug. ***

Comment 6 Richard W.M. Jones 2022-05-24 08:48:18 UTC
Version of rpm on the host is: rpm-4.14.3-23.el8.x86_64

Comment 7 Xiaodai Wang 2022-05-24 09:40:53 UTC
This is the good log on RHEL9.1.

chroot: /sysroot: running 'librpm'
warning: Found bdb_ro Packages database while attempting bdb backend: using bdb_ro backend.
librpm returned 1381 installed packages

I didn't see the warning in the failure log.

The warning might be worth suspecting.

Comment 10 Richard W.M. Jones 2022-05-24 19:02:13 UTC
(In reply to Xiaodai Wang from comment #7)
> This is the good log on RHEL9.1.
> 
> chroot: /sysroot: running 'librpm'
> warning: Found bdb_ro Packages database while attempting bdb backend: using
> bdb_ro backend.
> librpm returned 1381 installed packages
> 
> I didn't see the warning in the failure log.
> 
> The warning might be worth suspecting.

I think this warning is normal when inspecting a RHEL <= 8 guest on a RHEL 9
host (ie. bug 2038786).  It happens because the newer RPM in RHEL 9
supports multiple RPM database backends, and to read RHEL <= 8 guests it must
use the non-default bdb_ro backend, so it emits a warning.

However, this bug happens on a RHEL 8 host, which does not have
multiple RPM database backends, so no such warning is expected.

Comment 11 Xiaodai Wang 2022-05-25 03:12:43 UTC
I cloned the guest on VMware and could reproduce the issue with the
cloned guest.  Then I logged in to the guest and rebuilt the db with
'rpmdb --rebuilddb', and the issue was gone.  So I think this is not a blocker.

@mxie Was the guest upgraded from other rhel version to rhel8.6? or Did
you create it by a fresh installation?

Comment 12 Richard W.M. Jones 2022-05-25 13:37:29 UTC
It's very hard to reproduce this bug.  I can sometimes reproduce it
using the VMware server & guest supplied and locally running virt-v2v,
but I was never able to make a completely local reproducer.

Nevertheless I think it is a real bug in libguestfs, and my analysis
is below.  It's caused by a whole cascade of issues.

Firstly, as background: in RHEL <= 8, RPM used the Berkeley DB format
database ("BDB").  In RHEL >= 9 it uses an SQLite database, because of
licensing changes made to BDB by Oracle.  The RHEL 9 RPM is able to
read (read-only) the old BDB format, but it does so with custom C code
which hand-parses the BDB format.  Oracle's BDB code is no longer used
in RHEL 9.

In pictures, in RHEL 8:

  libguestfs -> librpm -> BDB -> database file

In RHEL 9:

  libguestfs -> librpm -> custom parser -> database file of RHEL <= 8 guest
                    \
                     ---> sqlite -> database file of RHEL 9+ guest

In RHEL 8 librpm, we have this code for opening the RPM database:

  https://github.com/rpm-software-management/rpm/blob/061ba962297eba71ecb1b45a4133cbbd86f8450e/lib/backend/db3.c#L492-L510

dbenv->open is a call into this BDB function:

  https://github.com/berkeleydb/libdb/blob/5b7b02ae052442626af54c176335b67ecc613a30/src/db/db_open.c#L52

  #0  __db_open (dbp=dbp@entry=0x5555557dc0c0, ip=0x5555557b4b28, txn=0x0, 
      fname=fname@entry=0x7ffff7bae6f3 "Packages", dname=dname@entry=0x0, 
      type=type@entry=DB_UNKNOWN, flags=1024, mode=420, meta_pgno=0)
      at ../../src/db/db_open.c:61
  #1  0x00007ffff7251298 in __db_open_pp (dbp=0x5555557dc0c0, 
      txn=<optimized out>, fname=0x7ffff7bae6f3 "Packages", dname=0x0, 
      type=DB_UNKNOWN, flags=1024, mode=420) at ../../src/db/db_iface.c:1193
  #2  0x00007ffff7b6a67b in db3_dbiOpen () from /lib64/librpm.so.8

It's not very clear what exactly in __db_open fails and returns
EAGAIN, but my guess would be it's failing when trying to lock the BDB
file.  Whatever the reason, dbenv->open does fail and returns EAGAIN,
and the librpm code will jump to the errxit: label and return.
Eventually the top level call into librpm (rpmtsInitIterator) should
return NULL.
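The "Resource temporarily unavailable" errors above are errno 11 (EAGAIN), which is consistent with the failed-lock guess. As a minimal, self-contained Python sketch (unrelated to librpm's actual locking internals, which use BDB's own machinery), a non-blocking lock attempt on an already-locked file fails with exactly that errno:

```python
import errno
import fcntl
import tempfile

# Take an exclusive flock on a file, then show that a second
# non-blocking lock attempt through another file descriptor fails
# with EAGAIN -- errno 11, "Resource temporarily unavailable",
# the same errno reported in the db5 errors in this bug.
with tempfile.NamedTemporaryFile() as holder:
    fcntl.flock(holder, fcntl.LOCK_EX)           # first lock succeeds
    with open(holder.name, "rb") as second:
        try:
            fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
            err = None                           # would mean no conflict
        except OSError as e:
            err = e.errno                        # conflict -> EAGAIN

print(err == errno.EAGAIN)  # True
```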

libguestfs however ignores the error and continues:

  https://github.com/libguestfs/libguestfs/blob/9e69a38d8234db95d6786422a7a43962f7bac352/daemon/rpm-c.c#L101

The end result is that we iterate over a NULL iterator.  Surprisingly
librpm doesn't crash.  In fact rpmdbNextIterator contains a check
where if the iterator passed in == NULL, then it just returns NULL:

  https://github.com/rpm-software-management/rpm/blob/061ba962297eba71ecb1b45a4133cbbd86f8450e/lib/rpmdb.c#L1467

But we return a zero length list of applications (which is impossible
for a normal RPM-based guest).

So this is firmly a bug in libguestfs because of our lack of any
effort at error checking.
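The failure mode described here (a NULL iterator silently yielding an empty package list instead of an error) can be modelled in a few lines. This is a hypothetical Python sketch with stand-in names, not the real librpm C API:

```python
# Models the pattern above: the "init iterator" call returns None (NULL)
# on failure, and the "next" call quietly returns None for a None
# iterator, so the loop produces zero packages instead of an error.

def rpmts_init_iterator(db_ok):
    """Stand-in for rpmtsInitIterator(): None (NULL) on db open failure."""
    return iter(["kernel", "bash"]) if db_ok else None

def next_pkg(mi):
    """Stand-in for rpmdbNextIterator(): a NULL iterator just returns NULL."""
    if mi is None:
        return None
    return next(mi, None)

def list_pkgs(db_ok):
    mi = rpmts_init_iterator(db_ok)   # error here is never checked
    pkgs = []
    while (h := next_pkg(mi)) is not None:
        pkgs.append(h)
    return pkgs

print(list_pkgs(True))   # ['kernel', 'bash']
print(list_pkgs(False))  # [] -- the open error became "0 packages"
```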

However it's also a kind of weird problem with this guest, since the
database in this guest (or maybe something about the filesystem?) is
causing the database open or lock to return the EAGAIN error.

If we fix the error checking bug in libguestfs then virt-v2v would
still fail on this guest, but we'd get a clearer error message.

It may be that what we should do in this situation is to run "rpmdb
--rebuilddb" (as suggested by Xiaodai) and try again.  Without being
able to reproduce this locally it's hard for me to test that theory.

Anyway I can come up with a patch to improve libguestfs error
checking, and another patch for the rebuilddb thing, and we can try.
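The rebuild-and-retry idea can be sketched as below. This is a hypothetical model of the approach, not the actual virt-v2v patch; `FakeGuest` and its methods are stand-ins invented for illustration:

```python
# Sketch of the retry logic: if inspection returns zero packages --
# impossible for a healthy RPM-based guest -- rebuild the RPM database
# (the 'rpmdb --rebuilddb' suggestion) and try exactly once more.

class FakeGuest:
    """Stand-in for a guest whose RPM db is corrupt until rebuilt."""
    def __init__(self):
        self.db_ok = False

    def list_packages(self):
        return ["kernel", "rpm", "bash"] if self.db_ok else []

    def rebuild_db(self):
        # models running 'rpmdb --rebuilddb' inside the guest
        self.db_ok = True

def list_applications(guest):
    apps = guest.list_packages()
    if not apps:
        guest.rebuild_db()            # one recovery attempt
        apps = guest.list_packages()  # retry after the rebuild
    return apps

print(list_applications(FakeGuest()))  # ['kernel', 'rpm', 'bash']
```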

This particular bug probably doesn't affect RHEL 9.  As explained
above, RHEL 9 uses custom parsing code:

https://github.com/rpm-software-management/rpm/blob/master/lib/backend/bdb_ro.c

This code doesn't try to do anything fancy with locks, and only does a
plain "open" on the file, and opens the Packages file directly (not
the __db* files), so it seems unlikely that anything could return
EAGAIN here.

Comment 13 Richard W.M. Jones 2022-05-25 16:19:24 UTC
Proposing two loosely related patches to fix this:

https://listman.redhat.com/archives/libguestfs/2022-May/028981.html
https://listman.redhat.com/archives/libguestfs/2022-May/028980.html

Comment 14 mxie@redhat.com 2022-05-26 02:15:06 UTC
(In reply to Xiaodai Wang from comment #11)
> I cloned the guest on VMWare and can reproduce it by the cloned guest.
> Then I logged in the guest and rebuild the db by 'rpmdb --rebuilddb' and
> the issue has gone. So I think this is not a blocker.
> 
> @mxie Was the guest upgraded from other rhel version to rhel8.6? or Did
> you create it by a fresh installation?

I remember esx7.0-rhel8.6-x86_64 was freshly installed last month; I just installed the OS with a GUI as usual.  I can't reproduce the bug on esx6.7-rhel8.6-x86_64, which was created by vwu, and I have no idea what happened to esx7.0-rhel8.6-x86_64.  Xiaodai, did you do something to esx7.0-rhel8.6-x86_64?  I can't reproduce the bug on it now:

# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA   esx6.7-rhel8.6-x86_64 -ip /home/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel8.6-x86_64 -it vddk  -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   1.9] Creating an overlay to protect the source from being modified
[   3.0] Opening the overlay
[   7.8] Inspecting the overlay
[  19.6] Checking for sufficient free disk space in the guest
[  19.6] Estimating space required on target for each disk
[  19.6] Converting Red Hat Enterprise Linux 8.6 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 134.0] Mapping filesystem data to avoid copying unused and blank areas
[ 135.1] Closing the overlay
[ 135.4] Assigning disks to buses
[ 135.4] Checking if the guest needs BIOS or UEFI to boot
[ 135.4] Initializing the target -o libvirt -os default
[ 135.4] Copying disk 1/1 to /var/lib/libvirt/images/esx6.7-rhel8.6-x86_64-sda (raw)
    (100.00/100%)
[ 327.6] Creating output metadata
[ 327.7] Finishing off


# virt-v2v -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64 -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21 -ip /home/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.227.27/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.6-x86_64 -it vddk  -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21
[   1.7] Creating an overlay to protect the source from being modified
[   2.8] Opening the overlay
[   7.3] Inspecting the overlay
[  14.7] Checking for sufficient free disk space in the guest
[  14.7] Estimating space required on target for each disk
[  14.7] Converting Red Hat Enterprise Linux 8.6 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  99.7] Mapping filesystem data to avoid copying unused and blank areas
[ 100.8] Closing the overlay
[ 101.0] Assigning disks to buses
[ 101.0] Checking if the guest needs BIOS or UEFI to boot
[ 101.0] Initializing the target -o libvirt -os default
[ 101.1] Copying disk 1/1 to /var/lib/libvirt/images/esx7.0-rhel8.6-x86_64-sda (raw)
^C  (6.05/100%)

Comment 15 Richard W.M. Jones 2022-05-26 07:30:34 UTC
> I have no idea what happened to esx7.0-rhel8.6-x86_64

The bug is caused by corruption in the RPM database, which can happen in
several ways (eg. abrupt shutdown during some RPM operation), but is not
easy to reproduce.  I could not find a way to deliberately corrupt files
in the database that was also recoverable using "rpmdb --rebuilddb".  So
I don't think it will now be easy to reproduce the bug unless you have
the original "esx7.0-rhel8.6-x86_64" guest.

Nevertheless it's still a real bug and I've posted a possible fix upstream.

Comment 18 Richard W.M. Jones 2022-05-26 09:30:00 UTC
Upstream in:

https://github.com/libguestfs/libguestfs/commit/488245ed6c0c5db282ec7fed646e8bc00ce0d487
https://github.com/libguestfs/virt-v2v/commit/31bf5db25bcfd8a9f5a48cc0523abae28861de9a

For RHEL 8 (this bug) only the virt-v2v fix is actually needed, so we don't
need to clone this bug for libguestfs.

Comment 19 Xiaodai Wang 2022-05-27 04:01:03 UTC
I reproduced the bug with an image located on our NFS server.

# LIBGUESTFS_BACKEND=direct virt-v2v -i disk v2v/Auto-kvm-rhel7.1-sparseqcow2.img -o null
[   0.0] Opening the source -i disk v2v/Auto-kvm-rhel7.1-sparseqcow2.img
[   0.0] Creating an overlay to protect the source from being modified
[   0.1] Opening the overlay
[   5.3] Inspecting the overlay
[  47.8] Checking for sufficient free disk space in the guest
[  47.8] Estimating space required on target for each disk
[  47.8] Converting Red Hat Enterprise Linux Server 7.1 (Maipo) to run on KVM
virt-v2v: error: no installed kernel packages were found.

This probably indicates that virt-v2v was unable to inspect this guest 
properly.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
  

Then I updated virt-v2v to virt-v2v-1.42.0-20.module+el8.7.0+15439+b226b934.x86_64.

Repeating the command:

libguestfs: trace: v2v: inspect_get_package_format = "rpm"
libguestfs: trace: v2v: internal_list_rpm_applications
guestfsd: => inspect_get_package_format (0x1e5) took 0.01 secs
guestfsd: <= internal_list_rpm_applications (0x1fe) request length 40 bytes
command: mount returned 0
chroot: /sysroot: running 'librpm'
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in 
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in 
librpm returned 0 installed packages
guestfsd: => internal_list_rpm_applications (0x1fe) took 36.14 secs
libguestfs: trace: v2v: internal_list_rpm_applications = <struct guestfs_application2_list(0)>
libguestfs: trace: v2v: inspect_list_applications2 = <struct guestfs_application2_list(0)>
no applications returned
rebuilding RPM database and retrying ...
libguestfs: trace: v2v: sh "rpmdb --rebuilddb"
...
libguestfs: trace: v2v: inspect_get_package_format = "rpm"
libguestfs: trace: v2v: internal_list_rpm_applications
guestfsd: => inspect_get_package_format (0x1e5) took 0.02 secs
guestfsd: <= internal_list_rpm_applications (0x1fe) request length 40 bytes
chroot: /sysroot: running 'librpm'
librpm returned 1214 installed packages


As shown in the logs, the packages are detected successfully after rebuilding
the rpmdb, so the patch fixes this issue.

Comment 22 Richard W.M. Jones 2022-05-27 07:48:17 UTC
Requesting zstream 8.6.0.z for this bug.

The bug was introduced in RHEL 8.6 when we switched over to using librpm
to parse the RPM database (in bug 1836094).

Comment 23 Xiaodai Wang 2022-05-30 03:10:17 UTC
Based on comment 19 and verifying again, the fix works well for this issue.

# LIBGUESTFS_BACKEND=direct virt-v2v -i disk v2v/Auto-kvm-rhel7.1-sparseqcow2.img -o null 
[   0.0] Opening the source -i disk v2v/Auto-kvm-rhel7.1-sparseqcow2.img
[   0.0] Creating an overlay to protect the source from being modified
[   0.1] Opening the overlay
[  11.1] Inspecting the overlay
[ 101.5] Checking for sufficient free disk space in the guest
[ 101.5] Estimating space required on target for each disk
[ 101.5] Converting Red Hat Enterprise Linux Server 7.1 (Maipo) to run on KVM
virt-v2v: warning: /files/boot/grub2/device.map/hd0 references unknown 
device "vda".  You may have to fix this entry manually after conversion.
virt-v2v: This guest has virtio drivers installed.
[ 445.8] Mapping filesystem data to avoid copying unused and blank areas
[ 447.5] Closing the overlay
[ 447.6] Assigning disks to buses
[ 447.6] Checking if the guest needs BIOS or UEFI to boot
[ 447.6] Initializing the target -o null
[ 447.6] Copying disk 1/1 to qemu URI json:{ "file.driver": "null-co", "file.size": "1E" } (raw)
    (100.00/100%)
[ 641.6] Creating output metadata
[ 641.6] Finishing off


So I'm moving the bug to VERIFIED.

Comment 27 errata-xmlrpc 2022-11-08 09:19:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7472

