Bug 2038786 - cannot open Packages index using db5 - Resource temporarily unavailable
Summary: cannot open Packages index using db5 - Resource temporarily unavailable
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virt-v2v
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: tingting zheng
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-01-10 05:53 UTC by Xiaodai Wang
Modified: 2022-05-05 08:56 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-05 08:56:43 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-107217 0 None None None 2022-01-10 05:58:35 UTC

Description Xiaodai Wang 2022-01-10 05:53:14 UTC
Description of problem:
cannot open Packages index using db5 - Resource temporarily unavailable

Version-Release number of selected component (if applicable):
virt-v2v-1.42.0-18.module+el8.6.0+13447+4b5d0856.x86_64
libguestfs-1.44.0-5.module+el8.6.0+13732+b2b9b31d.x86_64
rpm-4.14.3-20.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Put an image file on an NFS server and mount the export to a local directory.
2. Run the virt-v2v command to convert the disk.
# ll v2v/Auto-kvm-rhel7.1-sparseqcow2.img
-rw-r--r--. 1 nobody nobody 3936354304 Jun 19  2015 v2v/Auto-kvm-rhel7.1-sparseqcow2.img
# virt-v2v -i disk v2v/Auto-kvm-rhel7.1-sparseqcow2.img -o null
[   0.0] Opening the source -i disk v2v/Auto-kvm-rhel7.1-sparseqcow2.img
[   0.1] Creating an overlay to protect the source from being modified
[   0.2] Opening the overlay
[   5.5] Inspecting the overlay
[  45.2] Checking for sufficient free disk space in the guest
[  45.2] Estimating space required on target for each disk
[  45.2] Converting Red Hat Enterprise Linux Server 7.1 (Maipo) to run on KVM
virt-v2v: error: no installed kernel packages were found.

This probably indicates that virt-v2v was unable to inspect this guest 
properly.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
3. Check the virt-v2v debug log (enable it with '-v -x').


Actual results:
libguestfs: trace: v2v: inspect_get_package_format = "rpm"
libguestfs: trace: v2v: internal_list_rpm_applications
guestfsd: => inspect_get_package_format (0x1e5) took 0.01 secs
guestfsd: <= internal_list_rpm_applications (0x1fe) request length 40 bytes
command: mount returned 0
chroot: /sysroot: running 'librpm'
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in 
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in 
librpm returned 0 installed packages

Expected results:
librpm should open the rpmdb and list the guest's installed packages successfully.

Additional info:
1) I can't reproduce this with other images in the same NFS directory; only 'Auto-kvm-rhel7.1-sparseqcow2.img' has the problem. When I first hit the issue, copying the image from NFS to a local directory made it disappear, but now the same steps also reproduce it locally; I'm not sure what changed.
Since it reproduces after copying the image to a local directory, I imported the disk into a VM with virt-install, and the system booted fine. After powering it off and re-running the virt-v2v command, the issue was gone.
It can still be reproduced by copying a fresh image, or by running directly against the NFS directory.
Maybe the rpmdb files in the guest are corrupt, and they get repaired on a fresh boot?
2) A similar scenario generates the same error:
   2.1 Remove the rpmdb environment files:
       # rm /var/lib/rpm/__db.*
   2.2 Create a blank file in their place:
       # touch /var/lib/rpm/__db.001
   2.3 Power off the guest and convert it with virt-v2v.
   2.4 The same error is reported.
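The same broken state can be staged without booting the guest; a sketch using guestfish (the image path is a placeholder):

```shell
# Stage the broken-rpmdb state offline (guest.img is a placeholder path).
guestfish -a guest.img -i <<'EOF'
glob rm /var/lib/rpm/__db.*
touch /var/lib/rpm/__db.001
EOF
```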

Comment 2 Richard W.M. Jones 2022-01-10 10:29:14 UTC
I cannot reproduce this bug myself, and I don't understand how NFS could
be the cause of this.  My attempt to reproduce it was:

  $ virt-builder rhel-7.1 --format=qcow2
  $ scp rhel-7.1.qcow2 nfs-server:/mnt

then from another machine which has the nfs server mounted:

  $ virt-inspector -a /mnt/rhel-7.1.qcow2

However this worked fine.

It seems to me most likely that the RPM database in this particular
guest is corrupt.  (It may not be the __db.* files, it may be the
actual database files like Packages etc.)

If the guest is bootable, you might try logging into it and doing:

  rpm --rebuilddb
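If booting the guest is inconvenient, the same repair can in principle be done offline with virt-customize (the image path is a placeholder):

```shell
# Rebuild the guest's rpmdb without booting it
# (guest.img is a placeholder path).
virt-customize -a guest.img --run-command 'rpm --rebuilddb'
```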

Comment 3 Richard W.M. Jones 2022-04-26 12:38:22 UTC
Xiaodai, if this bug happens with RHEL 9 host then I'd like to move it
to RHEL 9.

For RHEL 8 (since I couldn't reproduce it) I suggest if it only happens
in RHEL 8 and NOT in RHEL 9, then we should just close it.

If it's not reproducible at all, then also close it.

Comment 4 Xiaodai Wang 2022-05-05 08:45:16 UTC
(In reply to Richard W.M. Jones from comment #3)
> Xiaodai, if this bug happens with RHEL 9 host then I'd like to move it
> to RHEL 9.
> 
> For RHEL 8 (since I couldn't reproduce it) I suggest if it only happens
> in RHEL 8 and NOT in RHEL 9, then we should just close it.
> 
> If it's not reproducible at all, then also close it.

Yes, this issue cannot be reproduced on RHEL 9.
Because it's a minor issue and has a workaround, I agree that we can
close it. Thanks.

Comment 5 Richard W.M. Jones 2022-05-05 08:56:43 UTC
Closing in RHEL 8.  If the bug starts happening in RHEL 9, please open a bug about that.

