Bug 1245410 - rpm command stops working on large systems
Summary: rpm command stops working on large systems
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libdb
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 7.2
Assignee: Jan Staněk
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks: 1118366 1185045 1200716 1252514
 
Reported: 2015-07-22 02:57 UTC by George Beshers
Modified: 2015-11-19 14:51 UTC
CC List: 18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Due to the way memory was managed in libdb, the rpm command did not work on systems with a large number of CPUs. This update changes how memory management takes the number of CPUs into account. As a result, the rpm command works as expected on large systems.
Clone Of:
Environment:
Last Closed: 2015-11-19 14:51:41 UTC
Target Upstream Version:
Embargoed:


Attachments
Limit libdb's 'understanding' to 1024cpus in function __os_cpu_count() (381 bytes, patch)
2015-08-24 20:50 UTC, George Beshers


Links
System                       ID              Private  Priority  Status        Summary               Last Updated
IBM Linux Technology Center  129434          0        None      None          None                  2019-06-10 12:14:01 UTC
Red Hat Product Errata       RHBA-2015:2163  0        normal    SHIPPED_LIVE  libdb bug fix update  2015-11-19 11:53:56 UTC

Internal Links: 1664031

Description George Beshers 2015-07-22 02:57:24 UTC
Description of problem:

   On large systems, typically >=1024 cpus, the rpm command fails.

[root@harp31-sys ~]# rpm -qa
error: db5 error(-30973) from dbenv->open: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
error: db5 error(-30973) from dbenv->open: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm

Test1: reboot to a single blade and 'rpm -qa' works.  Therefore
    something is failing to be generated in the multiprocessor
    boot sequence (much more likely than in shutdown).

Test2: cp the __db00[123] files from a backup to /var/lib/rpm and
     'rpm -qa' works.  There is no data corruption per se.
     I did wonder if the __db00* files were being removed in the
     shutdown sequence, but this is not the case.

Nate tracked it down to:

It looks like the issue is in compat-db 4.7.25

On a large system dbenv->lk_partitions is set to cpu * 10.
Later, in __lock_region_init, the program allocates a few objects per lk_partition, and one of those small allocations eventually fails.

int
__lock_env_create(dbenv)
        DB_ENV *dbenv;
{
     ...
        /*
         * Default to 10 partitions per cpu.  This seems to be near
         * the point of diminishing returns on Xeon type processors.
         * Cpu count often returns the number of hyper threads and if
         * there is only one CPU you probably do not want to run partitions.
         */
        cpu = __os_cpu_count();
        dbenv->lk_partitions = cpu > 1 ? 10 * cpu : 1;

        return (0);
}
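
For a sense of scale, the small standalone program below (illustrative only, not libdb code) applies the same 10-partitions-per-CPU formula to the local machine's online CPU count; on a 2048-CPU system it asks for 20480 lock partitions, and the per-partition bookkeeping then has to fit inside libdb's fixed-size lock region.

/* Illustrative only, not libdb code: apply the same
 * "10 partitions per CPU" formula to this machine. */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        long cpu, lk_partitions;

        cpu = sysconf(_SC_NPROCESSORS_ONLN);    /* online CPUs */
        if (cpu < 1)
                cpu = 1;
        lk_partitions = cpu > 1 ? 10 * cpu : 1;

        printf("CPUs online:     %ld\n", cpu);
        printf("lock partitions: %ld\n", lk_partitions);
        /* A 2048-CPU system gets 20480 partitions; the per-partition
         * allocations no longer fit in the default lock region. */
        return (0);
}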


Version-Release number of selected component (if applicable):

    Also fails in 7.1 and current 7.2 development.


How reproducible:

    Always on a large enough system.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Russ Anderson 2015-07-22 14:36:39 UTC
Some additional data: This works up to 1024 cpus, but fails starting around
1200 cpus (depending on when the memory allocation fails).

RHEL6 does not have this problem because it does not use compat-db.  Likewise,
other vendors (at least the ones we checked) do not use compat-db and therefore
do not hit this problem.

Comment 6 George Beshers 2015-08-24 20:48:15 UTC
This is actually a problem with libdb, and it appears to be present
in the upstream BerkeleyDB maintained by Oracle as well.

The problem is that the number of cpus is used to calculate the desired
number of mutexes to allocate -- no one was expecting systems larger than
1024 cpus when the code was written.

The mutexes are allocated in a region of memory, which is limited to 30588928
bytes, and this gets used up.  I tried fiddling with raising the __db_region::max
field, but something else went wrong --- I *think* there is an assumption
about the region size built into an existing database.  However, when I tried
changing the region size and using initdb, that didn't work either, and I
dropped it.

The simplest workaround is to limit the number of cpus the database sees
in the function __os_cpu_count().
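
For illustration only, a capped __os_cpu_count() might look roughly like the sketch below. This is not the attached 381-byte patch, and it assumes the stock implementation derives the count from sysconf(), which may not match the real libdb sources in detail.

/* Sketch of the workaround idea only, not the attached patch.
 * Assumes the count comes from sysconf(); the real libdb
 * implementation may obtain it differently. */
#include <unistd.h>

#define MAX_CPUS_SEEN   1024    /* hide CPUs beyond 1024 from libdb */

unsigned int
__os_cpu_count(void)
{
        long cpu;

        cpu = sysconf(_SC_NPROCESSORS_ONLN);    /* online CPUs */
        if (cpu < 1)
                cpu = 1;
        if (cpu > MAX_CPUS_SEEN)
                cpu = MAX_CPUS_SEEN;
        return ((unsigned int)cpu);
}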

Comment 7 George Beshers 2015-08-24 20:50:29 UTC
Created attachment 1066630 [details]
Limit libdb's 'understanding' to 1024cpus in function __os_cpu_count()

Comment 8 Honza Horak 2015-08-25 05:39:46 UTC
Good work, George. Do you have any feedback on whether this fixes the issue in the customer's case?

Comment 10 George Beshers 2015-08-25 15:20:15 UTC
Hi Honza,

Did you ask a question in a hidden comment?

George

Comment 11 Prarit Bhargava 2015-08-25 16:50:54 UTC
hannsj_uhl.com -- are you seeing this as well?

P.

Comment 12 George Beshers 2015-08-25 17:11:14 UTC
Prarit,

I think there is a question that I can't see; otherwise I don't
understand why the needinfo flag was reset after
I provided a workaround patch.

George

Comment 13 Joseph Kachuck 2015-08-26 13:50:16 UTC
Hello George,
Do you have any feedback on whether this fixes the client issue?

Thank You
Joe Kachuck

Comment 14 George Beshers 2015-08-26 14:03:20 UTC
The patch included works around the problem.
All it does is limit the number of cpus seen by libdb
to 1024, which I believe affects only SGI systems.
This allows rpm & yum to work properly with existing
rpm databases -- an important point, as packages are
often installed when the systems are booted to a single
blade.

A real fix is more involved and much riskier.
Also, the problem exists in the latest BerkeleyDB from
Oracle.

======

An interesting question going forward is which database
rpm will use in RHEL 8.

Comment 15 Prarit Bhargava 2015-08-26 14:08:17 UTC
This is a blocker for 7.2.  SGI & Red Hat have long support contracts with customers for greater than 1024 processor boxes.  We cannot break rpm.

P.

Comment 16 Honza Horak 2015-08-28 05:46:50 UTC
(In reply to George Beshers from comment #10)
> Did you ask a question in a hidden comment?

Sorry, my mistake. Please see comment #8, which simply asks whether we have any evidence that it really solves the issue. We don't have a machine like this, so we won't be able to reproduce it.

Comment 17 Honza Horak 2015-08-28 05:58:53 UTC
Also, if some testing builds help, we can provide some.

Comment 18 Jan Staněk 2015-08-28 07:09:33 UTC
Problem reported upstream [1], maybe they will have something to say.

[1] https://community.oracle.com/message/13274780#13274780

Comment 19 Prarit Bhargava 2015-08-28 11:55:02 UTC
(In reply to Honza Horak from comment #16)
> (In reply to George Beshers from comment #10)
> > Did you ask a question in a hidden comment?
> 
> Sorry, my mistake. Please see comment #8, which simply asks whether we have
> any evidence that it really solves the issue. We don't have a machine like
> this, so we won't be able to reproduce it.

George definitely has a machine like this ;) so he'll be able to test.  If you can do a test rpm for him he can grab it directly from the brew link.

P.

Comment 20 Frank Ramsay (HPE) 2015-08-28 12:47:56 UTC
I'm the other partner engineer for SGI, we can definitely test this if given an update.

Comment 21 Russ Anderson 2015-08-28 15:32:34 UTC
We have a system ready to test on.  When you have a test rpm just point us
at the brew link.  Thanks.

------------------------------------------
[root@harp31-sys ~]# yum clean all
error: db5 error(-30973) from dbenv->open: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:

Error: rpmdb open failed
[root@harp31-sys ~]# topology
System type: UV3000
System name: harp31-sys
Serial number: UV3-00000031
Partition number: 0
      32 Blades
    2048 CPUs (online: 0-2047)
      64 Nodes
 7809.94 GB Memory Total
  124.00 GB Max Memory on any Node
       1 BASE I/O Riser
       2 PCIe Slots
       2 Network Controllers
       2 Storage Controllers
       3 USB Controllers
       2 VGA GPUs

Comment 22 Jan Staněk 2015-08-31 12:12:34 UTC
OK, I built the latest libdb (for 7.2 devel) in brew with the patch applied. The task URL is [1]. Please let me know if you need anything else.

[1] http://brewweb.devel.redhat.com/brew/taskinfo?taskID=9767941

Comment 23 Jan Staněk 2015-08-31 12:59:59 UTC
And to remedy my mistake of providing an internal-only link to the build, I copied the test packages to [1]. To be clear, these are test packages and should not be taken as the final fix.

[1] https://jstanek.fedorapeople.org/libdb/

Comment 24 George Beshers 2015-08-31 18:41:24 UTC
Hi Jan,

First, I have access to brew builds (as does Frank Ramsay);
onsite engineers usually do.

Second, we have tested the rpms on a UV2 system
with 2048 cpus, and they solve the problem :).

Cheers,
George

Comment 25 George Beshers 2015-08-31 18:52:06 UTC
Will these rpms make the 7.2 beta release?

Comment 26 Honza Horak 2015-09-01 06:13:31 UTC
(In reply to George Beshers from comment #25)
> Will these rpms make the 7.2 beta release?

Unfortunately not, we're too late for 7.2 Beta.

Comment 27 Leos Pol 2015-09-03 10:35:49 UTC
Granting qa_ack. We have no such hardware, so we'll do SanityOnly and use the verification from comment 24.

Comment 30 Leos Pol 2015-09-08 08:14:11 UTC
RHQE - SanityOnly
Verified by OtherQE (comment 24)

Comment 31 errata-xmlrpc 2015-11-19 14:51:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2163.html

