Bug 1831812 - A MODRDN operation on a user in a large static group can consume all available DB locks.
Keywords:
Status: CLOSED DUPLICATE of bug 1812286
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: 389-ds-base
Version: 8.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Simon Pichugin
QA Contact: RHDS QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-05 16:57 UTC by Têko Mihinto
Modified: 2023-09-07 23:03 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-20 12:19:58 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Knowledge Base (Solution)
ID: 5102151
Private: 0
Priority: None
Status: None
Summary: None
Last Updated: 2021-02-10 16:52:27 UTC

Description Têko Mihinto 2020-05-05 16:57:10 UTC
Description of problem:
* A customer has about 250K entries in a large static group.

* The MemberOf Plugin is enabled and is configured to process all suffixes.

* The number of DB locks is set to 100K (see the sketch after the plugin configuration below):
nsslapd-db-locks: 100000

* The Referential Integrity Plugin is enabled and configured as below:
==================================================
dn: cn=referential integrity postoperation,cn=plugins,cn=config
objectClass: top
objectClass: nsSlapdPlugin
objectClass: extensibleObject
cn: referential integrity postoperation
nsslapd-pluginPath: libreferint-plugin
nsslapd-pluginInitfunc: referint_postop_init
nsslapd-pluginType: betxnpostoperation
nsslapd-pluginEnabled: on
nsslapd-pluginprecedence: 40
referint-update-delay: 0
referint-logfile: /var/log/dirsrv/slapd-<INSTANCE_NAME>/referint
referint-membership-attr: member
referint-membership-attr: uniquemember
referint-membership-attr: owner
referint-membership-attr: seeAlso
referint-membership-attr: memberuid
nsslapd-plugin-depends-on-type: database
nsslapd-pluginId: referint
nsslapd-pluginVersion: 1.3.9.1
nsslapd-pluginVendor: 389 Project
nsslapd-pluginDescription: referential integrity plugin
...
==================================================
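
For reference, a minimal sketch of how the nsslapd-db-locks value above can be inspected and raised. It assumes the classic ldbm config entry used by 389-ds-base 1.3.x (on 1.4.x with the bdb backend the attribute may live under a cn=bdb sub-entry instead), and the value 500000 is purely illustrative; a restart is required for the new limit to take effect:
==================================================
# Check the configured lock limit (simple bind as Directory Manager).
$ ldapsearch -x -D "cn=Directory Manager" -W \
    -b "cn=config,cn=ldbm database,cn=plugins,cn=config" \
    -s base '(objectClass=*)' nsslapd-db-locks

# Raise the limit, then restart so the new value is applied.
$ ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 500000
EOF
$ systemctl restart dirsrv@<INSTANCE_NAME>
==================================================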


Version-Release number of selected component (if applicable):
Customer RHEL and 389-ds versions:
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
$ grep 389-ds-base-1 installed-rpms
389-ds-base-1.3.9.1-12.el7_7.x86_64                         Fri Dec  6 13:05:57 2019
$


How reproducible:
Always, when the Referential Integrity Plugin is enabled.
Not reproducible if the RI plugin is disabled.

Steps to Reproduce:
1. Mimic the customer configuration
2. Run a MODRDN operation on a user in the large static group (see the example below).
3. Monitor the number of DB locks and check the errors log.
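
A minimal sketch of step 2, with a hypothetical member DN standing in for one of the customer's 250K group members:
==================================================
# rename.ldif -- MODRDN on one member of the large static group (DN is hypothetical).
dn: uid=user0001,ou=People,dc=example,dc=com
changetype: modrdn
newrdn: uid=user0001-renamed
deleteoldrdn: 1

# Apply it, then watch the errors log for BDB2055 messages (step 3).
$ ldapmodify -x -D "cn=Directory Manager" -W -f rename.ldif
$ tail -f /var/log/dirsrv/slapd-<INSTANCE_NAME>/errors
==================================================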


Actual results:
The LDAP instance runs out of DB locks:
==================================================
...
[05/May/2020:14:03:10.389741470 +0200] - ERR - id2entry - db error 12 (Cannot allocate memory)
[05/May/2020:14:03:10.391192839 +0200] - ERR - ldbm_back_next_search_entry_ext - next_search_entry db err 12
[05/May/2020:14:03:10.392725298 +0200] - ERR - libdb - BDB2055 Lock table is out of available lock entries
[05/May/2020:14:03:10.394284112 +0200] - ERR - id2entry - db error 12 (Cannot allocate memory)
[05/May/2020:14:03:10.395875506 +0200] - ERR - ldbm_back_next_search_entry_ext - next_search_entry db err 12
[05/May/2020:14:03:10.397305021 +0200] - ERR - libdb - BDB2055 Lock table is out of available lock entries
...
==================================================


The server might also become unresponsive and cannot be stopped gracefully.


Expected results:
Either
* Monitor the number of current DB locks and exit gracefully when getting close to the limit (see the sketch below),
or
* Automatically adjust the upper limit (more complicated, as a restart is required for the new value to take effect).
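
For the monitoring option, the lock usage is already exposed read-only through the ldbm database monitor; a minimal sketch, assuming the usual monitor DN:
==================================================
$ ldapsearch -x -D "cn=Directory Manager" -W \
    -b "cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config" \
    -s base '(objectClass=*)' nsslapd-db-current-locks
==================================================
Polling this value from outside the server is only a workaround; the RFE referenced under "Additional info" asks the server itself to act on it.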


Additional info:
* RFE - Monitor the current DB locks ( nsslapd-db-current-locks ).
    https://bugzilla.redhat.com/show_bug.cgi?id=1812286

* This behavior is somewhat expected when dealing with large groups:
    https://www.redhat.com/archives/freeipa-users/2016-February/msg00358.html

* Some customers are experiencing DB corruption issues, forcing them to reinitialize their large DBs across the topology.

Comment 5 Ludwig 2020-05-06 07:38:45 UTC
Could you enable logging of internal operations?
I suspect that one of the plugins performs some extensive searches and that they are done inside the txn of the MODRDN operation.
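
For reference, a minimal sketch of enabling internal-operation logging in the access log; nsslapd-accesslog-level is a bitmask where 256 is the default access logging and 4 adds internal operations, so 260 enables both (expect noticeably larger access logs while this is on):
==================================================
dn: cn=config
changetype: modify
replace: nsslapd-accesslog-level
nsslapd-accesslog-level: 260
==================================================
Apply with ldapmodify against cn=config and revert to 256 once the traces have been captured.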

Comment 7 Ludwig 2020-05-06 08:50:38 UTC
Thanks Têko, sorry I had missed comment 2.

So this shows two things:
- memberOf does a search of all backends; not sure if this is necessary or can be prevented by config (Thierry?)
- it does substring searches for member, uniquemember, owner, seeAlso, memberuid, and I don't think they all have substring indexes. Is it necessary to include owner and seeAlso in the member attrs?

Comment 8 thierry bordaz 2020-05-06 09:18:08 UTC
Indeed, the config parameter memberOfAllBackends is likely turned 'on' (the default is 'off'), which explains the lookups across all backends. Whether that is necessary depends on how the data are spread across backends. Also, in that case the substring indexes must be tuned on all backends: if the attributes are not indexed, it leads to an unindexed search under a txn!
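
A minimal sketch of the two checks above, assuming the default MemberOf plugin entry and a backend named userRoot (substitute the real backend names):
==================================================
# Is memberOf configured to search every backend?
$ ldapsearch -x -D "cn=Directory Manager" -W \
    -b "cn=MemberOf Plugin,cn=plugins,cn=config" -s base \
    '(objectClass=*)' memberOfAllBackends memberOfAttr memberOfGroupAttr

# Which index types are defined for the membership attributes on this backend?
$ ldapsearch -x -D "cn=Directory Manager" -W \
    -b "cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
    '(|(cn=member)(cn=uniquemember)(cn=owner)(cn=seealso)(cn=memberuid))' \
    cn nsIndexType
==================================================
Attributes that come back without "nsIndexType: sub" are the ones whose substring lookups end up unindexed inside the MODRDN txn, as described above.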

Comment 14 Simon Pichugin 2021-04-20 12:19:58 UTC
The change will be provided in bz1812286.

Closing this as duplicate.

*** This bug has been marked as a duplicate of bug 1812286 ***

