Bug 1335618 - Server ram sanity checks work in isolation
Summary: Server ram sanity checks work in isolation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Noriko Hosoi
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-12 16:30 UTC by Noriko Hosoi
Modified: 2020-09-13 21:39 UTC
CC List: 3 users

Fixed In Version: 389-ds-base-1.3.5.4-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 20:42:02 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System | ID | Private | Priority | Status | Summary | Last Updated
Github | 389ds 389-ds-base issues 1785 | 0 | None | None | None | 2020-09-13 21:39:54 UTC
Red Hat Product Errata | RHSA-2016:2594 | 0 | normal | SHIPPED_LIVE | Moderate: 389-ds-base security, bug fix, and enhancement update | 2016-11-03 12:11:08 UTC

Description Noriko Hosoi 2016-05-12 16:30:17 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/48617

Follow on from #48384

Now that we have working RAM and sanity checks, we need to make sure they work together properly.

Right now, as the server starts, we check each backend in isolation.

Let's say we have 1 GB of RAM and two backends that each request 768 MB.

Both would pass the sanity check, because each check looks at the free memory at that moment, independently of the other. Both backends then start and over time could allocate up to 1.5 GB, potentially causing swapping or, worse, OOM conditions.

Our backend sizing checks should validate the sum of all backends and all caches together before declaring the configuration "sane".

This will have to be done before backend initialization.
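
To make the arithmetic concrete, here is a minimal sketch (illustrative Python, using the hypothetical numbers from the example above) contrasting the current per-backend check with the combined check being proposed:

{{{
# Minimal sketch of the problem: each backend is checked against free
# memory in isolation, so both pass even though their combined request
# exceeds what the system can provide. Numbers match the example above
# (1 GB of RAM, two backends requesting 768 MB each).

MB = 1024 * 1024

free_memory = 1024 * MB
backend_requests = [768 * MB, 768 * MB]

# Current behaviour: each backend only compares its own request to free memory.
per_backend_ok = all(req <= free_memory for req in backend_requests)

# Proposed behaviour: validate the sum of all backend/cache requests.
combined_ok = sum(backend_requests) <= free_memory

print(per_backend_ok)  # True  -> each check passes in isolation
print(combined_ok)     # False -> together they exceed available RAM
}}}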

I propose:

* Remove the "fuzzy" second backend check that doesn't really work. It's not concrete.
* Move the db sizing checks out of the ldbm plugins.
* Expose a plugin function to return the cache sizings and values. We should also have backends expose their DN and entry cache sizes.
* Alter the server start-up procedure such that:

{{{
start slapd
load the plugins (but don't start the DBs / backends yet)
trigger each backend to "attempt" the autosizing procedure based on the admin's rules.
retrieve *all* the cache sizings
validate the sum of all caches
iff valid:
    start the backends and dbs
else:
    stop the server with lots of big warnings, and the complete set of information related to memory and cache sizings.
}}}
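
A minimal sketch of that start-up flow in Python, assuming hypothetical backend objects and attribute names (not the actual 389-ds-base plugin API), could look like this:

{{{
# Hypothetical sketch of the proposed start-up ordering. The object and
# attribute names (load, autosize, entry_cache_size, ...) are illustrative
# only; they are not the real 389-ds-base plugin interface.

def start_server(plugins, available_memory_bytes):
    # Load the plugins, but do not start the databases / backends yet.
    backends = [plugin.load() for plugin in plugins]

    # Let each backend attempt its autosizing procedure based on the admin's rules.
    for be in backends:
        be.autosize()

    # Retrieve *all* cache sizings (entry cache, DN cache, db cache, ...).
    total_cache = sum(be.entry_cache_size + be.dn_cache_size + be.db_cache_size
                      for be in backends)

    # Validate the sum of all caches before anything touches the databases.
    if total_cache <= available_memory_bytes:
        for be in backends:
            be.start()
    else:
        raise SystemExit(
            "Configured caches (%d B) exceed available memory (%d B); "
            "refusing to start. Review nsslapd-dbcachesize, "
            "nsslapd-cachememsize and nsslapd-dncachememsize."
            % (total_cache, available_memory_bytes))
}}}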

Comment 1 Noriko Hosoi 2016-05-12 16:31:50 UTC
For verification, please see the upstream comment linked below: set intentionally insane config parameters and check the error logs.
https://fedorahosted.org/389/ticket/48617#comment:8

Comment 3 Simon Pichugin 2016-08-01 17:20:37 UTC
Build tested:
389-ds-base-1.3.5.10-5.el7.x86_64

Verification steps:
1. Install Directory Server instance
2. Create one more backend via console
3. Stop the instance
4. Edit the dse.ldif file and change the nsslapd-cachememsize attribute on both backends so that the sum of the nsslapd-cachememsize values is bigger than the total RAM on the machine (see the example after these steps)
5. Start the instance
6. Check logs
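
For illustration, the edited backend entries in dse.ldif could look like the fragment below (only the changed attribute of each backend entry is shown). The backend names and the 8 GiB values are hypothetical; pick values whose sum exceeds the RAM of the test machine:

{{{
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
nsslapd-cachememsize: 8589934592

dn: cn=secondRoot,cn=ldbm database,cn=plugins,cn=config
nsslapd-cachememsize: 8589934592
}}}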

[01/Aug/2016:19:14:24.327625010 +0200] CRITICAL: It is highly likely your memory configuration of all backends will EXCEED your systems memory.
[01/Aug/2016:19:14:24.339033562 +0200] CRITICAL: In a future release this WILL prevent server start up. You MUST alter your configuration.
[01/Aug/2016:19:14:24.344020105 +0200] Total entry cache size: 282638208 B; dbcache size: 10000000 B; available memory size: 246763520 B;
[01/Aug/2016:19:14:24.348845854 +0200] This can be corrected by altering the values of nsslapd-dbcachesize, nsslapd-cachememsize and nsslapd-dncachememsize

The logs contain the appropriate messages. Marking as verified.

Comment 5 errata-xmlrpc 2016-11-03 20:42:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2594.html

