Bug 1614820 - 389-ds-base: Crash in vslapd_log_emergency_error [rhel-7.6]
Summary: 389-ds-base: Crash in vslapd_log_emergency_error [rhel-7.6]
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.7-Alt
Hardware: All
OS: Linux
Target Milestone: rc
Target Release: ---
Assignee: mreynolds
QA Contact: RHDS QE
Docs Contact: Marc Muehlfeld
Duplicates: 1623721 (view as bug list)
Depends On:
Blocks: CVE-2018-14624 1623247
TreeView+ depends on / blocked
Reported: 2018-08-10 13:40 UTC by German Parente
Modified: 2022-03-13 15:22 UTC (History)
CC: 10 users

Fixed In Version: 389-ds-base-
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1623247 (view as bug list)
Last Closed: 2018-10-30 10:15:00 UTC
Target Upstream Version:

Attachments (Terms of Use)
core_backtrace of the coredump (1.25 MB, text/plain)
2018-08-10 13:48 UTC, German Parente
no flags Details
another stacktrace. (317.43 KB, text/plain)
2018-08-10 13:48 UTC, German Parente
no flags Details

Links:
System: Red Hat Product Errata
ID: RHSA-2018:3127
Private: 0
Priority: None
Status: None
Summary: None
Last Updated: 2018-10-30 10:16:12 UTC

Description German Parente 2018-08-10 13:40:54 UTC
Description of problem:

We can see a crash in Directory Server with this stack trace:

(gdb) bt
#0  0x00007f748a9c1373 in PR_Write (fd=0x55683c42e4e0, buf=0x7f742b404bd0, amount=107) at ../../../nspr/pr/src/io/priometh.c:114
#1  0x000055683a6458a5 in slapi_write_buffer (fd=<optimized out>, buf=<optimized out>, amount=<optimized out>) at ldap/servers/slapd/fileio.c:48
#2  0x00007f748ca6c66b in vslapd_log_emergency_error (fp=0x55683c42e4e0, msg=0x7f748cade1a8 "Insufficent buffer capacity to fit timestamp and message!", locked=0) at ldap/servers/slapd/log.c:2260
#3  0x00007f748ca73779 in vslapd_log_access (fmt=fmt@entry=0x7f748cadf994 "conn=%lu op=%d MOD dn=\"%s\"%s\n", ap=ap@entry=0x7f742b4064a0) at ldap/servers/slapd/log.c:2535
#4  0x00007f748ca74971 in slapi_log_access (level=level@entry=256, fmt=fmt@entry=0x7f748cadf994 "conn=%lu op=%d MOD dn=\"%s\"%s\n") at ldap/servers/slapd/log.c:2568
#5  0x00007f748ca7c15f in op_shared_modify (pb=pb@entry=0x55683f960ea0, pw_change=pw_change@entry=0, old_pw=0x0) at ldap/servers/slapd/modify.c:668
#6  0x00007f748ca7e05b in do_modify (pb=pb@entry=0x55683f960ea0) at ldap/servers/slapd/modify.c:391
#7  0x000055683a63d2ee in connection_dispatch_operation (pb=0x55683f960ea0, op=0x55685ff5aa80, conn=0x55683e872780) at ldap/servers/slapd/connection.c:625
#8  0x000055683a63d2ee in connection_threadmain () at ldap/servers/slapd/connection.c:1785
#9  0x00007f748a9db9bb in _pt_root (arg=0x55683f95ed00) at ../../../nspr/pr/src/pthreads/ptthread.c:216
#10 0x00007f748a37be25 in start_thread (arg=0x7f742b407700) at pthread_create.c:396
#11 0x00007f7489c5d34d in lsetxattr () at ../sysdeps/unix/syscall-template.S:81
#12 0x0000000000000000 in None ()

Version-Release number of selected component (if applicable): 389-ds-base-

How reproducible: not easily.

Comment 3 German Parente 2018-08-10 13:48:22 UTC
Created attachment 1475048 [details]
core_backtrace of the coredump

Comment 4 German Parente 2018-08-10 13:48:53 UTC
Created attachment 1475049 [details]
another stacktrace.

Comment 6 mreynolds 2018-08-17 20:03:12 UTC
I cannot reproduce the issue. I can get the error to be logged by adding an entry with a really large DN:

[17/Aug/2018:15:56:33.905658222 -0400]  - EMERG - Insufficent buffer capacity to fit timestamp and message!

But there is no crash. The core dump shows that the error log FD is corrupted and that writing to it fails, but I don't know how that happened. I think this is a case where we need a valgrind or ASAN build to make any progress on this.

Comment 15 Viktor Ashirov 2018-08-30 13:00:56 UTC
Build tested: 389-ds-base-

Reproducer from https://bugzilla.redhat.com/show_bug.cgi?id=1614820#c7 no longer crashes the server, and the error messages are formatted correctly:

[30/Aug/2018:12:59:44.570829721 +0000]  - EMERG - Insufficent buffer capacity to fit timestamp and message!
[30/Aug/2018:12:59:44.593972036 +0000]  - EMERG - Insufficent buffer capacity to fit timestamp and message!
[30/Aug/2018:12:59:44.610915031 +0000]  - EMERG - Insufficent buffer capacity to fit timestamp and message!

Marking as VERIFIED.

Comment 19 Tomas Hoger 2018-09-21 20:34:01 UTC
*** Bug 1623721 has been marked as a duplicate of this bug. ***

Comment 22 errata-xmlrpc 2018-10-30 10:15:00 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

