Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
slapd crashed with the following log:
XXX kernel: slapd[18013]: segfault at 0 ip 00007f1ab9b35292 sp 00007f1a9ff13c20 error 4 in slapd[7f1ab9acb000+203000]
Looking at the backtrace from the core dump, the crash occurred in the syncprov overlay.
(gdb) bt
#0 test_filter (op=0x7f0ec09ffff0, e=0x7f0edc0aa6a8, f=0x0)
at ../../../servers/slapd/filterentry.c:69
#1 0x00007f0ed5fb8ba1 in syncprov_matchops (op=0x7f0ec0a01310, opc=0x7f0eb40009c8,
saveit=1) at ../../../../servers/slapd/overlays/syncprov.c:1312
#2 0x00007f0ed5fb8f9c in syncprov_op_mod (op=0x7f0ec0a01310,
rs=<value optimized out>) at ../../../../servers/slapd/overlays/syncprov.c:2078
#3 0x00007f0edaa8488a in overlay_op_walk (op=0x7f0ec0a01310, rs=0x7f0ec0a006b0,
which=op_modify, oi=0x7f0edbecaf40, on=0x7f0edbecb610)
at ../../../servers/slapd/backover.c:659
Version-Release number of selected component (if applicable):
- RHEL6.5
- kernel-2.6.32-279.el6.x86_64
- openldap-*-2.4.23-34.el6_5.1.x86_64 (latest)
How reproducible:
It occurs frequently
Steps to Reproduce:
The only known requirement is to enable the syncprov overlay.
No more specific reproducer is known.
Actual results:
slapd crashes with a segfault during syncrepl processing:
Jun 12 16:03:31 XXX kernel: slapd[18013]: segfault at 0 ip 00007f1ab9b35292 sp 00007f1a9ff13c20 error 4 in slapd[7f1ab9acb000+203000]
Jun 12 16:22:59 XXX kernel: slapd[18101]: segfault at 0 ip 00007fb73e9dc292 sp 00007fb724dbac20 error 4 in slapd[7fb73e972000+203000]
Jun 12 17:20:37 XXX kernel: slapd[19223]: segfault at 0 ip 00007f9a858f3292 sp 00007f9a6bcd1c20 error 4 in slapd[7f9a85889000+203000]
Jun 13 12:59:00 XXX kernel: slapd[9999]: segfault at 0 ip 00007f9785de6922 sp 00007f97663a4c20 error 4 in slapd[7f9785d7c000+203000]
# gdb -c slapd-core
Core was generated by `/usr/sbin/slapd -h ldaps:/// -u ldap'.
Program terminated with signal 11, Segmentation fault.
#0 test_filter (op=0x7f0ec09ffff0, e=0x7f0edc0aa6a8, f=0x0)
at ../../../servers/slapd/filterentry.c:69
Expected results:
No segfault occurs while slapd processes syncrepl with the syncprov overlay enabled.
Additional info:
This appears to be the same problem reported on the OpenLDAP mailing list (ITS#6862):
http://www.openldap.org/lists/openldap-bugs/200903/msg00046.html
https://www.mail-archive.com/openldap-its@openldap.org/msg06455.html
http://upstream-tracker.org/changelogs/openldap/2.4.26/changelog.html
According to that reporter, the problem no longer reproduced after switching to the latest syncprov.c as of June 16, 2011.
(gdb) bt
#0 test_filter (op=0x7f0ec09ffff0, e=0x7f0edc0aa6a8, f=0x0)
at ../../../servers/slapd/filterentry.c:69
#1 0x00007f0ed5fb8ba1 in syncprov_matchops (op=0x7f0ec0a01310, opc=0x7f0eb40009c8,
saveit=1) at ../../../../servers/slapd/overlays/syncprov.c:1312
#2 0x00007f0ed5fb8f9c in syncprov_op_mod (op=0x7f0ec0a01310,
rs=<value optimized out>) at ../../../../servers/slapd/overlays/syncprov.c:2078
#3 0x00007f0edaa8488a in overlay_op_walk (op=0x7f0ec0a01310, rs=0x7f0ec0a006b0,
which=op_modify, oi=0x7f0edbecaf40, on=0x7f0edbecb610)
at ../../../servers/slapd/backover.c:659
#4 0x00007f0edaa853cb in over_op_func (op=0x7f0ec0a01310, rs=<value optimized out>,
which=<value optimized out>) at ../../../servers/slapd/backover.c:721
#5 0x00007f0edaa78a0b in syncrepl_entry (si=<value optimized out>,
op=0x7f0ec0a01310, entry=<value optimized out>, modlist=0x7f0ec0a01328,
syncstate=<value optimized out>, syncUUID=<value optimized out>, syncCSN=0x0)
at ../../../servers/slapd/syncrepl.c:2603
#6 0x00007f0edaa7dde4 in do_syncrep2 (op=0x7f0ec0a01310, si=0x7f0edbecdb00)
at ../../../servers/slapd/syncrepl.c:941
#7 0x00007f0edaa80b4e in do_syncrepl (ctx=<value optimized out>, arg=0x7f0edbec8fe0)
at ../../../servers/slapd/syncrepl.c:1434
#8 0x00007f0edaa19661 in connection_read_thread (ctx=0x7f0ec0a01b70,
argv=<value optimized out>) at ../../../servers/slapd/connection.c:1247
#9 0x00007f0edab18dd8 in ldap_int_thread_pool_wrapper (xpool=0x7f0edbe050a0)
at ../../../libraries/libldap_r/tpool.c:685
#10 0x00007f0ed899f851 in start_thread (arg=0x7f0ec0a02700) at pthread_create.c:301
#11 0x00007f0ed84e267d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
(gdb) print *ss
$4 = {s_next = 0x0, s_base = {bv_len = 27,
bv_val = 0x7f0eb4383300 "dc=axa-direct-jp,dc=intraxa"}, s_eid = 1,
s_op = 0x7f0eb43e2a60, s_rid = 1, s_sid = 2, s_filterstr = {bv_len = 15,
bv_val = 0x7f0eb4000ba0 "(objectClass=*)"}, s_flags = 1, s_inuse = 1,
s_res = 0x0, s_restail = 0x0, s_mutex = {__data = {__lock = 0, __count = 0,
__owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0,
__next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}}
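The struct dump above shows the session's filter string ("(objectClass=*)") still intact, yet the parsed filter pointer handed to test_filter() was NULL. That is consistent with a race in which one thread tears down a persistent-search operation while another thread in syncprov_matchops() is still evaluating its filter. Below is a generic sketch of the safe access pattern, with entirely hypothetical names and a placeholder match function; it is not the actual upstream fix:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical, simplified session: a shared filter pointer that
 * another thread may clear during operation teardown. */
typedef struct {
    pthread_mutex_t lock;
    const char *filter;           /* NULL once the search op is freed */
} session;

/* The crash suggests the unsafe pattern: dereferencing s->filter
 * without holding the lock, racing with teardown. The safe pattern
 * takes the lock, re-checks for NULL, and only then uses the filter. */
static int match_session(session *s, const char *entry) {
    int rc = -1;                          /* "undefined" result */
    pthread_mutex_lock(&s->lock);
    if (s->filter != NULL)                /* re-check under the lock */
        rc = (entry[0] == s->filter[0]);  /* placeholder comparison */
    pthread_mutex_unlock(&s->lock);
    return rc;
}
```

Under this pattern a concurrently cleared filter yields an "undefined" result instead of a NULL dereference, which is the behavior the ITS#6862-era syncprov.c changes appear to restore.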