Bug 1609057 - IdM's dirsrv service "hung"
Summary: IdM's dirsrv service "hung"
Keywords:
Status: CLOSED DUPLICATE of bug 1435663
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: slapi-nis
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Alexander Bokovoy
QA Contact: ipa-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-26 20:33 UTC by Marc Sauton
Modified: 2021-12-10 16:54 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-18 16:05:22 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
  System ID:    Red Hat Issue Tracker FREEIPA-7558
  Private:      0
  Priority:     None
  Status:       None
  Summary:      None
  Last Updated: 2021-12-10 16:54:17 UTC

Description Marc Sauton 2018-07-26 20:33:57 UTC
Description of problem:

From SF case number 02148976 (RH-IT):
"
The dirsrv service of our IdM servers were all hanging at the same time. I assume it was replication of data. I had to kill -9 the dirsrv service, after which things began to function as normal
"

->
the 389-ds "hang" situation happened on all 4 IPA LDAP servers during a "mass" import of 1500+ groups
4 IPA replica:
nsds50ruv: {replica 8 ldap://idm03.core.dev.int.phx1.redhat.com:389} 5a722845000000080000 5b305961003900080000
nsds50ruv: {replica 4 ldap://idm-admin.core.dev.int.phx1.redhat.com:389} 5a72232a000100040000 5b305962000100040000
nsds50ruv: {replica 3 ldap://idm01.core.dev.int.phx1.redhat.com:389} 5a722330000000030000 5b305961003600030000
nsds50ruv: {replica 11 ldap://idm02.core.dev.int.phx1.redhat.com:389} 5a72394a0000000b0000 5b305961003b000b0000
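
For context, the RUV above can be re-read on each replica with an ldapsearch against the replication tombstone entry; the suffix and bind credentials below are assumptions based on the host names:

ldapsearch -o ldif-wrap=no -x -D "cn=Directory Manager" -W \
  -H ldap://idm01.core.dev.int.phx1.redhat.com \
  -b "dc=core,dc=dev,dc=int,dc=phx1,dc=redhat,dc=com" \
  "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))" nsds50ruv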


A pstack shows several SASL binds for incremental replication updates waiting for a result.


Attaching a stack trace for review; short version:

Thread 90
pthread_cond_wait pthread_cond_wait [IDLE worker_thread_func start_thread clone

...

Thread 59
pthread_cond_wait pthread_cond_wait [IDLE worker_thread_func start_thread clone

Thread 58
epoll_wait epoll_wait epoll_dispatch event_base_loop ns_event_fw_loop event_loop_thread_func start_thread clone

Thread 49
poll poll poll ldap_int_select wait4msg ldap_result ldap_sasl_interactive_bind_s slapd_ldap_sasl_interactive_bind slapi_ldap_bind bind_and_check_pwp conn_connect acquire_replica repl5_inc_run prot_thread_main _pt_root start_thread clone

Thread 48
poll poll poll ldap_int_select wait4msg ldap_result ldap_sasl_interactive_bind_s slapd_ldap_sasl_interactive_bind slapi_ldap_bind bind_and_check_pwp conn_connect acquire_replica repl5_inc_run prot_thread_main _pt_root start_thread clone

Thread 45
poll poll poll ldap_int_select wait4msg ldap_result ldap_sasl_interactive_bind_s slapd_ldap_sasl_interactive_bind slapi_ldap_bind bind_and_check_pwp conn_connect acquire_replica repl5_inc_run prot_thread_main _pt_root start_thread clone

Thread 44
poll poll poll ldap_int_select wait4msg ldap_result ldap_sasl_interactive_bind_s slapd_ldap_sasl_interactive_bind slapi_ldap_bind bind_and_check_pwp conn_connect acquire_replica repl5_inc_run prot_thread_main _pt_root start_thread clone

Thread 1
pthread_join pthread_join ns_thrpool_wait slapd_daemon main
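
For completeness, a per-thread dump like the one above is typically captured from the running ns-slapd process with something like the following (pstack is shipped in the gdb package on RHEL 7; the output path is illustrative and a single dirsrv instance is assumed):

pstack $(pidof ns-slapd) > /var/tmp/ns-slapd.$(hostname -s).$(date +%Y%m%d-%H%M%S).pstack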



Version-Release number of selected component (if applicable):

389-ds-base-1.3.7.5-21.el7_5.x86_64
ipa-server-4.5.4-10.el7_5.1.x86_64
redhat-release-server-7.5-8.el7.x86_64


How reproducible:
It has been a one-time event so far.

Steps to Reproduce:
1. N/A
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Marc Sauton 2018-07-26 20:37:16 UTC
It was suggested to collect pstacks from all nodes, and to try disabling nunc-stans as a workaround.
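
A minimal sketch of that workaround on 389-ds-base 1.3.x, assuming the ldapi socket path below matches the local instance name (check /var/run/slapd-*.socket), would be:

# turn off the nunc-stans connection framework, then restart the instance
ldapmodify -H ldapi://%2fvar%2frun%2fslapd-CORE-DEV-INT-PHX1-REDHAT-COM.socket -Y EXTERNAL <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-enable-nunc-stans
nsslapd-enable-nunc-stans: off
EOF
systemctl restart dirsrv@CORE-DEV-INT-PHX1-REDHAT-COM.service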

Comment 33 German Parente 2018-10-18 16:05:22 UTC

*** This bug has been marked as a duplicate of bug 1435663 ***

