This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to receive updates on this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you are a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you are not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2231559 - CRIT - connection_table_move_connection_out_of_active_list - conn 0 is already OUT of the active list (refcnt is 0)
Summary: CRIT - connection_table_move_connection_out_of_active_list - conn 0 is alread...
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: 389-ds-base
Version: 9.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
: ---
Assignee: LDAP Maintainers
QA Contact: LDAP QA Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-11 21:33 UTC by Viktor Ashirov
Modified: 2023-08-30 14:54 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-30 14:54:49 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
pstack after the test was stopped (18.81 KB, text/plain)
2023-08-12 11:37 UTC, Viktor Ashirov
pstack of unresponsive server (25.78 KB, text/plain)
2023-08-12 11:38 UTC, Viktor Ashirov
pstack with turbo mode disabled (18.78 KB, text/plain)
2023-08-16 12:02 UTC, Viktor Ashirov
pstack with turbo mode removed (18.75 KB, text/plain)
2023-08-16 13:49 UTC, Viktor Ashirov


Links
System ID Private Priority Status Summary Last Updated
Github 389ds 389-ds-base issues 5909 0 None open Multi listener hang with 20k connections 2023-08-29 07:10:39 UTC
Red Hat Issue Tracker RHEL-1837 0 None Migrated None 2023-08-30 14:54:43 UTC
Red Hat Issue Tracker RHELPLAN-165691 0 None None None 2023-08-11 21:34:05 UTC

Description Viktor Ashirov 2023-08-11 21:33:04 UTC
Description of problem:
While running https://github.com/389ds/389-ds-base/blob/main/dirsrvtests/tests/perf/ltest.py with nsslapd-numlisteners=4, the test hangs with:
20000 connections are open. Starting the latency test using Namespace(test_duration=None, wait_time=10, uri='ldap://192.168.122.63:389', basedn='ou=people, dc=example, dc=com', binddn='cn=directory manager', bindpw='password', nbconn=20000, verbose=0) as parameters

After the test is stopped, CRIT messages are logged:
[11/Aug/2023:16:11:41.072307556 -0400] - CRIT - connection_table_move_connection_out_of_active_list - conn 0 is already OUT of the active list (refcnt is 0)
[11/Aug/2023:16:11:41.396395450 -0400] - CRIT - connection_table_move_connection_out_of_active_list - conn 0 is already OUT of the active list (refcnt is 0)
[11/Aug/2023:16:11:41.919739558 -0400] - CRIT - connection_table_move_connection_out_of_active_list - conn 0 is already OUT of the active list (refcnt is 0)
[11/Aug/2023:16:11:46.958568482 -0400] - CRIT - connection_table_move_connection_out_of_active_list - conn 0 is already OUT of the active list (refcnt is 0)

With 4 listeners and a smaller number of connections, the test succeeds.

Version-Release number of selected component (if applicable):
389-ds-base-2.3.4-3.el9.x86_64
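For reference, a reproduction setup consistent with the log line above might look like the following. The dsconf instance name and the exact ltest.py option spellings are assumptions; only the parameter values come from the Namespace(...) output in the log.

```shell
# Set 4 listener threads on the instance and restart it
# (instance name "slapd-localhost" is assumed)
dsconf slapd-localhost config replace nsslapd-numlisteners=4
systemctl restart dirsrv@localhost

# Run the perf test with the parameters shown in the test output
# (option names are assumptions based on the Namespace output)
python3 dirsrvtests/tests/perf/ltest.py \
    ldap://192.168.122.63:389 \
    --binddn "cn=directory manager" --bindpw password \
    --basedn "ou=people, dc=example, dc=com" \
    --nbconn 20000 --wait_time 10
```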

Comment 1 Viktor Ashirov 2023-08-12 11:37:11 UTC
Created attachment 1983126 [details]
pstack after the test was stopped

Comment 2 Viktor Ashirov 2023-08-12 11:38:32 UTC
Created attachment 1983127 [details]
pstack of unresponsive server

I left the server running after the test was stopped. After some time I found it unresponsive and consuming 100% CPU.

Comment 3 Jamie Chapman 2023-08-15 10:45:42 UTC
Hi Viktor, 

I would be very interested to know the size of the connection sub-tables on the system you used for this issue.

grep connection_table_new /var/log/dirsrv/[YOUR INSTANCE]/errors
e.g.
[15/Aug/2023:06:32:52.962909140 -0400] - INFO - connection_table_new - Number of connection sub-tables 4, each containing 15984 slots.

Thanks

Comment 4 Viktor Ashirov 2023-08-15 10:58:26 UTC
Hi Jamie,

I have the same number:
[11/Aug/2023:17:08:23.654787326 -0400] - INFO - connection_table_new - Number of connection sub-tables 4, each containing 15984 slots.

Thanks

Comment 5 thierry bordaz 2023-08-16 11:31:47 UTC
Looking at 'pstack of unresponsive server', it looks like the turbo-mode evaluation may contribute. For example, all workers may have entered turbo mode and be stuck in an infinite loop evaluating their rank. Surprisingly, all connections should be closed by then, so the active list should be empty.

Thread 27 (Thread 0x7f9b309ff640 (LWP 24356) "ns-slapd"):
#0  table_iterate_function (arg=<synthetic pointer>, conn=<optimized out>) at ldap/servers/slapd/connection.c:1410
#1  connection_table_iterate_active_connections (f=<optimized out>, arg=<synthetic pointer>, ct=0x7f9bb9c32870) at ldap/servers/slapd/conntable.c:343
#2  connection_find_our_rank (conn=<optimized out>, conn=0x7f9b3b90ed70, our_rank=<synthetic pointer>, connection_count=<synthetic pointer>) at ldap/servers/slapd/connection.c:1425
#3  connection_enter_leave_turbo (new_turbo_flag=<synthetic pointer>, current_turbo_flag=0, conn=0x7f9b3b90ed70) at ldap/servers/slapd/connection.c:1458
#4  connection_threadmain (arg=<optimized out>) at ldap/servers/slapd/connection.c:1661
#5  0x00007f9bbede9c34 in _pt_root () at target:/lib64/libnspr4.so
#6  0x00007f9bbe69f822 in start_thread () at target:/lib64/libc.so.6
#7  0x00007f9bbe63f450 in clone3 () at target:/lib64/libc.so.6

If this reproduces, samples of top -H may show whether only one worker is active or all of them (more likely).
Could you try to reproduce with turbo mode disabled?
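For reference, one way to toggle that setting, assuming the attribute is nsslapd-enable-turbo-mode on cn=config (attribute name worth double-checking against this 389-ds-base version):

```shell
# Disable turbo mode on a running instance; the bind credentials are
# the ones used elsewhere in this test. A restart may be required.
ldapmodify -H ldap://localhost:389 -D "cn=directory manager" -w password <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-enable-turbo-mode
nsslapd-enable-turbo-mode: off
EOF
```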

Comment 6 Viktor Ashirov 2023-08-16 12:02:16 UTC
Created attachment 1983591 [details]
pstack with turbo mode disabled

With turbo mode disabled the server still hangs.

Comment 7 Viktor Ashirov 2023-08-16 12:03:10 UTC
I'll make another build with https://github.com/389ds/389-ds-base/pull/5893 to see if it makes any difference.

Comment 8 Viktor Ashirov 2023-08-16 13:49:19 UTC
Created attachment 1983610 [details]
pstack with turbo mode removed

The test still hangs with turbo mode completely removed.

Comment 9 RHEL Program Management 2023-08-30 14:53:39 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 10 RHEL Program Management 2023-08-30 14:54:49 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues.

