Created attachment 810163 [details]
Output of ldclt doing ldapsearch

Description of problem:
Running a simple ldclt load against the latest DS on RHEL 6.5 reveals a huge performance drop: ldclt's threads frequently hit error -1 (Can't contact LDAP server), and the number of performed searches is often 0 for 10-30 seconds. The performance drop happens within 20 seconds of starting ldclt.

Version-Release number of selected component (if applicable):
389-ds-base-1.2.11.15-28.el6.x86_64 on RHEL6.5-20130830.2 Server x86_64

How reproducible:
always

Steps to Reproduce:
1. Set up a DS instance with two users, uid=tuser1,ou=people,dc=example,dc=com and uid=tuser2 ..
2. Run ldclt from another machine like this:

ldclt -h <server IP> -p 389 -D "uid=tuser1,ou=people,dc=example,dc=com" -w Secret123 -b "ou=people,dc=example,dc=com" -f "uid=tuser2" -e bindonly -e bindeach -e esearch -n <number of threads> -v -q -I-1

Note the -I-1 option - it is necessary; without it, ldclt ends fairly quickly after starting. Also, I use the IP address of the server to rule out DNS latency. The output of ldclt is attached.

3. Obtain a stacktrace of the running DS with:

gdb -ex 'set confirm off' -ex 'set pagination off' -ex 'thread apply all bt full' -ex 'quit' /usr/sbin/ns-slapd `pidof ns-slapd` > stacktrace.`date +%s`.txt 2>&1

When ldclt reports 0 operations per thread, all DS instance threads are either in DS_Sleep or pt_TimedWait, apart from thread 1, which is polling on fds. The DS instance itself is fine and running; a manual ldapsearch from another machine succeeds.
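The per-thread counters that ldclt prints periodically (where the 0-operation intervals show up) can be mimicked with a small threaded harness. This is a minimal sketch, not part of ldclt; `run_harness` and `do_operation` are hypothetical names, and the real workload would be the connect/bind/search loop from step 2:

```python
import threading
import time

def run_harness(do_operation, n_threads=8, interval=1.0, duration=5.0):
    """Spawn n_threads workers that call do_operation() in a loop and
    report how many operations each thread completed per interval,
    mimicking ldclt's periodic per-thread counters."""
    counters = [0] * n_threads
    stop = threading.Event()

    def worker(i):
        while not stop.is_set():
            do_operation()
            counters[i] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()

    reports = []
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        time.sleep(interval)
        # Snapshot and reset the shared counters in place; a row of
        # all-zero counts corresponds to a stall like the one reported.
        snapshot, counters[:] = list(counters), [0] * n_threads
        reports.append(snapshot)

    stop.set()
    for t in threads:
        t.join()
    return reports
```

A stall such as the one described would appear as consecutive reports of zeros for every thread.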
Nothing in the error log; the access log looks like this:

[09/Oct/2013:15:42:17 -0400] conn=1 fd=64 slot=64 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:17 -0400] conn=2 fd=65 slot=65 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:17 -0400] conn=3 fd=66 slot=66 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:17 -0400] conn=4 fd=67 slot=67 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:17 -0400] conn=1 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:17 -0400] conn=4 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:17 -0400] conn=3 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:17 -0400] conn=2 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:17 -0400] conn=1 op=0 RESULT err=32 tag=97 nentries=0 etime=0
[09/Oct/2013:15:42:17 -0400] conn=3 op=0 RESULT err=32 tag=97 nentries=0 etime=0
[09/Oct/2013:15:42:17 -0400] conn=2 op=0 RESULT err=32 tag=97 nentries=0 etime=0
[09/Oct/2013:15:42:17 -0400] conn=4 op=0 RESULT err=32 tag=97 nentries=0 etime=0
[09/Oct/2013:15:42:17 -0400] conn=1 op=1 UNBIND
[09/Oct/2013:15:42:17 -0400] conn=1 op=1 fd=64 closed - U1
[09/Oct/2013:15:42:17 -0400] conn=3 op=1 UNBIND
[09/Oct/2013:15:42:17 -0400] conn=4 op=1 UNBIND
[09/Oct/2013:15:42:17 -0400] conn=3 op=1 fd=66 closed - U1
[09/Oct/2013:15:42:17 -0400] conn=2 op=1 UNBIND
[09/Oct/2013:15:42:17 -0400] conn=4 op=1 fd=67 closed - U1
[09/Oct/2013:15:42:17 -0400] conn=2 op=1 fd=65 closed - U1
[09/Oct/2013:15:42:18 -0400] conn=5 fd=64 slot=64 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:18 -0400] conn=6 fd=65 slot=65 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:18 -0400] conn=7 fd=66 slot=66 connection from 10.16.45.237 to 10.16.42.47
[09/Oct/2013:15:42:18 -0400] conn=6 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:18 -0400] conn=5 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:18 -0400] conn=7 op=0 BIND dn="uid=tuser1,ou=people,dc=example,dc=com" method=128 version=3
[09/Oct/2013:15:42:18 -0400] conn=8 fd=67 slot=67 connection from 10.16.45.237 to 10.16.42.47
...

This was reproduced on 2 different machines - both had 8 CPU cores. This bug was reproduced when trying to verify bug 966781, and since 966781 cannot be verified without the above ldclt working, this is a blocker bug.
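To spot the quiet intervals in output like the access-log excerpt above, the new-connection lines can be counted per second. A minimal sketch, assuming the log line format shown above; `connections_per_second` is a hypothetical helper, not part of any 389-ds tooling:

```python
import re
from collections import Counter

# Matches only the "new connection" access-log lines, e.g.
# [09/Oct/2013:15:42:17 -0400] conn=1 fd=64 slot=64 connection from ...
CONN_RE = re.compile(
    r'^\[(?P<ts>[^\]]+)\] conn=\d+ fd=\d+ slot=\d+ connection from')

def connections_per_second(lines):
    """Return a Counter mapping access-log timestamps (one-second
    resolution) to the number of new connections opened in that second."""
    counts = Counter()
    for line in lines:
        m = CONN_RE.match(line)
        if m:
            counts[m.group('ts')] += 1
    return counts
```

Feeding it the whole access log and looking for seconds with zero (or suddenly collapsed) connection counts should line up with the intervals where ldclt reports 0 operations.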
Created attachment 810164 [details] Generated stacktrace - DS when hung up
Created attachment 810179 [details] Stacktrace of ldclt during search
(In reply to Ján Rusnačko from comment #1)
> Created attachment 810164 [details]
> Generated stacktrace - DS when hung up

The server looks idle to me. None of the worker threads are doing any work.
(In reply to Ján Rusnačko from comment #0)
> Created attachment 810163 [details]
> Output of ldclt doing ldapsearch

ldclt version 4.23
/usr/bin/ldclt-bin -h ibm-x3650m4-02-vm-02.lab.eng.bos.redhat.com -p 389 -D uid=tuser1,ou=people,dc=example,dc=com -w Secret123 -b ou=people,dc=example,dc=com -f uid=tuser2 -e bindonly -e bindeach -e esearch -n 8 -v -q -I-1

I'm wondering if you need both "bindonly" and "esearch"? Could you try running the command line without "-e bindonly"?
(In reply to Noriko Hosoi from comment #6)
> (In reply to Ján Rusnačko from comment #0)
> > Created attachment 810163 [details]
> > Output of ldclt doing ldapsearch
>
> ldclt version 4.23
> /usr/bin/ldclt-bin -h ibm-x3650m4-02-vm-02.lab.eng.bos.redhat.com -p 389 -D
> uid=tuser1,ou=people,dc=example,dc=com -w Secret123 -b
> ou=people,dc=example,dc=com -f uid=tuser2 -e bindonly -e bindeach -e esearch
> -n 8 -v -q -I-1
>
> I'm wondering if you need both "bindonly" and "esearch"? Could you try
> running the command line without "-e bindonly"?

I see the same behavior without -e bindonly.
bindeach and bindonly are the options most critical to reproducing this bug. The bug is that the server becomes so busy processing new LDAP connections that it cannot process incoming LDAPS (or LDAPI) connections. A new connection is opened for each LDAP bind. So ideally you would have a client that does nothing but connect/bind/disconnect, which is what -e bindonly -e bindeach is supposed to do.
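For reference, one iteration of that connect/bind/disconnect loop can be sketched with nothing but the standard library. This is a simplified assumption: the BER bytes below encode an LDAPv3 anonymous simple bind, whereas ldclt binds with a real DN and password, and `bind_unbind_once` is a hypothetical helper, not part of ldclt:

```python
import socket

# LDAPMessage { messageID 1, bindRequest { version 3, name "", simple "" } }
BIND_REQUEST = bytes([
    0x30, 0x0C,        # SEQUENCE, 12 bytes (LDAPMessage)
    0x02, 0x01, 0x01,  # INTEGER 1 (messageID)
    0x60, 0x07,        # [APPLICATION 0] BindRequest, 7 bytes
    0x02, 0x01, 0x03,  # INTEGER 3 (LDAP protocol version)
    0x04, 0x00,        # OCTET STRING "" (bind DN, anonymous)
    0x80, 0x00,        # [0] simple authentication, empty password
])

# LDAPMessage { messageID 2, unbindRequest }
UNBIND_REQUEST = bytes([
    0x30, 0x05,        # SEQUENCE, 5 bytes (LDAPMessage)
    0x02, 0x01, 0x02,  # INTEGER 2 (messageID)
    0x42, 0x00,        # [APPLICATION 2] UnbindRequest (NULL body)
])

def bind_unbind_once(host, port=389, timeout=5.0):
    """Open a fresh TCP connection, bind anonymously, read the bind
    response, then unbind and close -- one ldclt-style iteration."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(BIND_REQUEST)
        response = sock.recv(1024)  # expect a BindResponse
        sock.sendall(UNBIND_REQUEST)
        return response
```

Calling `bind_unbind_once` in a tight loop from many threads would hammer the server with exactly the new-connection-per-bind pattern described above.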
I am still unable to reproduce this behavior. I see some slight temporary dips in performance, but nothing specific to LDAP versus LDAPS. I had a separate client machine issuing non-SSL traffic (tested with 1 thread up to 20 threads), then I introduced SSL traffic, and it still ran just fine. Not sure what else I can do on this one.