Bug 1478619 - crash in GSSAPI bind: assert fail
Summary: crash in GSSAPI bind: assert fail
Keywords:
Status: CLOSED DUPLICATE of bug 1466441
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: krb5
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Robbie Harwood
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-05 07:28 UTC by German Parente
Modified: 2020-12-14 09:22 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-22 14:46:51 UTC
Target Upstream Version:
Embargoed:



Description German Parente 2017-08-05 07:28:57 UTC
Description of problem:

During a GSSAPI bind, an assert fails and the ns-slapd server crashes.

As a side note, both times we have seen this crash it occurred during a replication bind.


Version-Release number of selected component (if applicable):

389-ds-base-1.3.6.1-16.el7
krb5-libs-1.15.1-8.el7.x86_64  

This is the backtrace:

#0  0x00007f022a14e1f7 in __GI_raise (sig=sig@entry=6)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007f022a14f8e8 in __GI_abort () at abort.c:90
#2  0x00007f022a147266 in __assert_fail_base (
    fmt=0x7f022a299e68 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", 
    assertion=assertion@entry=0x7f022c6b9e35 "r == 0", 
    file=file@entry=0x7f022c6b9e18 "../../../include/k5-thread.h", 
    line=line@entry=366, 
    function=function@entry=0x7f022c6b9eb0 <__PRETTY_FUNCTION__.5482> "k5_mutex_lock") at assert.c:92
#3  0x00007f022a147312 in __GI___assert_fail (
    assertion=assertion@entry=0x7f022c6b9e35 "r == 0", 
    file=file@entry=0x7f022c6b9e18 "../../../include/k5-thread.h", 
    line=line@entry=366, 
    function=function@entry=0x7f022c6b9eb0 <__PRETTY_FUNCTION__.5482> "k5_mutex_lock") at assert.c:101
#4  0x00007f022c64ea43 in k5_mutex_lock (m=0x55e9200f3f08)
    at ../../../include/k5-thread.h:366
#5  0x00007f022c658dc0 in k5_mutex_lock (m=0x55e9200f3f08) at ccbase.c:468
#6  k5_cc_mutex_lock (context=context@entry=0x55e912e12b20, 
    m=m@entry=0x55e9200f3f08) at ccbase.c:460
#7  0x00007f022c66191f in krb5_mcc_store (ctx=0x55e912e12b20, 
    id=<optimized out>, creds=<optimized out>) at cc_memory.c:623
#8  0x00007f022c664780 in krb5_cc_store_cred (
    context=context@entry=0x55e912e12b20, cache=0x55e9242006a0, 
    creds=0x55e92101f280) at ccfns.c:88
#9  0x00007f022c675e93 in complete (context=context@entry=0x55e912e12b20, 
    ctx=ctx@entry=0x55e91302f9c0) at get_creds.c:443
#10 0x00007f022c676e51 in step_non_referral (ctx=<optimized out>, 
    context=<optimized out>) at get_creds.c:466
#11 krb5_tkt_creds_step (context=context@entry=0x55e912e12b20, 
    ctx=ctx@entry=0x55e91302f9c0, in=in@entry=0x7f01f341be30, 
    out=out@entry=0x7f01f341be20, realm=realm@entry=0x7f01f341be40, 
    flags=flags@entry=0x7f01f341be18) at get_creds.c:1254
#12 0x00007f022c6778e7 in krb5_tkt_creds_get (
    context=context@entry=0x55e912e12b20, ctx=0x55e91302f9c0)
    at get_creds.c:1190
#13 0x00007f022c677a0c in krb5_get_credentials (
    context=context@entry=0x55e912e12b20, options=options@entry=0, 
    ccache=<optimized out>, in_creds=in_creds@entry=0x7f01f341bfa0, 
    out_creds=out_creds@entry=0x7f01f341bf98) at get_creds.c:1279
#14 0x00007f0223b2ff2c in get_credentials (server=0x55e923db31d0, 
    out_creds=<synthetic pointer>, endtime=<optimized out>, now=1501879667, 
    cred=0x55e91f810b40, context=0x55e912e12b20) at init_sec_context.c:170
#15 kg_new_connection (exts=0x7f01f341c230, context=0x55e912e12b20, 
    time_rec=0x0, ret_flags=0x7f01f341c3bc, output_token=0x7f01f341c3d0, 
    actual_mech_type=0x0, input_token=<optimized out>, 
    input_chan_bindings=0x0, time_req=<optimized out>, 
    req_flags=<optimized out>, mech_type=0x7f0223d4e5c0 <krb5_gss_oid_array>, 
    target_name=<optimized out>, context_handle=0x55e921c3ba90, 
    cred=0x55e91f810b40, minor_status=0x7f01f341c3b4) at init_sec_context.c:588
#16 krb5_gss_init_sec_context_ext (
    minor_status=minor_status@entry=0x7f01f341c3b4, 
    claimant_cred_handle=0x55e91f810b40, claimant_cred_handle@entry=0x0, 
    context_handle=context_handle@entry=0x55e921c3ba90, 
    target_name=<optimized out>, 
    mech_type=0x7f0223d4e5c0 <krb5_gss_oid_array>, 
    req_flags=req_flags@entry=58, time_req=time_req@entry=0, 
    input_chan_bindings=input_chan_bindings@entry=0x0, 
    input_token=input_token@entry=0x0, 
    actual_mech_type=actual_mech_type@entry=0x0, 
    output_token=output_token@entry=0x7f01f341c3d0, 
    ret_flags=ret_flags@entry=0x7f01f341c3bc, time_rec=time_rec@entry=0x0, 
    exts=exts@entry=0x7f01f341c230) at init_sec_context.c:988
#17 0x00007f0223b308b7 in krb5_gss_init_sec_context (
    minor_status=minor_status@entry=0x7f01f341c3b4, 
    claimant_cred_handle=claimant_cred_handle@entry=0x0, 
    context_handle=context_handle@entry=0x55e921c3ba90, 
    target_name=<optimized out>, mech_type=<optimized out>, 
    req_flags=req_flags@entry=58, time_req=time_req@entry=0, 
    input_chan_bindings=input_chan_bindings@entry=0x0, 
    input_token=input_token@entry=0x0, 
    actual_mech_type=actual_mech_type@entry=0x0, 
    output_token=output_token@entry=0x7f01f341c3d0, 
    ret_flags=ret_flags@entry=0x7f01f341c3bc, time_rec=time_rec@entry=0x0)
    at init_sec_context.c:1090
#18 0x00007f0223b19dfb in gss_init_sec_context (
    minor_status=minor_status@entry=0x7f01f341c3b4, 
    claimant_cred_handle=claimant_cred_handle@entry=0x0, 
    context_handle=context_handle@entry=0x55e91f5fc1d8, 
    target_name=0x55e9210f18f0, req_mech_type=<optimized out>, 
    req_flags=req_flags@entry=58, time_req=time_req@entry=0, 
    input_chan_bindings=input_chan_bindings@entry=0x0, 
    input_token=input_token@entry=0x0, 
    actual_mech_type=actual_mech_type@entry=0x0, 
    output_token=output_token@entry=0x7f01f341c3d0, 
    ret_flags=ret_flags@entry=0x7f01f341c3bc, time_rec=time_rec@entry=0x0)
    at g_init_sec_context.c:210
#19 0x00007f0223d53ef4 in gssapi_client_mech_step (
    conn_context=0x55e91f5fc1c0, params=0x55e91f6d43c0, 
    serverin=<optimized out>, serverinlen=<optimized out>, 
    prompt_need=<optimized out>, clientout=0x7f01f341c5a8, 
    clientoutlen=0x7f01f341c580, oparams=0x55e91f920c70) at gssapi.c:1693
#20 0x00007f022ad537f5 in sasl_client_step (conn=conn@entry=0x55e91f920400, 
    serverin=serverin@entry=0x0, serverinlen=serverinlen@entry=0, 
    prompt_need=prompt_need@entry=0x7f01f341c590, 
    clientout=clientout@entry=0x7f01f341c5a8, 
    clientoutlen=clientoutlen@entry=0x7f01f341c580) at client.c:1000
#21 0x00007f022ad53b76 in sasl_client_start (conn=conn@entry=0x55e91f920400, 
    mechlist=mechlist@entry=0x7f021f04d1b5 "GSSAPI", 
    prompt_need=prompt_need@entry=0x7f01f341c590, 
    clientout=clientout@entry=0x7f01f341c5a8, 
    clientoutlen=clientoutlen@entry=0x7f01f341c580, 
    mech=mech@entry=0x7f01f341c588) at client.c:946
#22 0x00007f022bd4f3f3 in ldap_int_sasl_bind (ld=ld@entry=0x55e9284b3300, 
    dn=dn@entry=0x55e920828358 "", mechs=mechs@entry=0x7f021f04d1b5 "GSSAPI", 
    sctrls=sctrls@entry=0x0, cctrls=cctrls@entry=0x0, flags=flags@entry=2, 
    interact=interact@entry=0x7f022cbf8400 <ldap_sasl_interact_cb>, 
    defaults=defaults@entry=0x55e9210f2a60, result=result@entry=0x0, 
    rmech=0x7f01f341c7b8, msgid=0x7f01f341c7b4) at cyrus.c:523
#23 0x00007f022bd52bc2 in ldap_sasl_interactive_bind (
    ld=ld@entry=0x55e9284b3300, dn=dn@entry=0x55e920828358 "", 
    mechs=mechs@entry=0x7f021f04d1b5 "GSSAPI", 
    serverControls=serverControls@entry=0x0, 
    clientControls=clientControls@entry=0x0, flags=flags@entry=2, 
    interact=interact@entry=0x7f022cbf8400 <ldap_sasl_interact_cb>, 
    defaults=defaults@entry=0x55e9210f2a60, result=0x0, 
    rmech=rmech@entry=0x7f01f341c7b8, msgid=msgid@entry=0x7f01f341c7b4)
    at sasl.c:487
#24 0x00007f022bd52ded in ldap_sasl_interactive_bind_s (
    ld=ld@entry=0x55e9284b3300, dn=dn@entry=0x55e920828358 "", 
    mechs=mechs@entry=0x7f021f04d1b5 "GSSAPI", 
    serverControls=serverControls@entry=0x0, 
    clientControls=clientControls@entry=0x0, flags=flags@entry=2, 
    interact=interact@entry=0x7f022cbf8400 <ldap_sasl_interact_cb>, 
    defaults=defaults@entry=0x55e9210f2a60) at sasl.c:521
#25 0x00007f022cbf9ea6 in slapd_ldap_sasl_interactive_bind (
    ld=ld@entry=0x55e9284b3300, bindid=bindid@entry=0x55e920828358 "", 
    creds=0x55e91f7e5350 "", mech=<optimized out>, 
    mech@entry=0x7f021f04d1b5 "GSSAPI", serverctrls=serverctrls@entry=0x0, 
    returnedctrls=returnedctrls@entry=0x7f01f341ca08, msgidp=msgidp@entry=0x0)
    at ldap/servers/slapd/ldaputil.c:1628
#26 0x00007f022cbfb039 in slapi_ldap_bind (ld=ld@entry=0x55e9284b3300, 
    bindid=bindid@entry=0x55e920828358 "", creds=<optimized out>, 
    mech=<optimized out>, mech@entry=0x7f021f04d1b5 "GSSAPI", 
    serverctrls=serverctrls@entry=0x0, 
    returnedctrls=returnedctrls@entry=0x7f01f341ca08, 
    timeout=timeout@entry=0x55e91f5f2e98, msgidp=msgidp@entry=0x0)
    at ldap/servers/slapd/ldaputil.c:1311
#27 0x00007f021f009c00 in bind_and_check_pwp (password=<optimized out>, 
    binddn=0x55e920828358 "", conn=0x55e91f5f2e10)
    at ldap/servers/plugins/replication/repl5_connection.c:2064
#28 conn_connect (conn=conn@entry=0x55e91f5f2e10)
    at ldap/servers/plugins/replication/repl5_connection.c:1314
#29 0x00007f021f013a09 in acquire_replica (prp=prp@entry=0x55e91f819180, 
    prot_oid=<optimized out>, 
    prot_oid@entry=0x7f021f04b0cd "2.16.840.1.113730.3.6.1", 
    ruv=ruv@entry=0x7f01f341cc08)
    at ldap/servers/plugins/replication/repl5_protocol_util.c:140
#30 0x00007f021f00e019 in repl5_inc_run (prp=0x55e91f819180)
    at ldap/servers/plugins/replication/repl5_inc_protocol.c:915
#31 0x00007f021f01303c in prot_thread_main (arg=0x55e91f5e68f0)
    at ldap/servers/plugins/replication/repl5_protocol.c:267
#32 0x00007f022af8f9bb in _pt_root (arg=0x55e91f6205a0)
    at ../../../nspr/pr/src/pthreads/ptthread.c:216
#33 0x00007f022a92fe25 in start_thread (arg=0x7f01f341d700)
    at pthread_create.c:308
#34 0x00007f022a21134d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113


It seems this assert is failing:

(gdb) frame 4
#4  0x00007f022c64ea43 in k5_mutex_lock (m=0x55e9200f3f08)
    at ../../../include/k5-thread.h:366
366	    assert(r == 0);
(gdb) list
361	    (k5_os_mutex_destroy(M))
362	
363	static inline void k5_mutex_lock(k5_mutex_t *m)
364	{
365	    int r = k5_os_mutex_lock(m);
366	    assert(r == 0);
367	}
368	
369	static inline void k5_mutex_unlock(k5_mutex_t *m)

Comment 12 German Parente 2017-09-22 12:52:11 UTC
Hi,

re-opening this bug because I have seen it several times in customer sessions.

The replicas were in sync. But the server was crashing intermittently in this ASSERT.

I think we should try some investigation on it.

thanks,

German.

Comment 13 Robbie Harwood 2017-09-22 14:46:51 UTC
(In reply to German Parente from comment #12)
> Hi,
> 
> re-opening this bug because I have seen it several times at customer session.

Please use the other, existing bug.

> The replicas were in sync. But the server was crashing intermittently in
> this ASSERT.
>
> I think we should try some investigation on it.

Do you have a traceback you can give me?  Preferably on 7.4?  (And a measure of how far the clocks are apart?)
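(For context on the clock question: Kerberos authentication is time-sensitive, and MIT krb5 rejects tickets when client and server clocks differ by more than the configured skew, 300 seconds by default. A sketch of the relevant krb5.conf setting, shown here only to illustrate why clock offset matters for this bug:)

```ini
[libdefaults]
    # Maximum tolerated clock difference, in seconds (default 300).
    clockskew = 300
```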

*** This bug has been marked as a duplicate of bug 1466441 ***

