Bug 1392444
| Field | Value |
|---|---|
| Summary | sssd_be keeps crashing |
| Product | Red Hat Enterprise Linux 7 |
| Reporter | Ming Davies <minyu> |
| Component | sssd |
| Assignee | SSSD Maintainers <sssd-maint> |
| Status | CLOSED ERRATA |
| QA Contact | Madhuri <mupadhye> |
| Severity | unspecified |
| Priority | high |
| Version | 7.3 |
| CC | cww, fjayalat, gparente, grajaiya, jhrozek, jstephen, lslebodn, minyu, mkolaja, mkosek, mzidek, pbrezina, sbose, sgoveas, sssd-maint, striker, toby, tscherf |
| Target Milestone | rc |
| Keywords | ZStream |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | sssd-1.15.0-1.el7 |
| | 1396485 (view as bug list) |
| Last Closed | 2017-08-01 09:00:03 UTC |
| Type | Bug |
| Bug Blocks | 1396485, 1396494, 1399979 |
Could you also attach a log file with debug_level = 9 in the domain section?

Upstream ticket: https://fedorahosted.org/sssd/ticket/3234

The crash is triggered by 'auth_provider = krb5' (and 'chpass_provider = krb5'). Setting it to 'ad', or simply removing it, should fix the problem. Is there a reason for using 'krb5' rather than 'ad' here? I'll try to prepare a fix.

Created attachment 1218144 [details]
sssd debug log
Created attachment 1218511 [details]
sssd debug after replacing auth_provider=ad
Created attachment 1218513 [details]
sssd debug after removing ad_provider
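The trigger and workaround described above can be sketched as a minimal sssd.conf domain section. This is an illustrative sketch only; the domain name, server, and overall layout are placeholders, not taken from the customer's actual configuration:

```ini
[domain/example.com]
# Placeholder domain name; adjust to your environment.
id_provider = ad

# These two settings trigger the sssd_be crash on sssd-1.14.0-43.el7:
#auth_provider = krb5
#chpass_provider = krb5

# Workaround: use the ad provider, or remove auth_provider entirely
# (when unset, auth_provider defaults to the value of id_provider).
auth_provider = ad

# Requested for the attached debug logs:
debug_level = 9
```

With 'auth_provider = ad' (or no auth_provider line at all) the AD provider initializes its own context fully, which avoids the code path that crashes in ad_subdom_reinit().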
*** Bug 1393133 has been marked as a duplicate of this bug. ***

*** Bug 1404560 has been marked as a duplicate of this bug. ***

Tested with
sssd-1.15.2-29.el7.x86_64
Steps followed during verification:
1) System was updated using # yum update
2) Configured sssd on client.
3) Set id_provider=ad and auth_provider=krb5 in sssd.conf.
4) Started sssd service.
5) Checked user lookup.
From yum.log:
May 15 07:55:20 Updated: python-sssdconfig-1.15.2-29.el7.noarch
May 15 07:56:36 Updated: sssd-client-1.15.2-29.el7.x86_64
May 15 07:57:42 Updated: sssd-common-1.15.2-29.el7.x86_64
May 15 07:57:43 Updated: sssd-krb5-common-1.15.2-29.el7.x86_64
May 15 07:57:45 Updated: sssd-common-pac-1.15.2-29.el7.x86_64
May 15 07:58:02 Updated: sssd-ipa-1.15.2-29.el7.x86_64
May 15 07:58:07 Updated: sssd-krb5-1.15.2-29.el7.x86_64
May 15 07:58:07 Updated: sssd-ldap-1.15.2-29.el7.x86_64
May 15 07:58:08 Updated: sssd-proxy-1.15.2-29.el7.x86_64
May 15 07:58:10 Updated: sssd-ad-1.15.2-29.el7.x86_64
May 15 07:58:41 Updated: sssd-1.15.2-29.el7.x86_64
# cat /etc/sssd/sssd.conf | grep provider
id_provider = ad
auth_provider = krb5
# systemctl status sssd
● sssd.service - System Security Services Daemon
Loaded: loaded (/usr/lib/systemd/system/sssd.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/sssd.service.d
└─journal.conf
Active: active (running) since Mon 2017-05-15 08:26:14 EDT; 17h ago
Main PID: 8516 (sssd)
CGroup: /system.slice/sssd.service
├─8516 /usr/sbin/sssd -i -f
├─8517 /usr/libexec/sssd/sssd_be --domain EXAMPLE.COM --uid 0 --gid 0 --debug-to-files
├─8518 /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files
└─8519 /usr/libexec/sssd/sssd_pam --uid 0 --gid 0 --debug-to-files
# getent passwd administrator
administrator:*:217800500:217800513:Administrator:/home/EXAMPLE.COM/administrator:/bin/bash
# id administrator
uid=217800500(administrator) gid=217800513 groups=217800513
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2294
Description of problem:
The customer reports that SSSD was updated from RHEL 7.2 to RHEL 7.3 as part of a yum update; since then, sssd_be has crashed every time they attempted to start sssd.service.

From /var/log/messages:
Nov 4 12:26:47 koror abrt-hook-ccpp: Process 30558 (sssd_be) of user 0 killed by SIGSEGV - dumping core
Nov 4 12:26:47 koror abrt-hook-ccpp: Process 30561 (sssd_be) of user 0 killed by SIGSEGV - ignoring (repeated crash)
Nov 4 12:26:49 koror abrt-hook-ccpp: Process 30599 (sssd_be) of user 0 killed by SIGSEGV - ignoring (repeated crash)
Nov 4 12:26:53 koror abrt-hook-ccpp: Process 30941 (sssd_be) of user 0 killed by SIGSEGV - ignoring (repeated crash)

From yum.log:
Nov 04 12:22:18 Updated: sssd-common-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-krb5-common-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-common-pac-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-ad-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-ipa-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-ldap-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-krb5-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-proxy-1.14.0-43.el7.x86_64
Nov 04 12:22:18 Updated: sssd-1.14.0-43.el7.x86_64

Version-Release number of selected component (if applicable):
sssd-1.14.0-43.el7.x86_64 (Fri Nov 4 12:22:18 2016)

Additional info:
From the core dump:
Core was generated by `/usr/libexec/sssd/sssd_be --domain redecorp --uid 0 --gid 0 --debug-to-files'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007fa2766492a2 in ad_subdom_reinit (subdoms_ctx=subdoms_ctx@entry=0x7fa286512aa0) at src/providers/ad/ad_subdomains.c:626
626 canonicalize = dp_opt_get_bool(
(gdb) backtrace
#0 0x00007fa2766492a2 in ad_subdom_reinit (subdoms_ctx=subdoms_ctx@entry=0x7fa286512aa0) at src/providers/ad/ad_subdomains.c:626
#1 0x00007fa27664afb3 in ad_subdomains_init (mem_ctx=<optimized out>, be_ctx=0x7fa2864b9540, ad_id_ctx=<optimized out>, dp_methods=0x7fa2865186a0) at src/providers/ad/ad_subdomains.c:1528
#2 0x00007fa2857552d6 in dp_target_run_constructor (be_ctx=0x7fa2864b9540, target=0x7fa2864d4bc0) at src/providers/data_provider/dp_targets.c:246
#3 dp_target_init (target=0x7fa2864d4bc0, modules=0x7fa2864d3840, provider=0x7fa2864d3b60, be_ctx=0x7fa2864b9540) at src/providers/data_provider/dp_targets.c:358
#4 dp_load_targets (modules=0x7fa2864d3840, targets=0x7fa2864d4a70, provider=0x7fa2864d3b60, be_ctx=0x7fa2864b9540) at src/providers/data_provider/dp_targets.c:484
#5 dp_init_targets (mem_ctx=mem_ctx@entry=0x7fa2864d3b60, be_ctx=be_ctx@entry=0x7fa2864b9540, provider=provider@entry=0x7fa2864d3b60, modules=0x7fa2864d3840) at src/providers/data_provider/dp_targets.c:530
#6 0x00007fa28575466b in dp_init (ev=0x7fa2864b0c10, be_ctx=be_ctx@entry=0x7fa2864b9540, uid=<optimized out>, gid=0) at src/providers/data_provider/dp.c:120
#7 0x00007fa28574cc77 in be_process_init (mem_ctx=<optimized out>, be_domain=0x7fa2864aa220 "redecorp", uid=<optimized out>, gid=0, ev=0x7fa2864b0c10, cdb=0x7fa2864b2170) at src/providers/data_provider_be.c:450
#8 0x00007fa28574ba59 in main (argc=8, argv=<optimized out>) at src/providers/data_provider_be.c:562
(gdb) frame 1
#1 0x00007fa27664afb3 in ad_subdomains_init (mem_ctx=<optimized out>, be_ctx=0x7fa2864b9540, ad_id_ctx=<optimized out>, dp_methods=0x7fa2865186a0) at src/providers/ad/ad_subdomains.c:1528
1528 ret = ad_subdom_reinit(sd_ctx);
(gdb) list
1523 DEBUG(SSSDBG_CRIT_FAILURE, "Unable to setup ptask "
1524 "[%d]: %s\n", ret, sss_strerror(ret));
1525 /* Ignore, responders will trigger refresh from time to time. */
1526 }
1527
1528 ret = ad_subdom_reinit(sd_ctx);
1529 if (ret != EOK) {
1530 DEBUG(SSSDBG_MINOR_FAILURE, "Could not reinitialize subdomains. "
1531 "Users from trusted domains might not be resolved correctly\n");
1532 /* Ignore this error and try to discover the subdomains later */