Description of problem:
sssd is configured with AD as the access and id provider; AD has a huge number of users/groups and group re-mapping in place. I finally came up with an sssd.conf that returns sensible values for groups, but not without causing this dump first. The dump is triggered by running 'id <ad-userid>'.

Version-Release number of selected component:
sssd-common-1.11.6-1.fc20

Additional info:
reporter:        libreport-2.2.3
backtrace_rating: 4
cmdline:         /usr/libexec/sssd/sssd_be --domain ******.******.edu --debug-to-files
crash_function:  talloc_abort
executable:      /usr/libexec/sssd/sssd_be
kernel:          3.15.7-200.fc20.x86_64
runlevel:        N 5
type:            CCpp
uid:             0

Truncated backtrace:
Thread no. 1 (10 frames)
 #2 talloc_abort at ../talloc.c:338
 #3 talloc_abort_access_after_free at ../talloc.c:357
 #4 talloc_chunk_from_ptr at ../talloc.c:378
 #6 talloc_get_name at ../talloc.c:1353
 #7 _talloc_get_type_abort at ../talloc.c:1406
 #8 sdap_nested_group_process_done at src/providers/ldap/sdap_async_nested_groups.c:935
 #9 _tevent_req_error at ../tevent_req.c:135
 #11 sdap_nested_group_process_done at src/providers/ldap/sdap_async_nested_groups.c:975
 #12 _tevent_req_error at ../tevent_req.c:135
 #13 sdap_nested_group_deref_direct_done at src/providers/ldap/sdap_async_nested_groups.c:2282

Potential duplicate: bug 908759
Created attachment 923983 [details] File: backtrace
Created attachment 923984 [details] File: cgroup
Created attachment 923985 [details] File: core_backtrace
Created attachment 923986 [details] File: dso_list
Created attachment 923987 [details] File: environ
Created attachment 923988 [details] File: limits
Created attachment 923989 [details] File: maps
Created attachment 923990 [details] File: open_fds
Created attachment 923991 [details] File: proc_pid_status
Created attachment 923992 [details] File: var_log_messages
Pavel, could this be the bug you fixed the other day? The one where we didn't return from a function after calling tevent_req_done (or tevent_req_error, I'm not sure off the top of my head).
Hello Jakub, I'm not sure; it's definitely in a relevant part of the code, but it's hard to say. Hello Mark, are you able to replicate the problem easily? Would you consider setting debug_level=0xfff0 in the relevant domain section of sssd.conf and providing us with the logs (/var/log/sssd/)? I would also like to prepare a scratch build with the patch Jakub mentioned above. Would you be willing to test whether it resolves the issue for you?
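For reference, the debug setting requested above goes into the domain section of /etc/sssd/sssd.conf, roughly like this ("example.com" is a placeholder for the actual domain name, which is censored in the cmdline above):

```ini
# /etc/sssd/sssd.conf -- "example.com" is a placeholder domain name
[domain/example.com]
debug_level = 0xfff0

# restart sssd afterwards so the new level takes effect, e.g.:
#   systemctl restart sssd
```

The resulting debug output lands in /var/log/sssd/sssd_<domain>.log.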
A new version of sssd, 1.11.7-2.fc20, is available in updates-testing for Fedora 20. It contains lots of bug fixes. Can you reproduce the crash with this package?
I'm unable to recreate this problem. I have been pulled off the project that had access to the AD data in question.
We don't have a coredump or any log files, and you are not able to reproduce this crash. I will close this bug as insufficient_data. Feel free to reopen it with more data.
I am reopening this ticket. We identified the problem upstream, so we can fix it in Fedora as well.
Upstream ticket: https://fedorahosted.org/sssd/ticket/2531
sssd-1.11.7-5.fc20 has been submitted as an update for Fedora 20. https://admin.fedoraproject.org/updates/sssd-1.11.7-5.fc20
Package sssd-1.11.7-5.fc20:
* should fix your issue,
* was pushed to the Fedora 20 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing sssd-1.11.7-5.fc20'
as soon as you are able to. Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2015-1449/sssd-1.11.7-5.fc20
then log in and leave karma (feedback).
sssd-1.11.7-5.fc20 has been pushed to the Fedora 20 stable repository. If problems still persist, please make note of it in this bug report.