Description of problem:
I do not know, it just crashed

Version-Release number of selected component:
sssd-common-2.2.2-3.fc31

Additional info:
reporter:         libreport-2.11.3
backtrace_rating: 4
cgroup:           0::/system.slice/sssd.service
cmdline:          /usr/libexec/sssd/sssd_be --domain implicit_files --uid 0 --gid 0 --logger=files
crash_function:   dp_client_register
executable:       /usr/libexec/sssd/sssd_be
journald_cursor:  s=a7fa74212aae4cfc8040a6b34ab268c8;i=16da;b=cc63c4befafc43ea801971a845c4ac64;m=42aa18c;t=597d7ca1eebc6;x=4eaac48583ca5150
kernel:           5.3.11-300.fc31.x86_64
rootdir:          /
runlevel:         N 5
type:             CCpp
uid:              0

Truncated backtrace:
Thread no. 1 (10 frames)
 #0 dp_client_register at src/providers/data_provider/dp_client.c:107
 #1 _sbus_sss_invoke_in_s_out__step at src/sss_iface/sbus_sss_invokers.c:682
 #2 tevent_common_invoke_timer_handler at ../../tevent_timed.c:370
 #3 tevent_common_loop_timer_delay at ../../tevent_timed.c:442
 #4 epoll_event_loop_once at ../../tevent_epoll.c:922
 #5 std_event_loop_once at ../../tevent_standard.c:110
 #6 _tevent_loop_once at ../../tevent.c:772
 #7 tevent_common_loop_wait at ../../tevent.c:895
 #8 std_event_loop_wait at ../../tevent_standard.c:141
 #9 server_loop at src/util/server.c:721

Potential duplicate: bug 1684824
Created attachment 1638838 [details] File: backtrace
Created attachment 1638839 [details] File: core_backtrace
Created attachment 1638840 [details] File: cpuinfo
Created attachment 1638841 [details] File: dso_list
Created attachment 1638842 [details] File: environ
Created attachment 1638843 [details] File: exploitable
Created attachment 1638844 [details] File: limits
Created attachment 1638845 [details] File: maps
Created attachment 1638846 [details] File: mountinfo
Created attachment 1638847 [details] File: open_fds
Created attachment 1638848 [details] File: proc_pid_status
Created attachment 1638849 [details] File: var_log_messages
This can be reproduced by postponing backend startup:

--- a/src/providers/data_provider_be.c
+++ b/src/providers/data_provider_be.c
@@ -702,6 +702,8 @@ int main(int argc, const char *argv[])
     uid_t uid;
     gid_t gid;
 
+    sleep(5);
+
     struct poptOption long_options[] = {
         POPT_AUTOHELP
         SSSD_MAIN_OPTS

There is a race condition when a responder connects to the data provider before the connection function is set, i.e. sometime between sbus_server_create_and_connect_send and dp_init_done.
It is a race condition:

- dp_init_send
  - sbus_server_create_and_connect_send
    - sbus_server_create (*)
- dp_init_done (callback for sbus_server_create_and_connect_send)
  - sbus_server_create_and_connect_recv
  - sbus_server_set_on_connection (sets client data and creates dp_cli)

At (*) the sbus server is already created and accepts new connections once we get into the tevent loop. So it is possible that a client connects to the server before sbus_server_set_on_connection is called, and thus the client is not properly initialized.

However, this should not happen on a normal start, because providers are started before responders; it can happen only if data provider startup is somehow delayed.

Flávio, do you have any sssd logs from this crash? Thank you.
I do not know... How do I find those?
(In reply to Flávio Schefer from comment #15)
> I do not know... How do I find those?

Hi,

please add 'debug_level = 9' to the [domain/...] section in /etc/sssd/sssd.conf and restart SSSD. The logs can be found in /var/log/sssd/. If you are interested, you can find more details in https://docs.pagure.org/SSSD.sssd/users/troubleshooting.html.

HTH

bye,
Sumit
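Concretely, the change suggested above would look like this (the domain name `implicit_files` is taken from the cmdline in the crash report; substitute your own domain section):

```ini
# /etc/sssd/sssd.conf
[domain/implicit_files]
debug_level = 9
```

Then restart the service with `systemctl restart sssd` and look for the resulting log files under /var/log/sssd/.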
*** Bug 1785707 has been marked as a duplicate of this bug. ***
*** Bug 1684824 has been marked as a duplicate of this bug. ***
*** Bug 1770467 has been marked as a duplicate of this bug. ***
*** Bug 1768670 has been marked as a duplicate of this bug. ***
*** Bug 1793450 has been marked as a duplicate of this bug. ***
*** Bug 1808849 has been marked as a duplicate of this bug. ***
*** Bug 1822745 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

for some reason i leave my computers root partition is filling up when i am logged in

reporter:         libreport-2.12.0
backtrace_rating: 4
cgroup:           0::/system.slice/sssd.service
cmdline:          /usr/libexec/sssd/sssd_be --domain implicit_files --uid 0 --gid 0 --logger=files
crash_function:   dp_client_register
executable:       /usr/libexec/sssd/sssd_be
journald_cursor:  s=dd23ba7da38f49cf939e8bee5ee2453c;i=1e11;b=778e80a92aa24301b6b85bc4ad76fc73;m=a2717ee96;t=5a3a55ecf10ac;x=29abd41eb4e243e
kernel:           5.5.17-200.fc31.x86_64
package:          sssd-common-2.2.3-13.fc31
reason:           sssd_be killed by SIGSEGV
rootdir:          /
runlevel:         N 5
type:             CCpp
uid:              0
*** Bug 1826238 has been marked as a duplicate of this bug. ***
*** Bug 1773758 has been marked as a duplicate of this bug. ***
FEDORA-2020-63a418c824 has been submitted as an update to Fedora 32. https://bodhi.fedoraproject.org/updates/FEDORA-2020-63a418c824
This is still an issue and was included in the update by accident.
Upstream ticket: https://github.com/SSSD/sssd/issues/5298
Upstream PR: https://github.com/SSSD/sssd/pull/5299
Pushed PR: https://github.com/SSSD/sssd/pull/5299

* `master`
  * 4a84f8e18ea5604ac7e69849dee492718fd96296 - dp: fix potential race condition in provider's sbus server
*** Bug 1883228 has been marked as a duplicate of this bug. ***
Pushed PR: https://github.com/SSSD/sssd/pull/5344

* `master`
  * 7fbcaa8feeb968711ff52f51705c45062fd81394 - be: remove accidental sleep
*** Bug 1886436 has been marked as a duplicate of this bug. ***
FEDORA-2020-5a1603a348 has been submitted as an update to Fedora 32. https://bodhi.fedoraproject.org/updates/FEDORA-2020-5a1603a348
FEDORA-2020-5a1603a348 has been pushed to the Fedora 32 testing repository.
In a short time you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-5a1603a348`

You can provide feedback for this update here:
https://bodhi.fedoraproject.org/updates/FEDORA-2020-5a1603a348

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2020-5a1603a348 has been pushed to the Fedora 32 stable repository. If problem still persists, please make note of it in this bug report.