Description of problem:
Crash while adding a Kerberos account in Online Accounts.
Version-Release number of selected component:
runlevel: N 5
Potential duplicate: bug 1272252
Created attachment 1320080 [details]
Created attachment 1320081 [details]
Created attachment 1320082 [details]
Created attachment 1320083 [details]
Created attachment 1320084 [details]
Created attachment 1320085 [details]
Created attachment 1320086 [details]
Created attachment 1320087 [details]
Created attachment 1320088 [details]
Created attachment 1320089 [details]
Created attachment 1320090 [details]
Thanks for the bug report. I see from the backtrace that evolution-source-registry received a property change over D-Bus related to a GOA account, while most of the other threads were stuck (waiting for processing) in a goa_client_new_sync() call.
The thing is that this crashed in gio/GDBus code inside glib2, which evolution has no control over, so I'm moving this to glib2 for further investigation.
*** Bug 1489650 has been marked as a duplicate of this bug. ***
Out of interest, what configured GOA account types are shown in Settings->Online Accounts for you, please? I have one Nextcloud account and one Google account there; the Nextcloud account has all options enabled, while the Google account has only mail, contacts, and calendar enabled, with all the rest disabled. I do not have any other accounts configured there.
Sorry, I don't remember exactly, but I think it was two Google accounts (syncing everything) and a Facebook account, and it crashed while I was adding a Kerberos account.
I am unable to reproduce the crash now, though.
Jan's backtrace shows a property change notification for an account. Does your GOA configuration contain that account (account_1504024844_0)? If it does, what is the Provider key there, please?
Similarly for Matthew, where the account is account_1499893425_1.
I do not know the internals of GOA, but it looks like the goa-daemon decided to notify about some changes for some account (in Matthew's case just after login), which evolution-source-registry noticed too, because a GoaObject was involved; that may, eventually, point to a forgotten signal handler. But it's only a wild guess.
Looking at attachment 1320080 [details], it seems that threads 3 to 9 are all trying to synchronously create a GoaClient instance. Would it be possible to create one instance and share it across the whole process? That might make it easier to track things like this, because we'd be dealing with the state of one GoaClient object instead of many.
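The suggestion above — one shared client instead of a fresh synchronous client per lookup — amounts to a thread-safe lazy singleton. Here is a minimal Python sketch of the pattern; GoaClientStub and get_shared_client() are hypothetical stand-ins for the real GoaClient and goa_client_new_sync(), which involve a D-Bus round trip in C.

```python
import threading

class GoaClientStub:
    """Stand-in for the real GoaClient; constructing one is assumed
    expensive (a synchronous D-Bus round trip in goa_client_new_sync())."""
    instances_created = 0

    def __init__(self):
        GoaClientStub.instances_created += 1

_client = None
_client_lock = threading.Lock()

def get_shared_client():
    """Return one process-wide client, creating it lazily on first use.

    Every caller then observes the state of a single object, which is
    easier to reason about than many short-lived clients, each with its
    own GDBusObjectManagerClient and signal subscriptions."""
    global _client
    with _client_lock:
        if _client is None:
            _client = GoaClientStub()
        return _client
```

With this pattern, the threads that each called goa_client_new_sync() in the backtrace would instead all block briefly on one lock and share a single instance.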
Every GoaClient has a GDBusObjectManagerClient.
Looking at thread #1 in attachment 1320080 [details], it seems that GDBusObjectManagerClient::signal_cb is being invoked with an invalid GDBusObjectManagerClient object (i.e., user_data/manager). That's the only way for the GHashTable to be invalid. However, I don't see any CRITICALs in the session logs from a failed cast.
g_dbus_connection_signal_subscribe() keeps track of the thread-default GMainContext at the time of its invocation, so that the callback (i.e., signal_cb) is invoked on that same context. In this case, signal_cb is crashing in thread #1. So, does e-d-s have another GoaClient on thread #1?
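The behavior described above — the context being captured at subscription time, not at emission time — can be sketched with a toy dispatcher. This is not the GDBus API; Connection, signal_subscribe(), and iterate_context() are illustrative names, with each thread-default context modeled as a queue of pending callbacks.

```python
import queue
import threading

# Each thread's "thread-default context" is modeled as a queue of pending
# callbacks, loosely mimicking how g_dbus_connection_signal_subscribe()
# captures the thread-default GMainContext.
_thread_default = threading.local()

def get_thread_default_context():
    if not hasattr(_thread_default, "ctx"):
        _thread_default.ctx = queue.Queue()
    return _thread_default.ctx

class Connection:
    def __init__(self):
        self._subscriptions = []

    def signal_subscribe(self, callback):
        # The context is captured NOW, at subscription time...
        ctx = get_thread_default_context()
        self._subscriptions.append((ctx, callback))

    def emit(self, payload):
        # ...so an emission from ANY thread queues each callback back onto
        # the context that was current when its owner subscribed.
        for ctx, callback in self._subscriptions:
            ctx.put(lambda cb=callback, p=payload: cb(p))

def iterate_context(ctx):
    """Run one pending dispatch, like one g_main_context_iteration()."""
    ctx.get_nowait()()
```

The point for this bug: signal_cb always runs on the context that was thread-default when the GoaClient subscribed, so a crash in thread #1 suggests the subscription was made with thread #1's context.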
(In reply to Debarshi Ray from comment #19)
> I don't see any CRITICALs in the session logs from a failed cast.
I've noticed it's pretty common that the log doesn't contain the most recent messages. It can be something about flushing disk buffers before ABRT copies the file, but that's only my personal guess. Or it may be something about the user session and root access to those files; I do not recall precisely, I just heard something about it from the ABRT developers in the semi-distant past.
> g_dbus_connection_signal_subscribe keeps track of the thread-default
> GMainContext at the time of its invocation, so that the callback (ie.
> signal_cb) is invoked on the same context. In this case, signal_cb is
> crashing from thread #1. So, does e-d-s have another GoaClient that's on
> thread #1?
As far as I can tell, it's only e_goa_password_based_lookup_sync() calling goa_client_new_sync(). While the backtrace shows that it's always the same EGoaPasswordBased object, it doesn't share the GoaClient object; it creates one on demand and frees it immediately afterwards. Looking at the backtrace of thread 3 and the related functions, none of them sets a thread-default main context, thus GDBus uses the main context from the main() thread, as expected.
Couldn't that also mean the problem is in GDBus? Looking into gdbusobjectmanagerclient.c, it unsubscribes from the signal in finalize(), not in dispose(); doing it in dispose() might be better.
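To illustrate why dispose-time unsubscription can be safer, here is a toy Python sketch (all names hypothetical, not the GObject API): unsubscribing in dispose() severs the connection's reference to the object as soon as teardown begins, so a late signal emission cannot reach half-destroyed state, which is roughly the failure mode the invalid GHashTable suggests.

```python
class Bus:
    """Minimal stand-in for a D-Bus connection's signal handler table."""
    def __init__(self):
        self._handlers = {}
        self._next_id = 1

    def subscribe(self, callback):
        sid = self._next_id
        self._next_id += 1
        self._handlers[sid] = callback
        return sid

    def unsubscribe(self, sid):
        self._handlers.pop(sid, None)

    def emit(self, payload):
        for callback in list(self._handlers.values()):
            callback(payload)

class ManagerClient:
    """Toy stand-in for GDBusObjectManagerClient. If unsubscription only
    happens in finalize(), there is a window during teardown in which the
    bus can still call back into an object whose internal state (e.g. the
    hash table from the crash) is already being destroyed. Unsubscribing
    in dispose() closes that window."""
    def __init__(self, bus):
        self._bus = bus
        self._table = {}  # stand-in for the GHashTable seen in the crash
        self._sid = bus.subscribe(self._on_signal)

    def _on_signal(self, payload):
        self._table[payload] = True

    def dispose(self):
        # Break external references early, GObject-dispose style.
        self._bus.unsubscribe(self._sid)
        self._table = None
```

After dispose(), a further emit() simply finds no handler, instead of invoking a callback against freed state.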
By the way, do you think the way the evolution-data-server code handles the password-based GOA object is conceptually wrong? I think I can change it, maybe even for the cases when the GOA daemon dies, but I'm afraid it would just hide the issue, wherever it is.
I made some changes on the evolution-data-server side to reuse the GoaClient object when possible. It does not necessarily fix this particular crash, but I guess it'll make things less evil. The change is for 3.27.1+ and 3.26.1+.
This message is a reminder that Fedora 26 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 26. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version' of '26'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.
Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 26 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version prior to this bug being closed as described in the policy above.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Fedora 26 changed to end-of-life (EOL) status on 2018-05-29. Fedora 26
is no longer maintained, which means that it will not receive any
further security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this bug.
Thank you for reporting this bug and we are sorry it could not be fixed.