Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and have "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 1221759

Summary: [abrt] Crash under book_backend_finalize, g_mutex_clear
Product: Red Hat Enterprise Linux 7
Reporter: Vadim Rutkovsky <vrutkovs>
Component: evolution-data-server
Assignee: Matthew Barnes <mbarnes>
Status: CLOSED WONTFIX
QA Contact: Desktop QE <desktop-qa-list>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 7.2
CC: mcrha, tpelka, vbenes, vrutkovs
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
URL: http://faf-report.itos.redhat.com/reports/bthash/b03ae3c84c39d3086dec36e001d1d97aaff976c0/
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-09 17:35:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vadim Rutkovsky 2015-05-14 18:39:47 UTC
Crashed after stopping gdm service

Report URL: http://faf-report.itos.redhat.com/reports/bthash/b03ae3c84c39d3086dec36e001d1d97aaff976c0/

Comment 1 Milan Crha 2015-05-15 11:14:51 UTC
Thanks for the bug report. The [faf] backtrace looks like:
   raise
   abort
   g_mutex_clear
   book_backend_finalize
   g_object_unref
   data_book_view_dispose
   g_object_unref
   e_book_backend_remove_view
   impl_DataBookView_dispose

(In reply to Vadim Rutkovsky from comment #0)
> Crashed after stopping gdm service

What do you mean with this, please? Can you reproduce it after certain steps?

The backtrace suggests that a book backend was open in the addressbook factory with a running view, and that the view was finally closing itself. This close caused a crash, most likely due to a reference-counting imbalance in the code, leading to a use-after-free in book_backend_finalize(). That means some applications that had the book open might have been involved at the time of the "stopping gdm service" action. evolution-alarm-notify can also be involved, through evolution-calendar-factory with the Birthdays & Anniversaries calendar, if it is configured to show reminders about birthdays and anniversaries.

Comment 2 Vadim Rutkovsky 2015-05-15 11:29:54 UTC
(In reply to Milan Crha from comment #1)
> (In reply to Vadim Rutkovsky from comment #0)
> > Crashed after stopping gdm service
> 
> What do you mean with this, please? Can you reproduce it after certain steps?

It's reproducible during a massive run of EDS unit tests, each of which is started sequentially within dogtail-run-headless: <gdm startup> - <unit test runs> - <gdm is stopped>. The crash happens during one of those cycles; I'll check the logs again to see whether there is a specific pattern.

This isn't likely to happen frequently on real systems, but it is nevertheless possible: Fedora users have hit this at least 4 times, see https://retrace.fedoraproject.org/faf/reports/507034/

> evolution-alarm-notify can also be involved, through
> evolution-calendar-factory with the Birthdays & Anniversaries calendar, if
> it is configured to show reminders about birthdays and anniversaries.

Right, I'll check those cases, as they seem the most prone to this crash.

Comment 3 Milan Crha 2015-05-21 14:51:04 UTC
I tried to reproduce this, just in case, but had no luck. I confess I don't have much idea what to try. I also ran evolution-addressbook-factory under valgrind, but that didn't show anything. My valgrind command was:
 $ G_SLICE=always-malloc valgrind /usr/libexec/evolution-addressbook-factory -w
then wait a bit and, once the CPU usage goes back down, run your tests.

Comment 4 Milan Crha 2016-01-14 16:43:47 UTC
Vadim, any update on this, please? I have not seen any crash in the FAF report for two months, but that might not mean much. Did you manage to check the logs?

Comment 5 Milan Crha 2016-08-16 15:38:45 UTC
*** Bug 1367392 has been marked as a duplicate of this bug. ***

Comment 6 Milan Crha 2017-03-09 17:05:00 UTC
*** Bug 1349750 has been marked as a duplicate of this bug. ***

Comment 7 Milan Crha 2017-03-09 17:35:36 UTC
As I wrote in bug #1349750: I do not have any relevant upstream bug reports, nor any fixes I can think of. I see that this happened only once since it was filed, so I consider it pretty hard to reproduce; unless we get a good reproducer, I'm afraid I cannot do anything about it.

Comment 8 Red Hat Bugzilla Rules Engine 2017-03-09 17:35:41 UTC
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.