Bug 696620 - Crash in retrieval_done of OnTheWeb calendar
Summary: Crash in retrieval_done of OnTheWeb calendar
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: evolution-data-server
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Matthew Barnes
QA Contact: Desktop QE
URL:
Whiteboard: abrt_hash:2369bb13aef59cce0502ddc7f5e...
Depends On:
Blocks:
 
Reported: 2011-04-14 12:54 UTC by Jiri Koten
Modified: 2014-01-02 10:48 UTC
CC List: 3 users

Fixed In Version: evolution-data-server-2.32.3-5.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-21 05:01:36 UTC
Target Upstream Version:
Embargoed:


Attachments
File: event_log (4.28 KB, text/plain)
2011-04-14 12:54 UTC, Jiri Koten
no flags Details
File: backtrace (21.69 KB, text/plain)
2011-04-14 12:54 UTC, Jiri Koten
no flags Details
File: smaps (172.91 KB, text/plain)
2011-04-14 12:54 UTC, Jiri Koten
no flags Details
File: dsos (32.85 KB, text/plain)
2011-04-14 12:54 UTC, Jiri Koten
no flags Details
File: maps (42.98 KB, text/plain)
2011-04-14 12:54 UTC, Jiri Koten
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2013:1540 0 normal SHIPPED_LIVE Low: evolution security, bug fix, and enhancement update 2013-11-21 00:40:51 UTC

Description Jiri Koten 2011-04-14 12:54:16 UTC
abrt version: 2.0.0
uid: 500
package: evolution-data-server-2.28.3-15.el6
architecture: x86_64
executable: /usr/libexec/evolution-data-server-2.28
time: 1302781512
kernel: 2.6.32-131.0.1.el6.x86_64
username: test6
cmdline: /usr/libexec/evolution-data-server-2.28 --oaf-activate-iid=OAFIID:GNOME_Evolution_DataServer_CalFactory:1.2 --oaf-ior-fd=25
reason: Process /usr/libexec/evolution-data-server-2.28 was killed by signal 11 (SIGSEGV)
os_release: Red Hat Enterprise Linux Workstation release 6.1 Beta (Santiago)
component: evolution-data-server

Text file: event_log, 4385 bytes
Text file: backtrace, 22213 bytes
Text file: smaps, 177063 bytes
Text file: dsos, 33636 bytes
Text file: maps, 44007 bytes
Binary file: coredump, 36204544 bytes

comment
-----
Crash happened when I clicked on the Clock applet and the calendar was displayed.
In evolution I have 2 Webcal/iCal calendars.

Comment 1 Jiri Koten 2011-04-14 12:54:18 UTC
Created attachment 492091 [details]
File: event_log

Comment 2 Jiri Koten 2011-04-14 12:54:20 UTC
Created attachment 492092 [details]
File: backtrace

Comment 3 Jiri Koten 2011-04-14 12:54:23 UTC
Created attachment 492093 [details]
File: smaps

Comment 4 Jiri Koten 2011-04-14 12:54:26 UTC
Created attachment 492094 [details]
File: dsos

Comment 5 Jiri Koten 2011-04-14 12:54:28 UTC
Created attachment 492095 [details]
File: maps

Comment 6 Milan Crha 2011-05-02 07:30:01 UTC
Where the issue occurs, according to the backtrace:

Thread 1 (Thread 0x7fd1a7f537c0 (LWP 7674)):
#0  __strlen_sse2 () at ../sysdeps/x86_64/strlen.S:31
#1  0x00000038e0c58fc2 in IA__g_strdup (str=0x1 <Address 0x1 out of bounds>)
     at gstrfuncs.c:101
#2  0x00007fd1a4d36df1 in retrieval_done (session=<value optimized out>,
     msg=<value optimized out>, cbhttp=0x7fd1940068a0 [ECalBackendHttp]) 
     at e-cal-backend-http.c:458
        uid = 0x1 <Address 0x1 out of bounds>
        comp = 0x7fd1940044a0 [ECalComponent]
#3  0x00000033b2c36e8d in final_finished (req=0x19d7900 [SoupMessage], 
        user_data=0x19d88c0) at soup-session-async.c:383

452 comps_in_cache = e_cal_backend_store_get_components (priv->store);
453 while (comps_in_cache != NULL) {
454    const gchar *uid;
455    ECalComponent *comp = comps_in_cache->data;
456
457    e_cal_component_get_uid (comp, &uid);
458    g_hash_table_insert (old_cache, g_strdup (uid), e_cal_c...ring (comp));
459
460    comps_in_cache = g_slist_remove (comps_in_cache, comps_in_cache->data);
461    g_object_unref (comp);
462 }

Thanks for the bug report. I cannot reproduce it myself, and the backtrace doesn't show the thread interleaving that I would expect to be behind this issue, thus I'm asking for some kind of reproducer.

As I understand it, this could be either some memory corruption or a use-after-free issue. The reproducer surely depends on the local cache for the On The Web calendar and on the changes received from the server. The stored component has an invalid memory pointer for its UID property, which causes the crash. It might also be easier to reproduce this under valgrind (and to provide a valgrind trace of the memory issue), though if this is also timing-related, then valgrind's memory checking can make the issue disappear. The valgrind command may look like this:
   $ G_SLICE=always-malloc valgrind --num-callers=50 \
        /usr/libexec/evolution-data-server-2.28 &>log.txt
(make sure you run it while no other evolution-data-server process is running).

Your local calendar cache is stored in .evolution/cache/calendar/; it might be better to copy it somewhere for later testing (to keep the initial state of the cache and the exact changes from the server, and to see whether the issue is reproducible with them). I expect that some events were removed on the update and the component was then kept in the cache by mistake, or it was unreferenced incorrectly. Hard to tell at the moment.
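
Purely for illustration of the crash mechanism (a hypothetical stand-alone sketch, not code from evolution-data-server): if the get-uid call returns without setting its output argument for a corrupted component, the uninitialized pointer is handed to g_strdup(), which runs strlen() on a garbage address such as 0x1 and segfaults, as in frames #0-#2 above. The demo_get_uid() helper below is an invented stand-in; initializing uid to NULL and checking it before the copy would avoid the crash, though not the underlying cache problem.

   /* Hypothetical stand-alone demo; build with:
    *   gcc demo.c $(pkg-config --cflags --libs glib-2.0) -o demo */
   #include <glib.h>

   /* Stand-in for e_cal_component_get_uid(): for a "corrupted" component
    * it returns without touching the output argument. */
   static void
   demo_get_uid (gboolean corrupted, const gchar **uid)
   {
       if (!corrupted)
           *uid = "event-1234";
   }

   int
   main (void)
   {
       const gchar *uid = (const gchar *) 0x1;  /* simulates stack garbage */
       gchar *copy;

       demo_get_uid (TRUE, &uid);

       /* A defensive variant would initialize uid to NULL above and only
        * copy when uid != NULL.  As written, this mirrors the backtrace:
        * g_strdup() -> strlen() on address 0x1 -> SIGSEGV. */
       copy = g_strdup (uid);
       g_free (copy);

       return 0;
   }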

Comment 7 RHEL Program Management 2011-07-06 00:22:26 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.

Comment 8 Milan Crha 2013-05-07 08:02:10 UTC
Jiri, have you seen this with the latest evolution-data-server version, please?

Comment 9 Jiri Koten 2013-05-09 10:13:02 UTC
I saw a segfault of eds in dmesg when I logged into RHEL 6 after a month without usage. Unfortunately, Abrt discarded the backtrace for whatever reason, therefore I can't compare the backtraces.

IIRC I saw the same segfault also during the RHEL 6.4 testing phase. As you suggested, it seems to depend on a "special state" of the cache and probably on lots of changes on the server side - that's why I hit it after a long time without regular usage.

Another problem is that after the initial crash, eds re-spawns and works fine from then on; I cannot reproduce it on purpose, e.g. kill eds, make some changes in Zimbra, start eds again, go to the Calendar applet - no crash, and events are correctly populated.

Is there a way to bypass the eds autostart during session start, i.e. so that it starts under valgrind?

Also, this bug remains low priority; from the user's point of view the calendar works fine after the initial crash, which they can spot only if Abrt catches it.

Comment 10 Milan Crha 2013-05-09 13:18:16 UTC
(In reply to comment #9)
> Is there a way to bypass the eds autostart during session start, i.e.
> so that it starts under valgrind?

You can disable evolution-alarm-notify in the session startup, or better, replace /usr/libexec/evolution-data-server-2.28 with a script that looks like this:
   #!/bin/bash
   G_SLICE=always-malloc valgrind --num-callers=50 /usr/libexec/evolution-data-server-2.28.orig &>/tmp/eds-`date +%Y%m%d-%H%M%S`-log.txt

(here I renamed the original binary to a .orig file).
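
For completeness, a rough sequence for putting the wrapper in place might be (as root; file names as above):
   # mv /usr/libexec/evolution-data-server-2.28 /usr/libexec/evolution-data-server-2.28.orig
   # (create /usr/libexec/evolution-data-server-2.28 containing the script above)
   # chmod +x /usr/libexec/evolution-data-server-2.28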

> Also, this bug remains low priority; from the user's point of view the calendar
> works fine after the initial crash, which they can spot only if Abrt catches it.

Hehe, ok :)

An upstream bug [1] looks related.

[1] https://bugzilla.gnome.org/show_bug.cgi?id=662068

Comment 12 Milan Crha 2013-06-13 16:07:18 UTC
I added a patch to evolution-data-server-2.32.3-5, which contains multiple fixes that should, together, address this bug report.

The added upstream commits are:

   Avoid crash in e-cal-backend-http.c:webcal_to_http_method
   7d00b444443233ced4629a32749e7e5085c61720

   Bug 662068 - Crash in e-cal-backend-http.c:retrieval_done
   608fae262c7421257ef1a4d5b62724b2e24d40d5

   Fix a memory leak
   Inspired by commit 66c2c5c1fc045d0c2b81554f91f95ac0edb30d18

   Fix an issue found by Coverity Scan
   b9ad01ba73afb402f194c2f66a90432e47003091

Comment 20 errata-xmlrpc 2013-11-21 05:01:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1540.html

