Bug 1380266

Summary: [abrt] evolution-data-server: e_soap_response_from_xmldoc(): evolution-calendar-factory-subprocess killed by SIGSEGV
Product: Fedora
Reporter: Krzysztof Troska <elleander86>
Component: evolution-data-server
Assignee: Milan Crha <mcrha>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 24
CC: mbarnes, mcrha
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Unspecified   
URL: https://retrace.fedoraproject.org/faf/reports/bthash/31f329c835293c9799b3f13f569602cd81b2a356
Whiteboard: abrt_hash:ef81c66ee8a09e6929063b534cfca1bf34f53672;VARIANT_ID=workstation;
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-06 10:36:46 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments (all flags: none):
File: backtrace
File: cgroup
File: core_backtrace
File: dso_list
File: environ
File: exploitable
File: limits
File: maps
File: mountinfo
File: namespaces
File: open_fds
File: proc_pid_status
File: var_log_messages

Description Krzysztof Troska 2016-09-29 07:04:55 UTC
Version-Release number of selected component:
evolution-data-server-3.20.5-3.fc24

Additional info:
reporter:       libreport-2.7.2
backtrace_rating: 4
cmdline:        /usr/libexec/evolution-calendar-factory-subprocess --factory ews --bus-name org.gnome.evolution.dataserver.Subprocess.Backend.Calendarx2487x2 --own-path /org/gnome/evolution/dataserver/Subprocess/Backend/Calendar/2487/2
crash_function: e_soap_response_from_xmldoc
executable:     /usr/libexec/evolution-calendar-factory-subprocess
global_pid:     2547
kernel:         4.7.3-200.fc24.x86_64
pkg_fingerprint: 73BD E983 81B4 6521
pkg_vendor:     Fedora Project
runlevel:       N 5
type:           CCpp
uid:            1000

Truncated backtrace:
Thread no. 1 (9 frames)
 #0 e_soap_response_from_xmldoc at e-soap-response.c:233
 #1 e_soap_response_new_from_xmldoc at e-soap-response.c:142
 #2 e_soap_message_parse_response at e-soap-message.c:1179
 #3 ews_response_cb at e-ews-connection.c:804
 #4 soup_session_process_queue_item at soup-session.c:2056
 #5 async_run_queue at soup-session.c:2095
 #6 idle_run_queue at soup-session.c:2129
 #11 e_ews_soup_thread at e-ews-connection.c:1732
 #12 g_thread_proxy at gthread.c:780

Potential duplicate: bug 1215317

Comment 1 Krzysztof Troska 2016-09-29 07:05:01 UTC
Created attachment 1205819 [details]
File: backtrace

Comment 2 Krzysztof Troska 2016-09-29 07:05:03 UTC
Created attachment 1205820 [details]
File: cgroup

Comment 3 Krzysztof Troska 2016-09-29 07:05:04 UTC
Created attachment 1205821 [details]
File: core_backtrace

Comment 4 Krzysztof Troska 2016-09-29 07:05:06 UTC
Created attachment 1205822 [details]
File: dso_list

Comment 5 Krzysztof Troska 2016-09-29 07:05:08 UTC
Created attachment 1205823 [details]
File: environ

Comment 6 Krzysztof Troska 2016-09-29 07:05:09 UTC
Created attachment 1205824 [details]
File: exploitable

Comment 7 Krzysztof Troska 2016-09-29 07:05:11 UTC
Created attachment 1205825 [details]
File: limits

Comment 8 Krzysztof Troska 2016-09-29 07:05:13 UTC
Created attachment 1205826 [details]
File: maps

Comment 9 Krzysztof Troska 2016-09-29 07:05:14 UTC
Created attachment 1205827 [details]
File: mountinfo

Comment 10 Krzysztof Troska 2016-09-29 07:05:16 UTC
Created attachment 1205828 [details]
File: namespaces

Comment 11 Krzysztof Troska 2016-09-29 07:05:17 UTC
Created attachment 1205829 [details]
File: open_fds

Comment 12 Krzysztof Troska 2016-09-29 07:05:19 UTC
Created attachment 1205830 [details]
File: proc_pid_status

Comment 13 Krzysztof Troska 2016-09-29 07:05:20 UTC
Created attachment 1205831 [details]
File: var_log_messages

Comment 14 Milan Crha 2016-10-05 11:01:41 UTC
Thanks for the bug report. I see ABRT found a possible already-filed bug #1215317, which looks very similar. The problem with it is that that report has no real resolution. I can see from the backtrace that the crash happened in the calendar factory while it was serving one of your evolution-ews calendars, but nothing more. If you have any insight, any detail about what was happening with the machine, the connection, or anything else, it would be helpful. I do not recall seeing this myself in the past, though my EWS account doesn't have much activity.

Comment 15 Krzysztof Troska 2016-10-06 07:19:24 UTC
Nope, sorry, no idea. I was trying to break it, but it just won't break in a way I can reproduce. It just randomly pops up in my problem reporting, and lately I don't even get that.

My only idea is that Office 365 (which my company uses for EWS accounts) is returning some weird data. It does sometimes happen that I go to the web-based client and it shows random errors; maybe there aren't enough sanity checks on the SOAP responses?

Comment 16 Milan Crha 2016-10-06 10:36:46 UTC
Thanks for the update. It looks to me like some sort of use-after-free, because the related code is this:

229	if (xml_body != NULL) {
230		if (strcmp ((const gchar *) xml_body->name, "Header") == 0) {
231			/* read header parameters */
232			parse_parameters (response, xml_body);
233			xml_body = soup_xml_real_node (xml_body->next);
234		}

where the place of the crash is line 233, which dereferences xml_body (by accessing its xml_body->next member). The backtrace shows that xml_body is NULL, thus, if gdb is correct, the crash doesn't make sense: line 233 can only be reached when xml_body is not NULL, which is tested on line 229. From my point of view, something wrote to memory it shouldn't have, and the damage surfaced here. Such an issue can strike in (semi-)random places, depending on the actual memory content.
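The "impossible NULL" reasoning above can be illustrated with a minimal stand-in for the guarded advance. Here `node` and `real_node()` are simplified, hypothetical substitutes for libxml2's `xmlNode` and `soup_xml_real_node()` (the real function skips comment/text nodes), not the actual API:

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for libxml2's xmlNode. */
typedef struct node {
    const char *name;
    struct node *next;
} node;

/* Stand-in for soup_xml_real_node(); identity here for simplicity. */
static node *real_node(node *n) {
    return n;
}

/* Mirrors the guarded advance at e-soap-response.c lines 229-233. */
static node *skip_header(node *xml_body) {
    if (xml_body != NULL) {                              /* line 229 */
        if (strcmp(xml_body->name, "Header") == 0) {
            /* line 233: dereferencing xml_body->next is safe here,
             * because the line-229 guard proved xml_body != NULL... */
            xml_body = real_node(xml_body->next);
        }
    }
    /* ...so xml_body being NULL *at* line 233 implies something else
     * corrupted it between the check and the use. */
    return xml_body;
}
```

Under normal control flow the function either advances past a Header node or returns its input unchanged; only external memory corruption could make the line-233 dereference fault.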

Tools like valgrind can help identify such issues, but their side effect is a significantly slower run (due to all the memory checking), and thus also a change in timing, which can prevent issues that depend on "proper timing" from occurring at all.
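For reference, a typical way to chase this kind of heap corruption in a GNOME component is to stop the running factory and start it by hand under valgrind, with GLib's slice allocator disabled so every allocation is visible to the tool. This is only a sketch; the factory path is an assumption based on this report's cmdline, and must be adjusted for the actual install:

```shell
# Standard GLib debug switches: use plain malloc for slices and scrub
# freed memory, so valgrind sees every allocation and stale read.
export G_SLICE=always-malloc
export G_DEBUG=gc-friendly

# Hypothetical invocation: only attempted when valgrind and the factory
# binary are actually present on this machine.
if command -v valgrind >/dev/null 2>&1 && [ -x /usr/libexec/evolution-calendar-factory ]; then
    valgrind --tool=memcheck --track-origins=yes --num-callers=30 \
        /usr/libexec/evolution-calendar-factory 2>valgrind.log
fi
```

With --track-origins=yes, memcheck reports where an uninitialised or freed value originally came from, which is exactly what is needed to find the write that damaged xml_body.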

I'm closing this for now, but feel free to update here if you find anything interesting, or simply ask if you'd like to help with something (no need to reopen the bug report; I receive notifications for closed bugs too).