Bug 1286839

Summary: [abrt] evolution: free_message_info_data(): evolution killed by SIGSEGV
Product: Fedora
Reporter: Samuel Sieb <samuel-rhbugs>
Component: evolution
Assignee: Milan Crha <mcrha>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 23
CC: lucilanga, mbarnes, mcrha, samuel-rhbugs, tpopela
Hardware: x86_64
OS: Unspecified
URL: https://retrace.fedoraproject.org/faf/reports/bthash/5d06472d0fffdbd5d2ab6a8424fda054b8b0fa15
Whiteboard: abrt_hash:ded73230a933b3b66576eef5be8c74ce03366942
Doc Type: Bug Fix
Last Closed: 2016-01-11 13:57:38 UTC
Attachments (flags: none):
backtrace, cgroup, core_backtrace, dso_list, environ, exploitable,
limits, maps, mountinfo, namespaces, open_fds, proc_pid_status

Description Samuel Sieb 2015-11-30 20:59:25 UTC
Description of problem:
It happened when I closed an email message I was reading.

Version-Release number of selected component:
evolution-3.18.2-1.fc23

Additional info:
reporter:       libreport-2.6.3
backtrace_rating: 4
cmdline:        evolution
crash_function: free_message_info_data
executable:     /usr/bin/evolution
global_pid:     12619
kernel:         4.2.6-300.fc23.x86_64
runlevel:       N 5
type:           CCpp
uid:            1000

Truncated backtrace:
Thread no. 11 (8 frames)
 #0 free_message_info_data at message-list.c:5378
 #1 g_hash_table_foreach at ghash.c:1607
 #2 ml_sort_uids_by_tree at message-list.c:5493
 #3 message_list_regen_thread at message-list.c:5680
 #4 run_in_thread at gsimpleasyncresult.c:898
 #5 io_job_thread at gioscheduler.c:85
 #6 g_task_thread_pool_thread at gtask.c:1287
 #8 g_thread_proxy at gthread.c:778

Comment 1 Samuel Sieb 2015-11-30 20:59:30 UTC
Created attachment 1100629 [details]
File: backtrace

Comment 2 Samuel Sieb 2015-11-30 20:59:31 UTC
Created attachment 1100630 [details]
File: cgroup

Comment 3 Samuel Sieb 2015-11-30 20:59:33 UTC
Created attachment 1100631 [details]
File: core_backtrace

Comment 4 Samuel Sieb 2015-11-30 20:59:34 UTC
Created attachment 1100632 [details]
File: dso_list

Comment 5 Samuel Sieb 2015-11-30 20:59:35 UTC
Created attachment 1100633 [details]
File: environ

Comment 6 Samuel Sieb 2015-11-30 20:59:36 UTC
Created attachment 1100634 [details]
File: exploitable

Comment 7 Samuel Sieb 2015-11-30 20:59:37 UTC
Created attachment 1100635 [details]
File: limits

Comment 8 Samuel Sieb 2015-11-30 20:59:40 UTC
Created attachment 1100636 [details]
File: maps

Comment 9 Samuel Sieb 2015-11-30 20:59:41 UTC
Created attachment 1100637 [details]
File: mountinfo

Comment 10 Samuel Sieb 2015-11-30 20:59:42 UTC
Created attachment 1100638 [details]
File: namespaces

Comment 11 Samuel Sieb 2015-11-30 20:59:44 UTC
Created attachment 1100639 [details]
File: open_fds

Comment 12 Samuel Sieb 2015-11-30 20:59:45 UTC
Created attachment 1100640 [details]
File: proc_pid_status

Comment 13 Milan Crha 2015-12-01 07:32:08 UTC
Thanks for the bug report. I guess this is related to bug #1281331. Could you update evolution-data-server to the version proposed in that bug and report back whether the crash still occurs, please?

Comment 14 Samuel Sieb 2015-12-22 07:46:11 UTC
Unfortunately (or fortunately?), I haven't had this happen since, so it will be hard to verify. If you think it's related, then go ahead and close it. In the unlikely case that it happens again, I'll reopen it.

Comment 15 Milan Crha 2016-01-11 13:57:38 UTC
Thanks for the update. I'm closing this for now, but as you said, if you manage to reproduce the crash, feel free to reopen this or file a new bug report. Thanks in advance.