Bug 1029248

Summary: [abrt] evolution-3.10.1-1.fc20: e_table_header_get_column_by_spec: Process /usr/bin/evolution was killed by signal 11 (SIGSEGV)
Product: Fedora
Reporter: Flóki Pálsson <flokip>
Component: evolution
Assignee: Matthew Barnes <mbarnes>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 20
CC: flokip, lucilanga, mbarnes, mcrha
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Unspecified
URL: https://retrace.fedoraproject.org/faf/reports/bthash/48f3eabc936c6a18a92f4a8d56da4ba7ee0ada0f
Whiteboard: abrt_hash:61a9366bc3c7110fba4ac67db1fb11128d740ffc
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-15 19:15:51 UTC
Type: ---
Attachments (all without flags):
  backtrace
  cgroup
  core_backtrace
  dso_list
  environ
  exploitable
  limits
  maps
  open_fds
  proc_pid_status
  var_log_messages

Description Flóki Pálsson 2013-11-12 00:34:13 UTC
Description of problem:
starting evolution

Version-Release number of selected component:
evolution-3.10.1-1.fc20

Additional info:
reporter:       libreport-2.1.9
backtrace_rating: 4
cmdline:        evolution
crash_function: e_table_header_get_column_by_spec
executable:     /usr/bin/evolution
kernel:         3.11.7-300.fc20.x86_64
runlevel:       N 5
type:           CCpp
uid:            1000

Truncated backtrace:
Thread no. 1 (7 frames)
 #0 e_table_header_get_column_by_spec at e-table-header.c:487
 #1 ml_sort_uids_by_tree at message-list.c:5216
 #2 message_list_regen_thread at message-list.c:5448
 #3 run_in_thread at gsimpleasyncresult.c:871
 #4 io_job_thread at gioscheduler.c:89
 #5 g_task_thread_pool_thread at gtask.c:1245
 #7 g_thread_proxy at gthread.c:798
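
For context, the crash pattern implied by this backtrace is a by-spec column lookup that comes back NULL (or is handed a NULL header) and is then dereferenced by the message-list sorting code. The standalone C sketch below only models that pattern; SimpleHeader, SimpleColumn and find_column_by_spec are simplified, hypothetical names, not Evolution's actual API:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for a table header and its columns. */
typedef struct {
    const char *spec;       /* column specification string */
    int         model_col;  /* column index in the data model */
} SimpleColumn;

typedef struct {
    SimpleColumn *columns;
    int           n_columns;
} SimpleHeader;

/* Returns the column matching 'spec', or NULL when the header is missing
 * or no column matches -- i.e. a lookup that can legitimately fail. */
static SimpleColumn *
find_column_by_spec (SimpleHeader *header, const char *spec)
{
    int ii;

    if (header == NULL || spec == NULL)
        return NULL;

    for (ii = 0; ii < header->n_columns; ii++) {
        if (strcmp (header->columns[ii].spec, spec) == 0)
            return &header->columns[ii];
    }

    return NULL;
}

int
main (void)
{
    SimpleColumn cols[] = { { "subject", 0 }, { "date", 1 } };
    SimpleHeader header = { cols, 2 };

    /* A saved sort setting referring to a column the view no longer has. */
    SimpleColumn *col = find_column_by_spec (&header, "sender");

    /* Dereferencing 'col' without a NULL check is the kind of mistake
     * that produces a SIGSEGV like the one reported here. */
    if (col == NULL) {
        fprintf (stderr, "sort column not found, falling back to default\n");
        col = &cols[0];
    }

    printf ("sorting by model column %d\n", col->model_col);
    return EXIT_SUCCESS;
}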

Comment 1 Flóki Pálsson 2013-11-12 00:34:20 UTC
Created attachment 822698 [details]
File: backtrace

Comment 2 Flóki Pálsson 2013-11-12 00:34:24 UTC
Created attachment 822699 [details]
File: cgroup

Comment 3 Flóki Pálsson 2013-11-12 00:34:27 UTC
Created attachment 822700 [details]
File: core_backtrace

Comment 4 Flóki Pálsson 2013-11-12 00:34:31 UTC
Created attachment 822701 [details]
File: dso_list

Comment 5 Flóki Pálsson 2013-11-12 00:34:34 UTC
Created attachment 822702 [details]
File: environ

Comment 6 Flóki Pálsson 2013-11-12 00:34:38 UTC
Created attachment 822703 [details]
File: exploitable

Comment 7 Flóki Pálsson 2013-11-12 00:34:41 UTC
Created attachment 822704 [details]
File: limits

Comment 8 Flóki Pálsson 2013-11-12 00:34:45 UTC
Created attachment 822705 [details]
File: maps

Comment 9 Flóki Pálsson 2013-11-12 00:34:48 UTC
Created attachment 822706 [details]
File: open_fds

Comment 10 Flóki Pálsson 2013-11-12 00:34:51 UTC
Created attachment 822707 [details]
File: proc_pid_status

Comment 11 Flóki Pálsson 2013-11-12 00:34:54 UTC
Created attachment 822708 [details]
File: var_log_messages

Comment 12 Milan Crha 2013-11-13 11:24:58 UTC
Thanks for the bug report. Does this crash on every Evolution start for you, or did it happen only once? It may also depend on which folder you are in, or which folder you enter. According to the backtrace, the code tries to apply the sorting you have set up on the folder and fails while getting the configured data column.
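
As an illustration of the defensive handling this explanation points toward (not the actual Evolution fix, which is not known from this report), code applying a folder's saved sort order could skip any saved key whose column can no longer be resolved instead of dereferencing a missing column. A minimal standalone C sketch with hypothetical names (SavedSortKey, resolve_sort_column, available_columns) follows:

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the folder's saved sort settings and the
 * columns actually present in the current view. */
typedef struct {
    const char *spec;
    int         ascending;
} SavedSortKey;

static const char *available_columns[] = { "subject", "date", "size" };

/* Resolve a saved sort key against the available columns; returns the
 * column index, or -1 when the saved spec no longer matches anything. */
static int
resolve_sort_column (const char *spec)
{
    size_t ii;

    for (ii = 0; ii < sizeof (available_columns) / sizeof (available_columns[0]); ii++) {
        if (spec != NULL && strcmp (available_columns[ii], spec) == 0)
            return (int) ii;
    }

    return -1;
}

int
main (void)
{
    /* A saved sort order that includes a column the view no longer has. */
    SavedSortKey keys[] = { { "sender", 1 }, { "date", 0 } };
    size_t ii;

    for (ii = 0; ii < sizeof (keys) / sizeof (keys[0]); ii++) {
        int col = resolve_sort_column (keys[ii].spec);

        if (col < 0) {
            /* Skipping the stale key instead of dereferencing a missing
             * column is the defensive behaviour hinted at above. */
            fprintf (stderr, "ignoring stale sort key '%s'\n", keys[ii].spec);
            continue;
        }

        printf ("sort by column %d (%s), %s\n", col, keys[ii].spec,
                keys[ii].ascending ? "ascending" : "descending");
    }

    return 0;
}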

Comment 13 Flóki Pálsson 2013-11-13 18:59:18 UTC
(In reply to Milan Crha from comment #12)
> Thanks for the bug report. Does this crash on every Evolution start for
> you, or did it happen only once? It may also depend on which folder you
> are in, or which folder you enter. According to the backtrace, the code
> tries to apply the sorting you have set up on the folder and fails while
> getting the configured data column.

I have only seen this once. I cannot reproduce it. I was updating with yum just before.

Comment 14 Milan Crha 2013-11-15 19:15:51 UTC
Thanks for the update. It is good, and a pity at the same time, that it happened to you only once. I think it is possible that it was more or less related to the update running in the background. Let's close this for now, but please feel free to reopen, or simply file new bugs, if/when you find any.