Bug 175777 - [RHEL3] Vfolder creation, Application "evolution" (process 13367) has crashed due to a fatal error. (Segmentation fault)
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: evolution
Version: 3.0
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Matthew Barnes
Depends On:
Blocks:
Reported: 2005-12-14 16:04 EST by Suzanne Hillman
Modified: 2007-11-30 17:07 EST (History)
CC List: 0 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2007-10-03 10:41:57 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Suzanne Hillman 2005-12-14 16:04:30 EST
Description of problem:
Creating a specific vfolder causes: Application "evolution" (process 13367) has
crashed due to a fatal error. (Segmentation fault)

Version-Release number of selected component (if applicable):
evolution-1.4.5-17

How reproducible:
Always

Steps to Reproduce:
1. Start Evolution
2. Create a Vfolder, named 'myself', with 'sender contains' 'zebra' (my username
for POP and IMAP), and sources being all local and active remote folders.
3. Hit 'Ok'.
  
Actual results:
Application "evolution" (process 13367) has crashed due to a fatal error.
(Segmentation fault)

Expected results:
No seg fault!

Additional info:
Also happens if I pick 'all local folders'. Doesn't seem to matter what I call
the Vfolder, either (tried with 'testing' as well). 'All active remote folders'
did not cause it. And it doesn't happen if I'm not basing it on 'zebra' ('is'
also triggers it, with zebra).

Seems to be specific to zebra as the match term, and to local folders.
Comment 1 Suzanne Hillman 2005-12-14 16:15:08 EST
(from gdb)

(gdb) t a a bt
 
Thread 5 (Thread 1115699568 (LWP 13983)):
#0  0x0000002a98972439 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/tls/libpthread.so.0
#1  0x0000002a95b0aa25 in e_msgport_wait (mp=0x2a9f32ff30) at e-msgport.c:305
#2  0x0000002a95b0b295 in thread_dispatch (din=0x2a9f32ffa4) at e-msgport.c:665
#3  0x0000002a9896fc64 in start_thread () from /lib64/tls/libpthread.so.0
#4  0x0000002a993dc243 in thread_start () from /lib64/tls/libc.so.6
#5  0x0000000000000000 in ?? ()
 
Thread 4 (Thread 1105209712 (LWP 13982)):
#0  0x0000002a98972439 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/tls/libpthread.so.0
#1  0x0000002a95b0aa25 in e_msgport_wait (mp=0x650e70) at e-msgport.c:305
#2  0x0000002a95b0b295 in thread_dispatch (din=0x64a6b4) at e-msgport.c:665
#3  0x0000002a9896fc64 in start_thread () from /lib64/tls/libpthread.so.0
#4  0x0000002a993dc243 in thread_start () from /lib64/tls/libc.so.6
#5  0x0000000000000000 in ?? ()
 
Thread 3 (Thread 1094719856 (LWP 13981)):
#0  0x0000002a991ad6bd in g_hash_table_foreach ()
   from /usr/lib64/libglib-2.0.so.0
#1  0x0000002a9dba3e01 in vee_folder_build_folder (vf=0x6aba38,
    source=0x6bcb90, ex=0x0) at camel-vee-folder.c:1156
#2  0x0000002a9dba297c in camel_vee_folder_set_folders (vf=0x6aba38,
    folders=0x0) at camel-vee-folder.c:501
#3  0x0000002a9d9da340 in vfolder_setup_do (mm=0x2a9f32a4eb)
    at mail-vfolder.c:127
#4  0x0000002a9d9cf481 in mail_msg_received (e=0x2a9f32a4eb, msg=0xbe3080,
    data=0x0) at mail-mt.c:503
#5  0x0000002a95b0b0e2 in thread_received_msg (e=0x651ab0, m=0xbe3080)
    at e-msgport.c:617
#6  0x0000002a95b0b20e in thread_dispatch (din=0x2a9f32a4eb) at e-msgport.c:698
#7  0x0000002a9896fc64 in start_thread () from /lib64/tls/libpthread.so.0
#8  0x0000002a993dc243 in thread_start () from /lib64/tls/libc.so.6
#9  0x0000000000000000 in ?? ()
 
Thread 2 (Thread 1084230000 (LWP 13980)):
#0  0x0000002a98972439 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/tls/libpthread.so.0
#1  0x0000002a95b0aa25 in e_msgport_wait (mp=0x650e70) at e-msgport.c:305
#2  0x0000002a95b0b295 in thread_dispatch (din=0x64a6b4) at e-msgport.c:665
#3  0x0000002a9896fc64 in start_thread () from /lib64/tls/libpthread.so.0
#4  0x0000002a993dc243 in thread_start () from /lib64/tls/libc.so.6
#5  0x0000000000000000 in ?? ()
 
Thread 1 (Thread 182989934432 (LWP 13973)):
#0  0x0000002a993d3f8c in poll () from /lib64/tls/libc.so.6
#1  0x0000002a991b77cd in g_main_loop_get_context ()
   from /usr/lib64/libglib-2.0.so.0
#2  0x0000002a991b6c9a in g_main_context_dispatch ()
   from /usr/lib64/libglib-2.0.so.0
#3  0x0000002a991b739a in g_main_loop_run () from /usr/lib64/libglib-2.0.so.0
#4  0x0000002a96bfebcb in bonobo_main () from /usr/lib64/libbonobo-2.so.0
#5  0x0000000000451634 in main (argc=0, argv=0x7fbfff83f8) at main.c:637
(gdb)
(gdb) bt
#0  0x0000002a991ad6bd in g_hash_table_foreach ()
   from /usr/lib64/libglib-2.0.so.0
#1  0x0000002a9dba3e01 in vee_folder_build_folder (vf=0x6aba38,
    source=0x6bcb90, ex=0x0) at camel-vee-folder.c:1156
#2  0x0000002a9dba297c in camel_vee_folder_set_folders (vf=0x6aba38,
    folders=0x0) at camel-vee-folder.c:501
#3  0x0000002a9d9da340 in vfolder_setup_do (mm=0x2a9f32a4eb)
    at mail-vfolder.c:127
#4  0x0000002a9d9cf481 in mail_msg_received (e=0x2a9f32a4eb, msg=0xbe3080,
    data=0x0) at mail-mt.c:503
#5  0x0000002a95b0b0e2 in thread_received_msg (e=0x651ab0, m=0xbe3080)
    at e-msgport.c:617
#6  0x0000002a95b0b20e in thread_dispatch (din=0x2a9f32a4eb) at e-msgport.c:698
#7  0x0000002a9896fc64 in start_thread () from /lib64/tls/libpthread.so.0
#8  0x0000002a993dc243 in thread_start () from /lib64/tls/libc.so.6
#9  0x0000000000000000 in ?? ()
(gdb)
Comment 2 Suzanne Hillman 2005-12-15 14:46:29 EST
At least based on an install of U6, this looks to be a regression. Of course,
for all I know, I did something obscure to cause it to happen in U7, which I
managed to avoid doing in U6. For now, setting to regression.
Comment 4 Dave Malcolm 2006-01-06 16:47:24 EST
Unable to reproduce on a fresh RHEL3 U7 Beta install on x86_64.
Also tried but was unable to reproduce on a fresh RHEL3 U6 install on x86_64.

I was able to reproduce this on the original machine (which I believe has now
been reinstalled).  From what I remember, when stepping through the
vee_folder_build_folder call, g_hash_table_foreach was crashing as it walked its
internal data; something seemed invalid.
Comment 5 Dave Malcolm 2006-01-06 19:35:27 EST
(Seems to help if you also create a POP account)

Crash is here:

void
g_hash_table_foreach (GHashTable *hash_table,
		      GHFunc	  func,
		      gpointer	  user_data)
{
  GHashNode *node;
  gint i;
  
  g_return_if_fail (hash_table != NULL);
  g_return_if_fail (func != NULL);
  
  for (i = 0; i < hash_table->size; i++)
    for (node = hash_table->nodes[i]; node; node = node->next)
      (* func) (node->key, node->value, user_data);
}

Dump of assembler code for function g_hash_table_foreach:
0x0000002a991ac670 <g_hash_table_foreach+0>:	push   %r14
0x0000002a991ac672 <g_hash_table_foreach+2>:	mov    %rdi,%r14
0x0000002a991ac675 <g_hash_table_foreach+5>:	push   %r13
0x0000002a991ac677 <g_hash_table_foreach+7>:	push   %r12
0x0000002a991ac679 <g_hash_table_foreach+9>:	mov    %rsi,%r12
0x0000002a991ac67c <g_hash_table_foreach+12>:	push   %rbp
0x0000002a991ac67d <g_hash_table_foreach+13>:	mov    %rdx,%rbp
0x0000002a991ac680 <g_hash_table_foreach+16>:	push   %rbx
0x0000002a991ac681 <g_hash_table_foreach+17>:	sub    $0x10,%rsp
0x0000002a991ac685 <g_hash_table_foreach+21>:	test   %rdi,%rdi
0x0000002a991ac688 <g_hash_table_foreach+24>:	je     0x2a991ac719
<g_hash_table_foreach+169>
0x0000002a991ac68e <g_hash_table_foreach+30>:	test   %rsi,%rsi
0x0000002a991ac691 <g_hash_table_foreach+33>:	je     0x2a991ac6de
<g_hash_table_foreach+110>
0x0000002a991ac693 <g_hash_table_foreach+35>:	mov    (%rdi),%ecx
0x0000002a991ac695 <g_hash_table_foreach+37>:	xor    %r13d,%r13d
0x0000002a991ac698 <g_hash_table_foreach+40>:	cmp    %ecx,%r13d
0x0000002a991ac69b <g_hash_table_foreach+43>:	jge    0x2a991ac6d1
<g_hash_table_foreach+97>
0x0000002a991ac69d <g_hash_table_foreach+45>:	data16
0x0000002a991ac69e <g_hash_table_foreach+46>:	data16
0x0000002a991ac69f <g_hash_table_foreach+47>:	nop    
0x0000002a991ac6a0 <g_hash_table_foreach+48>:	mov    0x8(%r14),%rax
0x0000002a991ac6a4 <g_hash_table_foreach+52>:	movslq %r13d,%rdx
0x0000002a991ac6a7 <g_hash_table_foreach+55>:	mov    (%rax,%rdx,8),%rbx
0x0000002a991ac6ab <g_hash_table_foreach+59>:	test   %rbx,%rbx
0x0000002a991ac6ae <g_hash_table_foreach+62>:	je     0x2a991ac6c9
<g_hash_table_foreach+89>
0x0000002a991ac6b0 <g_hash_table_foreach+64>:	mov    0x8(%rbx),%rsi
0x0000002a991ac6b4 <g_hash_table_foreach+68>:	mov    (%rbx),%rdi
0x0000002a991ac6b7 <g_hash_table_foreach+71>:	mov    %rbp,%rdx
0x0000002a991ac6ba <g_hash_table_foreach+74>:	callq  *%r12d
0x0000002a991ac6bd <g_hash_table_foreach+77>:	mov    0x10(%rbx),%rbx
0x0000002a991ac6c1 <g_hash_table_foreach+81>:	test   %rbx,%rbx
0x0000002a991ac6c4 <g_hash_table_foreach+84>:	jne    0x2a991ac6b0
<g_hash_table_foreach+64>
0x0000002a991ac6c6 <g_hash_table_foreach+86>:	mov    (%r14),%ecx
0x0000002a991ac6c9 <g_hash_table_foreach+89>:	inc    %r13d
0x0000002a991ac6cc <g_hash_table_foreach+92>:	cmp    %ecx,%r13d
0x0000002a991ac6cf <g_hash_table_foreach+95>:	jl     0x2a991ac6a0
<g_hash_table_foreach+48>
0x0000002a991ac6d1 <g_hash_table_foreach+97>:	add    $0x10,%rsp
0x0000002a991ac6d5 <g_hash_table_foreach+101>:	pop    %rbx
0x0000002a991ac6d6 <g_hash_table_foreach+102>:	pop    %rbp
0x0000002a991ac6d7 <g_hash_table_foreach+103>:	pop    %r12
0x0000002a991ac6d9 <g_hash_table_foreach+105>:	pop    %r13
0x0000002a991ac6db <g_hash_table_foreach+107>:	pop    %r14
0x0000002a991ac6dd <g_hash_table_foreach+109>:	retq   
0x0000002a991ac6de <g_hash_table_foreach+110>:	lea    166887(%rip),%rax        #
0x2a991d52cc
0x0000002a991ac6e5 <g_hash_table_foreach+117>:	lea    172763(%rip),%r9        #
0x2a991d69c7 <days_in_year+1575>
0x0000002a991ac6ec <g_hash_table_foreach+124>:	mov    $0x22b,%r8d
0x0000002a991ac6f2 <g_hash_table_foreach+130>:	mov    %rax,(%rsp)
0x0000002a991ac6f6 <g_hash_table_foreach+134>:	lea    172554(%rip),%rcx        #
0x2a991d6907 <days_in_year+1383>
0x0000002a991ac6fd <g_hash_table_foreach+141>:	lea    165564(%rip),%rdx        #
0x2a991d4dc0
0x0000002a991ac704 <g_hash_table_foreach+148>:	mov    $0x8,%esi
0x0000002a991ac709 <g_hash_table_foreach+153>:	lea    165244(%rip),%rdi        #
0x2a991d4c8c
0x0000002a991ac710 <g_hash_table_foreach+160>:	xor    %eax,%eax
0x0000002a991ac712 <g_hash_table_foreach+162>:	callq  0x2a991a0560
0x0000002a991ac717 <g_hash_table_foreach+167>:	jmp    0x2a991ac6d1
<g_hash_table_foreach+97>
0x0000002a991ac719 <g_hash_table_foreach+169>:	lea    172479(%rip),%rax        #
0x2a991d68df <days_in_year+1343>
0x0000002a991ac720 <g_hash_table_foreach+176>:	lea    172704(%rip),%r9        #
0x2a991d69c7 <days_in_year+1575>
0x0000002a991ac727 <g_hash_table_foreach+183>:	mov    $0x22a,%r8d
0x0000002a991ac72d <g_hash_table_foreach+189>:	mov    %rax,(%rsp)
0x0000002a991ac731 <g_hash_table_foreach+193>:	jmp    0x2a991ac6f6
<g_hash_table_foreach+134>

(gdb) p $pc
$37 = (void (*)()) 0x2a991ac6bd <g_hash_table_foreach+77>
(gdb) p $rbx
$38 = 0
(gdb) p hash_table->nodes[i]
$40 = (GHashNode *) 0x79f258
(gdb) p *hash_table->nodes[i]
$42 = {key = 0x2a9f24aea9, value = 0x1, next = 0x0}
(gdb) p node
$43 = (GHashNode *) 0x0

(Seems to be doing the node=node->next without testing that node is non-NULL first)
glib2-2.2.3-2.0
evolution-1.4.5-17
Comment 6 Dave Malcolm 2006-01-06 19:40:43 EST
No, it's crashing immediately after the first call to the callback
(folder_added_uid); perhaps something in that callback is overwriting the hash
table data?
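A general way to catch such clobbering in the act (a hedged sketch of standard
gdb commands, not a transcript from this session):

```
(gdb) frame 1        # select the g_hash_table_foreach frame from the backtrace
(gdb) watch node     # hardware watchpoint on the local variable
(gdb) continue       # execution stops at the write that changes node
```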
Comment 7 Dave Malcolm 2006-01-06 20:05:15 EST
Yes: the local variable "node" within g_hash_table_foreach becomes NULL
during/upon return from the call to folder_added_uid:

Breakpoint 2, folder_added_uid (uidin=0x2a9f266c59 "36", value=0x1, 
    u=0x91fe38) at camel-vee-folder.c:1052
(gdb) up
#1  0x0000002a991ac6bd in g_hash_table_foreach (hash_table=0x2a9f239830, 
    func=0x2a9da6dbe0 <folder_added_uid>, user_data=0x414016b0) at ghash.c:559
(gdb) p node
$44 = (GHashNode *) 0x91fe38
(gdb) down
#0  folder_added_uid (uidin=0x2a9f266c59 "36", value=0x1, u=0x91fe38)
    at camel-vee-folder.c:1052
(gdb) finish
Run till exit from #0  folder_added_uid (uidin=0x2a9f266c59 "36", value=0x1, 
    u=0x91fe38) at camel-vee-folder.c:1052
g_hash_table_foreach (hash_table=0x2a9f239830, 
    func=0x2a9da6dbe0 <folder_added_uid>, user_data=0x414016b0) at ghash.c:558
(gdb) p node
$45 = (GHashNode *) 0x0
(gdb) step

Program received signal SIGSEGV, Segmentation fault.
g_hash_table_foreach (hash_table=0x2a9f239830, 
    func=0x2a9da6dbe0 <folder_added_uid>, user_data=0x414016b0) at ghash.c:558
(gdb) 
Comment 8 Dave Malcolm 2006-01-09 20:22:49 EST
Reproduced with a fresh install of RHEL3 U6 on x86_64, so I believe this is not
a regression.
Comment 9 Dave Malcolm 2006-01-10 16:46:51 EST
I'm unable to install RHEL3 GOLD or RHEL3 U1 on my test machine, due to a "Your
CPU does not support long mode.  Use a 32bit distribution." error.

I was able to install RHEL3 U2 on my test machine, and was able to successfully
reproduce this bug.  So this is not a regression; marking accordingly.

My recipe for reliably reproducing the bug:
- set up a test user, and run evolution
- set up both an IMAP account and a POP account
- create the vfolder described above
- delete the vfolder described above
- re-create the vfolder described above; upon clicking OK in the vfolder
creation dialog, evolution crashes.
Comment 11 RHEL Product and Program Management 2007-10-03 10:41:57 EDT
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request. 
