Bug 854017 - Deadlock starting Evolution (mh-backend)
Status: CLOSED ERRATA
Product: Fedora
Classification: Fedora
Component: evolution-data-server
Version: 17
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: Matthew Barnes
QA Contact: Fedora Extras Quality Assurance
Reported: 2012-09-03 11:01 EDT by Matt Davey
Modified: 2013-05-31 22:28 EDT (History)
CC: 5 users

Fixed In Version: evolution-data-server-3.4.4-5.fc17
Doc Type: Bug Fix
Last Closed: 2013-05-13 13:13:16 EDT
Type: Bug


Attachments
gdb backtrace showing evolution backend deadlock. (20.24 KB, text/plain)
2012-09-03 11:01 EDT, Matt Davey
another backtrace of a lockup (11.15 KB, text/plain)
2013-05-08 05:02 EDT, Matt Davey
another gdb backtrace. (9.78 KB, text/plain)
2013-05-08 05:04 EDT, Matt Davey
eds patch (910 bytes, patch)
2013-05-13 13:07 EDT, Milan Crha

Description Matt Davey 2012-09-03 11:01:03 EDT
Created attachment 609407 [details]
gdb backtrace showing evolution backend deadlock.

Description of problem:
I am getting a persistent deadlock when trying to start Evolution.  This was after an unclean shutdown. Backtrace attached.

Version-Release number of selected component (if applicable):
vbox-mcdavey 101 ~$ rpm -qa | grep evolution 
evolution-data-server-3.4.4-2.fc17.x86_64
evolution-debuginfo-3.4.4-1.fc17.x86_64
evolution-NetworkManager-3.4.3-2.fc17.x86_64
evolution-data-server-debuginfo-3.4.4-2.fc17.x86_64
evolution-3.4.3-2.fc17.x86_64

How reproducible:
Probably not easy to reproduce, but it is persistent for me at the moment.

Steps to Reproduce:
1. run evolution

Actual results:
Evolution window appears.
Status bar says "Updating Search Folders for 'mh' - inbox" and "Opening Folder 'inbox'".  The inbox fails to load.  Evolution cannot exit (hangs).

Expected results:
A functioning mail program :(

Additional info:
See attached gdb backtrace.
Thread 8: camel_store_get_folder_sync, camel_object_bag_reserve, g_cond_wait
Thread 7: summary_assign_uid, .., g_mutex_lock
Thread 6: summary_assign_uid, ... g_mutex_unlock

I've been persisting with Evo for a long time at this stage, and have usually been able to work around these kinds of bugs by deleting the *ibex.index* files and losing all of my metadata, but even that isn't helping now.  lsof shows evolution getting stuck opening the first file in the mh folder.

Any ideas for a workaround, please?
Comment 1 Milan Crha 2012-09-04 03:35:37 EDT
Thanks for the bug report. It seems to me that the UI is responsive and only quitting doesn't happen, because of the activity in the status bar (basically, the application doesn't quit until the status bar is empty). I also see Thread 5 busy populating your summary for the inbox/support folder; maybe when that's done the other threads will have an opportunity to do their job?

Thread 7 is OK; it's waiting for the release of the lock held by Thread 5. Threads 4, 3, 2, and 8 are waiting for Thread 6 to finish, which is populating your inbox folder. Why Thread 6 is left in an unlock I cannot tell; either it's just a matter of luck, as you got the backtrace right within that operation, or something broke with pthread, GMutex or GObject, though that's quite unlikely.

Does CPU usage indicate any activity in evolution, which would suggest it's trying to finish the operation, or is evolution idle, meaning the summary load broke? If this is after a crash, then also move away the folders.db file, not only the text indexes; that way the mh provider will rebuild it from scratch. It would be better not to delete the file, only move it away, and then check with sqlite3 whether it can be opened and whether it's broken. You might also try running some checks on the folders.db file, like:
   $ sqlite3 folders.db "PRAGMA integrity_check"
and probably vacuum it as well:
   $ sqlite3 folders.db "VACUUM"
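The same two checks can be scripted. A minimal sketch using Python's stdlib sqlite3 module (the path is illustrative; point it at the moved-away copy of folders.db):

```python
import sqlite3

def check_summary_db(path):
    """Run the same PRAGMA integrity_check the sqlite3 CLI would,
    then VACUUM if the file looks healthy.  Returns the check result
    ("ok" for a healthy database, error text otherwise)."""
    conn = sqlite3.connect(path)
    try:
        (result,) = conn.execute("PRAGMA integrity_check;").fetchone()
        if result == "ok":
            conn.execute("VACUUM;")
        return result
    finally:
        conn.close()

# Illustrative path; adjust to the moved-away file.
print(check_summary_db("folders.db.bak"))
```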
Comment 2 Matt Davey 2012-09-06 11:38:03 EDT
Thanks Milan,

I deleted folders.db before seeing your reply, and evo has sprung back to life.

I'm still seeing Evolution consuming all available memory and getting killed by oom killer, so I'll add my comments and valgrind info to #842099.
Comment 3 Matt Davey 2012-09-18 10:57:44 EDT
After updating to evolution-data-server-3.4.4-3.fc17.x86_64 I've got a frozen UI.

Also getting a timeout from 'evolution -q'.

The status bar says 'Opening folder inbox/support' and 'Updating search folders for inbox/support'.  I may have got the text a little wrong, because the UI is not redrawing now, so I can't see the status bar.

Here's my backtrace:

(gdb) thread apply all bt

Thread 98 (Thread 0x7fffd60d2700 (LWP 6781)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:166
#1  0x0000003e6da8397f in g_cond_wait (cond=0x7fffd0349270, mutex=<optimized out>) at gthread-posix.c:746
#2  0x000000354d4865ae in camel_object_bag_reserve (bag=0x7fffe42ce880, key=key@entry=0x1a7a120) at camel-object-bag.c:328
#3  0x000000354d4a879c in camel_store_get_folder_sync (store=0x771560 [CamelMhStore], folder_name=folder_name@entry=0x1a7a120 "inbox/support", flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x7fffc400a0c0 [CamelOperation], error=error@entry=0x7fffd60d1b48) at camel-store.c:1795
#4  0x000000354d4a8d09 in store_get_folder_thread (simple=0x7fffc0008640 [GSimpleAsyncResult], object=0x771560 [CamelMhStore], cancellable=0x7fffc400a0c0 [CamelOperation]) at camel-store.c:430
#5  0x00000036e866ce9e in run_in_thread (job=<optimized out>, c=0x7fffc400a0c0 [CamelOperation], _data=0x2ef3930) at gsimpleasyncresult.c:861
#6  0x00000036e865c22e in io_job_thread (data=0x2ef3950, user_data=<optimized out>) at gioscheduler.c:177
#7  0x0000003e6da6ab02 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:309
#8  0x0000003e6da6a305 in g_thread_proxy (data=0x1a54000) at gthread.c:801
#9  0x0000003e6c207d14 in start_thread (arg=0x7fffd60d2700) at pthread_create.c:309
#10 0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 10 (Thread 0x7fffd7cdf700 (LWP 6646)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:166
#1  0x0000003e6da8397f in g_cond_wait (cond=0x7fffd0349270, mutex=<optimized out>) at gthread-posix.c:746
#2  0x000000354d4865ae in camel_object_bag_reserve (bag=0x7fffe42ce880, key=key@entry=0x7fffc839f090) at camel-object-bag.c:328
#3  0x000000354d4a879c in camel_store_get_folder_sync (store=0x771560 [CamelMhStore], folder_name=0x7fffc839f090 "inbox/support", flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x1837aa0 [CamelOperation], error=error@entry=0x16d9b90) at camel-store.c:1795
#4  0x00007fffe9ebdc38 in e_mail_session_uri_to_folder_sync (session=0x65c6c0 [EMailUISession], folder_uri=<optimized out>, flags=(unknown: 0), cancellable=0x1837aa0 [CamelOperation], error=0x16d9b90) at e-mail-session.c:1913
#5  0x00007fffe9ec8eb6 in vfolder_adduri_exec (error=<optimized out>, cancellable=<optimized out>, m=<optimized out>) at mail-vfolder.c:249
#6  vfolder_adduri_exec (m=0x16d9b70, cancellable=0x1837aa0 [CamelOperation], error=0x16d9b90) at mail-vfolder.c:224
#7  0x000000354fc09ff7 in mail_msg_proxy (msg=0x16d9b70) at mail-mt.c:423
#8  0x0000003e6da6ab02 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:309
#9  0x0000003e6da6a305 in g_thread_proxy (data=0x16c5e30) at gthread.c:801
#10 0x0000003e6c207d14 in start_thread (arg=0x7fffd7cdf700) at pthread_create.c:309
#11 0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 7 (Thread 0x7fffd74de700 (LWP 6643)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:166
#1  0x0000003e6da8397f in g_cond_wait (cond=0x7fffd0349270, mutex=<optimized out>) at gthread-posix.c:746
#2  0x000000354d4865ae in camel_object_bag_reserve (bag=0x7fffe42ce880, key=key@entry=0x7fffcc00fa70) at camel-object-bag.c:328
#3  0x000000354d4a879c in camel_store_get_folder_sync (store=0x771560 [CamelMhStore], folder_name=0x7fffcc00fa70 "inbox/support", flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x0, error=error@entry=0x0) at camel-store.c:1795
#4  0x00007fffe9ebdc38 in e_mail_session_uri_to_folder_sync (session=0x65c6c0 [EMailUISession], folder_uri=folder_uri@entry=0x7fffcc009d50 "email://1075307125.22499.0@sirocco.local.corvil.com/inbox/support", flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x0, error=error@entry=0x0)
    at e-mail-session.c:1913
#5  0x00007fffea57d7c3 in receive_get_folder (error=0x0, data=0x1195800, uri=0x7fffcc009d50 "email://1075307125.22499.0@sirocco.local.corvil.com/inbox/support", d=<optimized out>) at mail-send-recv.c:956
#6  receive_get_folder (d=<optimized out>, uri=0x7fffcc009d50 "email://1075307125.22499.0@sirocco.local.corvil.com/inbox/support", data=0x1195800, error=0x0) at mail-send-recv.c:936
#7  0x000000354d448962 in open_folder (folder_url=0x7fffcc009d50 "email://1075307125.22499.0@sirocco.local.corvil.com/inbox/support", driver=0x1116290 [CamelFilterDriver]) at camel-filter-driver.c:1104
#8  open_folder (driver=0x1116290 [CamelFilterDriver], folder_url=0x7fffcc009d50 "email://1075307125.22499.0@sirocco.local.corvil.com/inbox/support") at camel-filter-driver.c:1089
#9  0x000000354d448a38 in do_move (f=<optimized out>, argc=1, argv=<optimized out>, driver=0x1116290 [CamelFilterDriver]) at camel-filter-driver.c:593
#10 0x000000354d49f800 in camel_sexp_term_eval (sexp=sexp@entry=0x118b040 [CamelSExp], term=0x7fffcc011950) at camel-sexp.c:812
#11 0x000000354d49f97f in term_eval_begin (sexp=0x118b040 [CamelSExp], argc=<optimized out>, argv=<optimized out>, data=<optimized out>) at camel-sexp.c:750
#12 0x000000354d49f8e3 in camel_sexp_term_eval (sexp=0x118b040 [CamelSExp], sexp@entry=0x7fffd043779c, term=0x7fffcc0117b0, term@entry=0x0) at camel-sexp.c:802
#13 0x000000354d4a0abd in camel_sexp_eval (sexp=0x118b040 [CamelSExp]) at camel-sexp.c:1730
#14 0x000000354d44a1b9 in camel_filter_driver_filter_message (driver=driver@entry=0x1116290 [CamelFilterDriver], message=<optimized out>, message@entry=0x0, info=0x7fffd03b8280, uid=0x7fffcc004f10 "0a2678049af35a948e3b37ecc1b66528", source=source@entry=0x17ae150 [CamelPOP3Folder], store_uid=store_uid@entry=
    0xdf1b90 "1075309126.23131.0@sirocco.local.corvil.com", original_store_uid=original_store_uid@entry=0xdf1b90 "1075309126.23131.0@sirocco.local.corvil.com", cancellable=cancellable@entry=0x1195400 [CamelOperation], error=error@entry=0x7fffd74ddab8) at camel-filter-driver.c:1709
#15 0x000000354d44a61f in camel_filter_driver_filter_folder (driver=0x1116290 [CamelFilterDriver], folder=folder@entry=0x17ae150 [CamelPOP3Folder], cache=0x7fffcc004bf0, uids=0x12d7b80, remove=1, cancellable=cancellable@entry=0x1195400 [CamelOperation], error=error@entry=0x11b0230) at camel-filter-driver.c:1493
#16 0x00007fffe9ec629e in em_filter_folder_element_exec (m=m@entry=0x11b0210, cancellable=cancellable@entry=0x1195400 [CamelOperation], error=error@entry=0x11b0230) at mail-ops.c:124
#17 0x00007fffe9ec65ae in fetch_mail_exec (m=0x11b0210, cancellable=0x1195400 [CamelOperation], error=0x11b0230) at mail-ops.c:329
#18 0x000000354fc09ff7 in mail_msg_proxy (msg=0x11b0210) at mail-mt.c:423
#19 0x0000003e6da6ab02 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:309
#20 0x0000003e6da6a305 in g_thread_proxy (data=0x12cc1e0) at gthread.c:801
#21 0x0000003e6c207d14 in start_thread (arg=0x7fffd74de700) at pthread_create.c:309
#22 0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 5 (Thread 0x7fffe0d76700 (LWP 6641)):
#0  0x0000003e6c20aa7f in __pthread_mutex_unlock_usercnt (mutex=0x7fffd041e0f0, decr=decr@entry=1) at pthread_mutex_unlock.c:53
#1  0x0000003e6c20aada in __pthread_mutex_unlock (mutex=<optimized out>) at pthread_mutex_unlock.c:298
#2  0x0000003e6da837f8 in g_rec_mutex_unlock (rec_mutex=<optimized out>) at gthread-posix.c:396
#3  0x0000003e6da1c666 in g_static_rec_mutex_unlock (mutex=<optimized out>) at deprecated/gthread-deprecated.c:762
#4  0x000000354d4552d6 in info_set_flags (info=0x7fffd03b80a0, flags=<optimized out>, set=<optimized out>) at camel-folder-summary.c:1103
#5  0x000000354d456061 in summary_assign_uid (summary=summary@entry=0x7fffc4007a40 [CamelMhSummary], info=info@entry=0x7fffd03b80a0) at camel-folder-summary.c:2811
#6  0x000000354d45647d in camel_folder_summary_info_new_from_parser (summary=summary@entry=0x7fffc4007a40 [CamelMhSummary], mp=mp@entry=0x7fffd0004f00 [CamelMimeParser]) at camel-folder-summary.c:3049
#7  0x000000354d456663 in camel_folder_summary_add_from_parser (summary=summary@entry=0x7fffc4007a40 [CamelMhSummary], mp=mp@entry=0x7fffd0004f00 [CamelMimeParser]) at camel-folder-summary.c:2955
#8  0x00007fffe3df06a9 in camel_mh_summary_add (forceindex=1, name=0x7fffd04430e3 "965", cls=0x7fffc4007a40 [CamelMhSummary], cancellable=<optimized out>) at camel-mh-summary.c:180
#9  mh_summary_check (cls=0x7fffc4007a40 [CamelMhSummary], changeinfo=<optimized out>, cancellable=<optimized out>, error=<optimized out>) at camel-mh-summary.c:267
#10 0x00007fffe3de5c7a in camel_local_folder_construct (lf=0x7fffd01eeb80 [CamelMhFolder], flags=flags@entry=0, cancellable=cancellable@entry=0x17c86a0 [CamelOperation], error=error@entry=0x7fffe0d75b78) at camel-local-folder.c:631
#11 0x00007fffe3dee947 in camel_mh_folder_new (parent_store=parent_store@entry=0x771560 [CamelMhStore], full_name=full_name@entry=0x7fffd0424060 "inbox/support", flags=flags@entry=0, cancellable=cancellable@entry=0x17c86a0 [CamelOperation], error=error@entry=0x7fffe0d75b78) at camel-mh-folder.c:249
#12 0x00007fffe3def670 in mh_store_get_folder_sync (store=0x771560 [CamelMhStore], folder_name=0x7fffd0424060 "inbox/support", flags=(unknown: 0), cancellable=0x17c86a0 [CamelOperation], error=0x7fffe0d75b78) at camel-mh-store.c:563
#13 0x000000354d4a88d7 in camel_store_get_folder_sync (store=0x771560 [CamelMhStore], folder_name=<optimized out>, flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x17c86a0 [CamelOperation], error=error@entry=0x7fffe0d75b78) at camel-store.c:1857
#14 0x00007fffe9ebdc38 in e_mail_session_uri_to_folder_sync (session=0x65c6c0 [EMailUISession], folder_uri=folder_uri@entry=0x7fffd000a2c0 "folder://1075307125.22499.0%40sirocco.local.corvil.com/inbox/support", flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x17c86a0 [CamelOperation], 
    error=error@entry=0x7fffe0d75b78) at e-mail-session.c:1913
#15 0x00007fffea57cd5d in refresh_folders_exec (m=0x16d9850, cancellable=0x17c86a0 [CamelOperation], error=<optimized out>) at mail-send-recv.c:1049
#16 0x000000354fc09ff7 in mail_msg_proxy (msg=0x16d9850) at mail-mt.c:423
#17 0x0000003e6da6ab02 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:309
#18 0x0000003e6da6a305 in g_thread_proxy (data=0xd8b540) at gthread.c:801
#19 0x0000003e6c207d14 in start_thread (arg=0x7fffe0d76700) at pthread_create.c:309
#20 0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 4 (Thread 0x7fffeb7fe700 (LWP 6640)):
#0  0x0000003e6bee8bcf in __GI___poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x0000003e6da47964 in g_main_context_poll (n_fds=5, fds=0x7fffdc006870, timeout=-1, context=0x7fffe400e030, priority=<optimized out>) at gmain.c:3440
#2  g_main_context_iterate (context=0x7fffe400e030, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3141
#3  0x0000003e6da47dc2 in g_main_loop_run (loop=0x7fffe400df50) at gmain.c:3340
#4  0x00000036e86c9466 in gdbus_shared_thread_func (user_data=0x7fffe400e000) at gdbusprivate.c:277
#5  0x0000003e6da6a305 in g_thread_proxy (data=0x7fffe400ba80) at gthread.c:801
#6  0x0000003e6c207d14 in start_thread (arg=0x7fffeb7fe700) at pthread_create.c:309
#7  0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 3 (Thread 0x7fffebfff700 (LWP 6639)):
#0  0x0000003e6bee8bcf in __GI___poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x0000003e6da47964 in g_main_context_poll (n_fds=1, fds=0x7fffe40010e0, timeout=-1, context=0x658210, priority=<optimized out>) at gmain.c:3440
#2  g_main_context_iterate (context=0x658210, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3141
#3  0x0000003e6da47dc2 in g_main_loop_run (loop=0x7fffe40010c0) at gmain.c:3340
#4  0x00007ffff03e3b0b in ?? () from /usr/lib64/gio/modules/libdconfsettings.so
#5  0x0000003e6da6a305 in g_thread_proxy (data=0x6998a0) at gthread.c:801
#6  0x0000003e6c207d14 in start_thread (arg=0x7fffebfff700) at pthread_create.c:309
#7  0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 2 (Thread 0x7ffff0e11700 (LWP 6638)):
#0  0x0000003e6bee8bcf in __GI___poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x0000003e6da47964 in g_main_context_poll (n_fds=1, fds=0x7fffec0008c0, timeout=-1, context=0x6de370, priority=<optimized out>) at gmain.c:3440
#2  g_main_context_iterate (context=context@entry=0x6de370, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3141
#3  0x0000003e6da47a84 in g_main_context_iteration (context=0x6de370, may_block=may_block@entry=1) at gmain.c:3207
#4  0x0000003e6da47ad1 in glib_worker_main (data=<optimized out>) at gmain.c:4879
#5  0x0000003e6da6a305 in g_thread_proxy (data=0x699630) at gthread.c:801
#6  0x0000003e6c207d14 in start_thread (arg=0x7ffff0e11700) at pthread_create.c:309
#7  0x0000003e6bef167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 1 (Thread 0x7ffff7fb69c0 (LWP 6628)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:166
#1  0x0000003e6da8397f in g_cond_wait (cond=0x7fffd0349270, mutex=<optimized out>) at gthread-posix.c:746
#2  0x000000354d4865ae in camel_object_bag_reserve (bag=0x7fffe42ce880, key=key@entry=0x2e2af80) at camel-object-bag.c:328
#3  0x000000354d4a879c in camel_store_get_folder_sync (store=0x771560 [CamelMhStore], folder_name=0x2e2af80 "inbox/support", flags=flags@entry=(unknown: 0), cancellable=cancellable@entry=0x0, error=error@entry=0x0) at camel-store.c:1795
#4  0x00007fffeabe03aa in action_mail_message_new_cb (action=<optimized out>, shell_window=<optimized out>) at e-mail-shell-backend.c:215
#5  0x0000003e6e60f664 in g_closure_invoke (closure=0x161e6a0, return_value=return_value@entry=0x0, n_param_values=1, param_values=param_values@entry=0x7fffffffc400, invocation_hint=invocation_hint@entry=0x7fffffffc3a0) at gclosure.c:777
#6  0x0000003e6e6206d8 in signal_emit_unlocked_R (node=node@entry=0x1084ba0, detail=detail@entry=0, instance=instance@entry=0x1606bd0, emission_return=emission_return@entry=0x0, instance_and_params=instance_and_params@entry=0x7fffffffc400) at gsignal.c:3551
#7  0x0000003e6e62866d in g_signal_emit_valist (instance=0x1606bd0, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffc648) at gsignal.c:3300
#8  0x0000003e6e6287c2 in g_signal_emit (instance=instance@entry=0x1606bd0, signal_id=<optimized out>, detail=detail@entry=0) at gsignal.c:3356
#9  0x00000036eaa95d43 in _gtk_action_emit_activate (action=0x1606bd0 [GtkAction]) at gtkaction.c:800
#10 0x0000003e6e60f943 in _g_closure_invoke_va (closure=closure@entry=0x6308b0, return_value=return_value@entry=0x0, instance=instance@entry=0x16303c0, args=args@entry=0x7fffffffca08, n_params=0, param_types=0x0) at gclosure.c:840
#11 0x0000003e6e627d88 in g_signal_emit_valist (instance=0x16303c0, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffca08) at gsignal.c:3211
#12 0x0000003e6e6287c2 in g_signal_emit (instance=<optimized out>, signal_id=<optimized out>, detail=<optimized out>) at gsignal.c:3356
#13 0x0000003e6e60f943 in _g_closure_invoke_va (closure=closure@entry=0x111d270, return_value=return_value@entry=0x0, instance=instance@entry=0x771710, args=args@entry=0x7fffffffcdf8, n_params=0, param_types=0x0) at gclosure.c:840
#14 0x0000003e6e627d88 in g_signal_emit_valist (instance=instance@entry=0x771710, signal_id=signal_id@entry=313, detail=detail@entry=0, var_args=var_args@entry=0x7fffffffcdf8) at gsignal.c:3211
#15 0x0000003e6e628cd0 in g_signal_emit_by_name (instance=0x771710, detailed_signal=0x36ead2036a "clicked") at gsignal.c:3393
#16 0x0000003e6e60f943 in _g_closure_invoke_va (closure=closure@entry=0x1115730, return_value=return_value@entry=0x0, instance=instance@entry=0x786280, args=args@entry=0x7fffffffd1f8, n_params=0, param_types=0x0) at gclosure.c:840
#17 0x0000003e6e627d88 in g_signal_emit_valist (instance=0x786280, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffd1f8) at gsignal.c:3211
#18 0x0000003e6e6287c2 in g_signal_emit (instance=<optimized out>, signal_id=<optimized out>, detail=<optimized out>) at gsignal.c:3356
#19 0x00000036eaabba28 in gtk_real_button_released (button=0x786280 [GtkButton]) at gtkbutton.c:2007
#20 0x0000003e6e60f943 in _g_closure_invoke_va (closure=closure@entry=0x77d530, return_value=return_value@entry=0x0, instance=instance@entry=0x786280, args=args@entry=0x7fffffffd5d8, n_params=0, param_types=0x0) at gclosure.c:840
#21 0x0000003e6e627d88 in g_signal_emit_valist (instance=0x786280, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffd5d8) at gsignal.c:3211
#22 0x0000003e6e6287c2 in g_signal_emit (instance=<optimized out>, signal_id=<optimized out>, detail=detail@entry=0) at gsignal.c:3356
#23 0x00000036eaab9d43 in gtk_button_button_release (widget=<optimized out>, event=<optimized out>) at gtkbutton.c:1842
#24 gtk_button_button_release (widget=<optimized out>, event=<optimized out>) at gtkbutton.c:1834
#25 0x00000036eab7afdf in _gtk_marshal_BOOLEAN__BOXEDv (closure=0x627f40, return_value=0x7fffffffd820, instance=0x786280, args=<optimized out>, marshal_data=<optimized out>, n_params=<optimized out>, param_types=0x627f70) at gtkmarshalers.c:130
#26 0x0000003e6e60f943 in _g_closure_invoke_va (closure=closure@entry=0x627f40, return_value=return_value@entry=0x7fffffffd820, instance=instance@entry=0x786280, args=args@entry=0x7fffffffd9e8, n_params=1, param_types=0x627f70) at gclosure.c:840
#27 0x0000003e6e627d88 in g_signal_emit_valist (instance=0x786280, signal_id=<optimized out>, detail=0, var_args=var_args@entry=0x7fffffffd9e8) at gsignal.c:3211
#28 0x0000003e6e6287c2 in g_signal_emit (instance=instance@entry=0x786280, signal_id=<optimized out>, detail=detail@entry=0) at gsignal.c:3356
#29 0x00000036eaca311e in gtk_widget_event_internal (widget=widget@entry=0x786280 [GtkButton], event=event@entry=0x1c16660) at gtkwidget.c:6380
#30 0x00000036eaca3539 in gtk_widget_event (widget=widget@entry=0x786280 [GtkButton], event=event@entry=0x1c16660) at gtkwidget.c:6037
#31 0x00000036eab78fa6 in propagate_event_up (topmost=<optimized out>, event=<optimized out>, widget=0x786280 [GtkButton]) at gtkmain.c:2390
#32 propagate_event (widget=<optimized out>, event=0x1c16660, captured=<optimized out>, topmost=0x0) at gtkmain.c:2490
#33 0x00000036eab7abb3 in gtk_main_do_event (event=0x1c16660) at gtkmain.c:1713
#34 0x00000036ea648e82 in gdk_event_source_dispatch (source=source@entry=0x662780, callback=<optimized out>, user_data=<optimized out>) at gdkeventsource.c:358
#35 0x0000003e6da47695 in g_main_dispatch (context=0x65f540) at gmain.c:2539
#36 g_main_context_dispatch (context=context@entry=0x65f540) at gmain.c:3075
#37 0x0000003e6da479c8 in g_main_context_iterate (context=0x65f540, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3146
#38 0x0000003e6da47dc2 in g_main_loop_run (loop=0xa30350) at gmain.c:3340
#39 0x00000036eab79f75 in gtk_main () at gtkmain.c:1161
#40 0x0000000000403010 in main (argc=1, argv=0x7fffffffde88) at main.c:696
Comment 4 Matt Davey 2012-09-18 11:01:14 EDT
I should also have mentioned that I had a spontaneous exit from Evolution earlier, not sure why, so I may have a corrupt folders.db again :( :( :(

I have several folders.db files, but the main one seems to open with sqlite3 and doesn't report errors with
   sqlite3 folders.db "PRAGMA integrity_check"
Comment 5 Milan Crha 2012-10-15 08:50:51 EDT
Thanks for the update. I was away for a couple of weeks, hence the delayed response. I see the problem is with the "inbox/support" folder: Thread 5 is initializing the folder while the other threads are waiting until it's done. The problem is that Thread 1 also decided to get the folder, which froze the UI. If I read the backtrace right, it happened when you were creating a new message. I guess it should recover eventually, once it reads the content of the whole folder, which depends on its size. Could you check how many messages are in the folder, please?
Comment 6 Matt Davey 2012-11-26 09:21:12 EST
"support" is a high-volume list.  There are currently about 8,000 messages in the folder.
Comment 7 Milan Crha 2012-11-27 03:22:29 EST
Hrm, 8,000 is not that many; it should be pretty quick, especially with a local folder.
Comment 8 Matt Davey 2013-01-25 06:59:33 EST
This happened again, after a system crash.
I had to remove my folders.db file to get evolution to start.
The integrity check didn't reveal any problems.

Are there any further steps I could do to troubleshoot?  As you can imagine, it's really hurting me to lose all that metadata each time.  In a big folder I tend to leave mails unread, or marked as important, if I need to return to them.  I've now lost that information.
Comment 9 Milan Crha 2013-01-25 12:15:01 EST
I understand your disappointment, though my current problem is reproducibility. Once I can reproduce this consistently, I should be able to address the issue in the code. I guess you didn't save the broken folders.db file, did you? For example, sqlite3 can fix certain errors in the file, so if the crash broke the file you would not lose all the information, only part of it (the software cannot fix every kind of brokenness in the .db file after a crash, especially if the file was in the middle of being saved to disk).
Comment 10 Matt Davey 2013-01-25 13:49:40 EST
Thanks Milan.  I understand about the reproducibility problem.  I did in fact save the broken folders.db file.  It'll be Monday before I can send it.  I won't post it here, but I'll mail you a link to it, if that's okay.
Comment 11 Milan Crha 2013-01-28 04:56:11 EST
OK, feel free to compress it and send it to me. Please mention the bug number in the Subject, otherwise I may overlook it in my junk folder. I'll be able to look at it and check with the sqlite3 tools whether the file is corrupt, but nothing more than that, I believe.
Comment 12 Milan Crha 2013-02-07 04:55:22 EST
According to an sqlite3 integrity check [1], the file itself is not broken, which is good, even though it doesn't explain why the deadlock happened to you.

[1] $ sqlite3 folders.db.bak "PRAGMA integrity_check;"
    ok
Comment 13 Matt Davey 2013-05-08 05:02:57 EDT
Created attachment 745122 [details]
another backtrace of a lockup

Here's another example of a lockup.  This time Evolution claims to be opening two folders.  lsof shows two mh mail files open:

evolution 24830 matt   32r   REG              253,2     30665 1986965 /home/matt/Mail/work/pm/platform/1951
evolution 24830 matt   37r   REG              253,2     25111 1978960 /home/matt/Mail/work/pm/beta/2853

After doing "evolution --force-shutdown" evolution spins, saying "opening folder work/pm/beta".  lsof says it has this file open:
evolution 28821 matt   54r   REG              253,2    169465 2001331 /home/matt/Mail/work/pm/beta/2405

I can use the GUI, but can't view that folder, and it hangs on exit.

I'll attach a backtrace from the restart.

When I get these issues, my new standard operating procedure is to open the relevant folders.db and use sqlite3 to DROP all tables and indexes associated with the problem folder :(  Annoying, but *much* better than losing the lot.
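That per-folder cleanup can also be scripted. A hypothetical sketch, assuming (not confirmed anywhere in this report) that evolution-data-server names each folder's summary table after the folder's full name, e.g. 'work/pm/beta':

```python
import sqlite3

def drop_folder_tables(db_path, folder_name):
    """Drop only the tables belonging to one problem folder, keeping the
    rest of the metadata intact.  The table-naming scheme here is an
    assumption; DROP TABLE also removes the table's indexes."""
    conn = sqlite3.connect(db_path)
    dropped = []
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' AND name LIKE ?",
            (folder_name + "%",)).fetchall()
        for (name,) in rows:
            # Quote the identifier; folder names contain '/' characters.
            conn.execute('DROP TABLE "%s"' % name.replace('"', '""'))
            dropped.append(name)
        conn.commit()
        return dropped
    finally:
        conn.close()
```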
Comment 14 Matt Davey 2013-05-08 05:04:30 EDT
Created attachment 745125 [details]
another gdb backtrace.

This backtrace was taken when Evolution was spinning trying to open the work/pm/beta folder after a force-shutdown from the lockup described in my previous comment.
Comment 15 Milan Crha 2013-05-13 12:06:11 EDT
Nice, I can finally reproduce it too, when I try to copy a local folder to an mh folder. The copy thread is locked in camel_object_bag_reserve(), same as in your backtrace.
Comment 16 Milan Crha 2013-05-13 13:07:13 EDT
Created attachment 747308 [details]
eds patch

for evolution-data-server;

OK, so the MH backend tried to open a folder which it was itself currently creating, which caused a deadlock in CamelObjectBag: the thread was waiting on its own reservation. I dropped that code, because it's not needed there.

What is your current Fedora version, still Fedora 17? I can build a package for you (as you seem to be the only one using the MH provider) :), because I didn't find any similar bug report upstream. I'm committing the fix to the git master and gnome-3-8 branches upstream.
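The failure mode described here, a thread blocking in camel_object_bag_reserve() on a key it has itself already reserved, can be modeled with a toy object bag. This is an illustrative sketch only, not the camel code; it detects the self-wait and raises instead of hanging:

```python
import threading

class ObjectBag:
    """Toy model of a CamelObjectBag: reserve() blocks until a pending
    reservation for the same key is committed by another thread.  A thread
    re-reserving a key it already holds pending would wait on itself
    forever; here that case is detected rather than deadlocking."""

    def __init__(self):
        self._cond = threading.Condition()
        self._pending = {}   # key -> thread that reserved it
        self._objects = {}   # key -> committed object

    def reserve(self, key):
        with self._cond:
            me = threading.current_thread()
            while key in self._pending:
                if self._pending[key] is me:
                    raise RuntimeError(
                        "deadlock: thread waiting on its own reservation")
                self._cond.wait()
            if key in self._objects:
                return self._objects[key]
            self._pending[key] = me
            return None   # caller must create the object and commit()

    def commit(self, key, obj):
        with self._cond:
            self._objects[key] = obj
            del self._pending[key]
            self._cond.notify_all()
```

In the real bug the second reserve came from the folder-creation path itself, so the wait in g_cond_wait() could never be satisfied; removing that code path is what the attached patch does.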
Comment 17 Milan Crha 2013-05-13 13:13:16 EDT
Created commit 50fd40d in eds master (3.9.2+)
Created commit 25e799b in eds gnome-3-8 (3.8.3+)
Comment 18 Matt Davey 2013-05-15 07:34:50 EDT
Thanks Milan.

Yes, I'm still on fc17.

I'm pretty sure I wasn't creating any folder during these operations, but I guess the culprit code you found might have been involved anyway.
Comment 19 Fedora Update System 2013-05-15 13:28:22 EDT
evolution-data-server-3.4.4-5.fc17 has been submitted as an update for Fedora 17.
https://admin.fedoraproject.org/updates/evolution-data-server-3.4.4-5.fc17
Comment 20 Fedora Update System 2013-05-31 22:28:54 EDT
evolution-data-server-3.4.4-5.fc17 has been pushed to the Fedora 17 stable repository.  If problems still persist, please make note of it in this bug report.
