From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.3) Gecko/20040929

Description of problem:
Booting a machine with its primary interface set to ONBOOT=no, so that no eth0 is present, and then running evolution causes an immediate segfault. This greatly inconveniences travelling users who bring up ethernet connections manually.

Version-Release number of selected component (if applicable):
evolution-1.4.5-9; also evolution from QU3

How reproducible:
Always

Steps to Reproduce:
1. ifdown eth0
2. evolution
3. watch the crash occur

Actual Results: segfault, evolution crashes

Expected Results: evolution should run in offline mode?

Additional info:
evolution DEBUG=foo provides no traceback; foo is never written.

gdb output:

[rkearey@custard junk]$ gdb evolution
GNU gdb Red Hat Linux (6.1post-1.20040607.17rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db library "/lib/tls/libthread_db.so.1".

(gdb) run
Starting program: /usr/bin/evolution
[Thread debugging using libthread_db enabled]
[New Thread -1218571616 (LWP 25455)]

(evolution:25455): evolution-shell-WARNING **: Error setting owner on component OAFIID:GNOME_Evolution_ExchangeStorage_ShellComponent -- Old owner has died
Waiting for component to die -- OAFIID:GNOME_Evolution_ExchangeStorage_ShellComponent (1)
[New Thread -1224574032 (LWP 25460)]
[New Thread -1236272208 (LWP 25461)]
[New Thread -1247179856 (LWP 25462)]
[New Thread -1257677904 (LWP 25463)]
cal-client-Message: cal_client_uri_list(): request failed
[New Thread -1268175952 (LWP 25464)]
[Thread -1268175952 (LWP 25464) exited]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1247179856 (LWP 25462)]
0x4929c527 in socket_connect (h=0x812cda4, port=993) at camel-tcp-stream-openssl.c:491
491             memcpy (&sin.sin_addr, h->h_addr, sizeof (sin.sin_addr));
(gdb) bt
#0  0x4929c527 in socket_connect (h=0x812cda4, port=993) at camel-tcp-stream-openssl.c:491
#1  0x4929d06d in stream_connect (stream=0x0, host=0x812cda4, port=0) at camel-tcp-stream-openssl.c:773
#2  0x4929d5ef in camel_tcp_stream_connect (stream=0xb651f8e0, host=0x0, port=0) at camel-tcp-stream.c:108
#3  0xb5af4648 in connect_to_server (service=0x8197730, ssl_mode=1, try_starttls=0, ex=0xb6521870) at camel-imap-store.c:582
#4  0xb5af4b9b in connect_to_server_wrapper (service=0x8197730, ex=0xb6521870) at camel-imap-store.c:759
#5  0xb5af5924 in imap_connect_online (service=0x8197730, ex=0xb6521870) at camel-imap-store.c:1182
#6  0x49252a6f in disco_connect (service=0x8197730, ex=0xb6521870) at camel-disco-store.c:155
#7  0x4928dffd in camel_service_connect (service=0x8197730, ex=0x0) at camel-service.c:386
#8  0x492531cc in set_status (disco_store=0x8197730, status=CAMEL_DISCO_STORE_ONLINE, ex=0xb6521870) at camel-disco-store.c:308
#9  0x4925322d in camel_disco_store_set_status (store=0x8197730, status=CAMEL_DISCO_STORE_ONLINE, ex=0x0) at camel-disco-store.c:325
#10 0xb727d894 in set_offline_do (mm=0xb6521858) at mail-ops.c:2267
#11 0xb7277c6e in mail_msg_received (e=0x8150cc0, msg=0xb6521858, data=0x0) at mail-mt.c:503
#12 0x491e1827 in thread_received_msg (e=0x8150cc0, m=0x0) at e-msgport.c:617
#13 0x491e1957 in thread_dispatch (din=0x8150cc0) at e-msgport.c:698
#14 0x419d3dec in start_thread () from /lib/tls/libpthread.so.0
#15 0x4182419a in clone () from /lib/tls/libc.so.6
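The faulting line copies h->h_addr (an alias for h->h_addr_list[0]) without checking it, so a hostent that came back with no addresses when the interface is down would make the memcpy dereference NULL. A minimal sketch of a defensive fix is below; fill_sockaddr is a hypothetical helper standing in for the copy inside socket_connect(), and EHOSTUNREACH is my guess at a sensible errno, not the actual upstream patch:

```c
#include <errno.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>

/* Hypothetical guarded version of the address copy at
 * camel-tcp-stream-openssl.c:491.  Returns -1 instead of
 * letting memcpy dereference a NULL address list. */
static int
fill_sockaddr (struct sockaddr_in *sin, struct hostent *h, int port)
{
	if (h == NULL || h->h_addr_list == NULL || h->h_addr_list[0] == NULL) {
		errno = EHOSTUNREACH;  /* fail gracefully, don't segfault */
		return -1;
	}
	memset (sin, 0, sizeof (*sin));
	sin->sin_family = AF_INET;
	sin->sin_port = htons (port);
	memcpy (&sin->sin_addr, h->h_addr_list[0], sizeof (sin->sin_addr));
	return 0;
}
```

With a check like this, connect_to_server() would get an ordinary connection error it can report (or use to drop into offline mode) rather than crashing the whole shell.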
(FWIW I couldn't reproduce the bug with evolution 2.0)
I can reliably reproduce this with 1.4.5, which is what all of our sales and admin staff here use. evolution --offline seems to work around the problem, but that is not how evo is launched by GNOME.
Starting evo with no network and --offline, then selecting 'Work Online' from the File menu, also crashes evo.
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.