I am only using the Oscar protocol, with the following plugins enabled:
Auto-Reconnect, History, Message Notification, System Tray Icon
GAIM appears to start up and function normally, but I am seeing the
following kernel error hit syslog/dmesg on the order of two or more
times per hour on my NUMA system, and less frequently on the SMP and UP
non-NUMA x86_64 boxes:
gaim trap divide error rip:3efd442a70 rsp:7fbfff7e60 error:0
Each trap is followed by a core file in the home directory, but the app
keeps functioning, so a user would never notice unless they saw the
cores, the logs, or the filesystem filling up. An example core file is
available. Don't hesitate to ping me for any other information needed
via this bug, email, or on freenode as Fedora64.
<warren> arjan, "2 or more times per hour for my numa system, less
frequently on the SMP and UP non numa x86_64"
<arjan> division by zero
<warren> why does it dump core but not crash the program?
<arjan> addr2line it
<arjan> maybe the app catches the segv
<warren> arjan, any reason it would happen more often on "numa"?
<arjan> warren: race in threads show up more in numa
<arjan> warren: bigger memory timings -> bigger race window
Justin: Is it possible for you to get a backtrace from the core file?
The instructions at http://gaim.sourceforge.net/gdb.php might be
helpful. Also, the output from "file gaim.core" might be helpful.
I don't think Gaim is catching segv, but I suppose it's possible the
DNS lookup threads are crashing.
gaim.core: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV),
SVR4-style, SVR4-style, SVR4-style, SVR4-style, SVR4-style
Of possible interest: I turned off the history plugin (I usually never
turn it on, and am not sure why it was enabled) and I am getting the
cores less frequently, only one every few hours now, though that might
just be because it was 3 AM to 8 AM. Also, this is happening with no
AIM conversations going on.
#0 0x0000003efd442a70 in snd_pcm_sw_params () from
#1 0x0000002a98225efa in ao_plugin_device_clear ()
#2 0x0000002a98225936 in ao_plugin_open () from
#3 0x0000003ef8e02142 in ?? () from /usr/lib64/libao.so.2
#4 0x000000000049eaec in gaim_gtk_sound_play_file (filename=0x812840
#5 0x000000000044ec8a in gaim_sound_play_file (
filename=0xa0b190 "/usr/share/sounds/gaim/leave.wav") at sound.c:67
#6 0x000000000049ecb2 in gaim_gtk_sound_play_event
#7 0x000000000044ecc8 in gaim_sound_play_event
#8 0x000000000044b7ec in serv_got_update (gc=0x959ab0, name=0xa1d560
loggedin=0, evil=0, signon=0, idle=0, type=0) at server.c:1261
#9 0x0000002a9a9a44ef in gaim_parse_offgoing (sess=0x0, fr=0x2b11) at
#10 0x0000002a9a98c1f1 in buddychange (sess=0x96e670, mod=0x2b11,
snac=0x7fbfff9320, bs=0x2b11) at buddylist.c:253
#11 0x0000002a9a9989f3 in consumesnac (sess=0x96e670, rx=0xc269c0) at
#12 0x0000002a9a9990a4 in aim_rxdispatch (sess=0x96e670) at
#13 0x0000002a9a9a0fcb in oscar_callback (data=0x0, source=11025,
condition=GAIM_INPUT_READ) at oscar.c:767
#14 0x000000000047ef2c in gaim_gtk_io_invoke (source=0x0,
#15 0x0000003ef90456db in g_vasprintf () from /usr/lib64/libglib-2.0.so.0
#16 0x0000003ef902495a in g_main_depth () from /usr/lib64/libglib-2.0.so.0
#17 0x0000003ef9025974 in g_main_context_dispatch () from
#18 0x0000003ef9025c5e in g_main_context_dispatch () from
#19 0x0000003ef902620d in g_main_loop_run () from
#20 0x0000003efb808451 in gtk_main () from /usr/lib64/libgtk-x11-2.0.so.0
#21 0x00000000004a43fc in main (argc=1, argv=0x7fbffff858) at main.c:911
Doh, I don't know why it only just hit me: this might be related to bug
119611, which is actually an alsa issue. Going to install the patch
recommended there and see if it fixes things.
Also of note: I had the sound settings on the defaults on the NUMA box,
so it was trying to play sounds much more often, as opposed to the UP
box, where I only play a sound when a received message begins a
conversation.
The backtrace is certainly in the libao code; I'd be interested to see
whether it still happens, or whether it changes, if you disable sounds
entirely.
After turning off sounds for the past 24 hours, no more issues with
gaim. This is definitely an alsa-lib issue only. Going to close this
one as a duplicate of bug 119611.
*** This bug has been marked as a duplicate of 119611 ***