Bug 657622 - [abrt] gnome-panel-2.30.2-5.el6: Process /usr/libexec/clock-applet was killed by signal 11 (SIGSEGV)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libsoup
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Assignee: Dan Winship
QA Contact: Desktop QE
URL:
Whiteboard:
Duplicates: 704349
Depends On:
Blocks: 662543 747123 782183 727267 756082 840699
 
Reported: 2010-11-26 20:40 UTC by Yogesh
Modified: 2018-12-01 19:02 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 08:24:26 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0313 normal SHIPPED_LIVE libsoup bug fix update 2013-02-20 20:35:08 UTC
Red Hat Knowledge Base (Legacy) 43220 None None None Never
GNOME Bugzilla 618641 None None None 2012-07-12 17:36:55 UTC

Description Yogesh 2010-11-26 20:40:08 UTC
Description of problem:
SEGFAULT in gnome clock-applet

Version-Release number of selected component (if applicable):
libsoup-2.28.2-1.el6.x86_64

How reproducible:
random

Actual results:
clock-applet gets SEGFAULT

Expected results:
clock-applet should run with no problem


Additional info:
Similar to a frequently reported bug on Fedora 12 and Fedora 13:
https://bugzilla.redhat.com/show_bug.cgi?id=590834

GDB trace for the program:
(gdb) bt full
#0  do_idle_run_queue (session=0x2679680) at soup-session-async.c:408
        priv = <value optimized out>
#1  0x00007fa252c5d50c in socket_connect_result (sock=0x26bdf10, status=200, user_data=0x2191b00) at soup-connection.c:389
        data = 0x2191b00
        priv = 0x2695940
#2  0x00007fa252c779ab in idle_connect_result (user_data=0x205a040) at soup-socket.c:594
        sacd = 0x205a040
        priv = <value optimized out>
        status = <value optimized out>
#3  0x00007fa252c77b69 in connect_watch (iochannel=<value optimized out>, condition=<value optimized out>, data=0x205a040) at soup-socket.c:621
        sacd = 0x205a040
        priv = 0x26bdf30
        error = 0
        len = 4
#4  0x00007fa25473bf0e in g_main_dispatch (context=0x1fef8c0) at gmain.c:1960
        dispatch = 0x7fa2547712f0 <g_io_unix_dispatch>
        was_in_call = 0
        user_data = 0x205a040
        callback = 0x7fa252c77ad0 <connect_watch>
        cb_funcs = 0x7fa2549e7970
        cb_data = 0x27ebd30
        current_source_link = {data = 0x2130e00, next = 0x0}
        need_destroy = <value optimized out>
        source = 0x2130e00
        current = 0x2043610
        i = <value optimized out>
#5  IA__g_main_context_dispatch (context=0x1fef8c0) at gmain.c:2513
No locals.
#6  0x00007fa25473f938 in g_main_context_iterate (context=0x1fef8c0, block=1, dispatch=1, self=<value optimized out>) at gmain.c:2591
        max_priority = 2147483647
        timeout = 419
        some_ready = 1
        nfds = 15
        allocated_nfds = <value optimized out>
        fds = <value optimized out>
        __PRETTY_FUNCTION__ = "g_main_context_iterate"
#7  0x00007fa25473fd55 in IA__g_main_loop_run (loop=0x20412f0) at gmain.c:2799
        self = 0x1fcc130
        __PRETTY_FUNCTION__ = "IA__g_main_loop_run"
#8  0x00007fa255360106 in bonobo_main () at bonobo-main.c:311
        loop = 0x20412f0
#9  0x00007fa25535e4e2 in bonobo_generic_factory_main_timeout (act_iid=<value optimized out>, factory_cb=<value optimized out>, user_data=<value optimized out>, 
    quit_timeout=<value optimized out>) at bonobo-generic-factory.c:411
        context = 0x20306a0
        signal = 11
        factory = 0x2030760
#10 0x00007fa2579dc801 in panel_applet_factory_main_closure (iid=<value optimized out>, applet_type=33820144, closure=0x2040f80) at panel-applet.c:1774
        retval = <value optimized out>
        display_iid = 0x2040fd0 ":0.0,OAFIID:GNOME_ClockApplet_Factory"
        data = 0x2040fb0
        __PRETTY_FUNCTION__ = "panel_applet_factory_main_closure"
#11 0x00000000004119fd in main (argc=1, argv=0x7ffff109a798) at clock.c:3761
        context = 0x1fba0d0
        error = 0x0
        retval = <value optimized out>




SOURCE CODE
static void
do_idle_run_queue (SoupSession *session)
{
        SoupSessionAsyncPrivate *priv = SOUP_SESSION_ASYNC_GET_PRIVATE (session);

        if (!priv->idle_run_queue_source) {
                priv->idle_run_queue_source = soup_add_completion (
                        soup_session_get_async_context (session),
                        idle_run_queue, session);
        }
}




NULL pointer dereference. The value of %rax remains 0x0 after g_type_instance_get_private is called for session (similar to bugzilla attachment https://bugzilla.redhat.com/attachment.cgi?id=412935 ).
We need to check whether the variable 'priv' (a pointer to the SoupSessionAsyncPrivate structure) is NULL before accessing its members. This does not appear to be fixed upstream.
=> rax            0x0	0
rbx            0x12e9260	19829344
rcx            0x7f260617b940	139801287702848
rdx            0x0	0
rsi            0x0	0
rdi            0x2	2
rbp            0x1276840	0x1276840
rsp            0x7fff53fd7860	0x7fff53fd7860
r8             0x1301690	19928720
r9             0x1	1
r10            0x1	1
r11            0x0	0
r12            0xc8	200
r13            0x129fcd0	19528912
r14            0x7f2602d98d30	139801233296688
r15            0xcff100	13627648
rip            0x7f260100d731	0x7f260100d731 <do_idle_run_queue+33>
eflags         0x10206	[ PF IF RF ]
Dump of assembler code for function do_idle_run_queue:
   0x00007f260100d710 <+0>:	mov    %rbx,-0x10(%rsp)
   0x00007f260100d715 <+5>:	mov    %rbp,-0x8(%rsp)
   0x00007f260100d71a <+10>:	sub    $0x18,%rsp
   0x00007f260100d71e <+14>:	mov    %rdi,%rbp
   0x00007f260100d721 <+17>:	callq  0x7f2600fe76b8 <soup_session_async_get_type@plt>
   0x00007f260100d726 <+22>:	mov    %rbp,%rdi
   0x00007f260100d729 <+25>:	mov    %rax,%rsi
   0x00007f260100d72c <+28>:	callq  0x7f2600fe8de8 <g_type_instance_get_private@plt>
=> 0x00007f260100d731 <+33>:	cmpq   $0x0,(%rax)

Comment 1 Yogesh 2010-11-26 20:42:22 UTC
Created attachment 463139 [details]
core dump

Comment 2 Yogesh 2010-11-26 20:43:13 UTC
Created attachment 463140 [details]
sosreport

Comment 4 Suzanne Yeghiayan 2011-01-05 19:47:17 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated 
in the current release, Red Hat is unfortunately unable to 
address this request at this time.  This request has been 
proposed for the next release of Red Hat Enterprise Linux.
If you would like it considered as an exception in the 
current release, please ask your support representative.

Comment 6 RHEL Product and Program Management 2011-07-06 01:26:13 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.

Comment 9 Ray Strode [halfline] 2011-12-19 23:30:55 UTC
I spent a little time today trying to figure out what's going on here.

This doesn't look like a problem in the clock code but a problem in libsoup.

I believe the trace in comment 0 is missing a frame between frames 0 and 1 because of compiler optimizations.  I haven't yet tried to reproduce it, but from code inspection that frame is probably something like:

#0.5 got_connection (conn=0x26bdf10, status=200, session=0x2679680):299

which is the last line in the function and it calls do_idle_run_queue.

If that guess is right, then the only way we could be here is from this code in run_queue:
...
     if (soup_connection_get_state (conn) == SOUP_CONNECTION_NEW) {   
         soup_connection_connect_async (conn, got_connection, session);
...

connect_async doesn't call got_connection until a GSource called watch_src, set up by the soup_socket_connect_async function, is ready.  That source may be an I/O channel watch waiting for the socket in use to become ready, or an idle source if the socket is already ready.

If priv is NULL when do_idle_run_queue is called, as indicated in this bug report, then session probably points at freed memory.  One possible way session could be freed is via the weather_info_free call in libgweather, which unrefs the session object.

Before doing this it calls soup_session_abort, though, which calls soup_session_cancel_message on whatever pending message led to soup_connection_connect_async being called. cancel_message makes sure some input sources are removed, but the watch_src source mentioned earlier isn't one of them.  The async_cancel function seems to do the right thing, but it's only called by cancellable objects, and soup_connection_connect_async passes a NULL cancellable object to soup_socket_connect_async.

Reassigning to libsoup.

Comment 10 Dan Winship 2012-06-27 17:51:42 UTC
Fixed upstream in libsoup 2.32; the patch (http://git.gnome.org/browse/libsoup/commit/?id=a87e5833) applies more-or-less cleanly and doesn't break any of the existing libsoup tests; the additional test added by that patch can't be added to RHEL 6's libsoup because it depends on features that were added later.

Comment 11 RHEL Product and Program Management 2012-07-10 07:56:16 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 12 RHEL Product and Program Management 2012-07-10 23:50:30 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 15 Ray Strode [halfline] 2012-08-09 17:37:58 UTC
*** Bug 704349 has been marked as a duplicate of this bug. ***

Comment 20 errata-xmlrpc 2013-02-21 08:24:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0313.html

