Bug 1472462

Summary: [abrt] ibus: _sighandler(): ibus-x11 killed by signal 11
Product: Fedora
Reporter: Joachim Frieben <jfrieben>
Component: ibus
Assignee: fujiwara <tfujiwar>
Status: CLOSED WORKSFORME
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 26
CC: eggert, i18n-bugs, jfrieben, psatpute, shawn.p.huang, smaitra, tfujiwar
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Unspecified
URL: https://retrace.fedoraproject.org/faf/reports/bthash/9a02535ba1f89127cfde1daa26016b4865056c13
Whiteboard: abrt_hash:1fb6e3f772723e2af75818a640182dc162e84ce9;VARIANT_ID=workstation;
Last Closed: 2018-03-22 06:21:18 UTC
Attachments (all flags "none"):
File: backtrace
File: cgroup
File: core_backtrace
File: cpuinfo
File: dso_list
File: environ
File: exploitable
File: limits
File: maps
File: open_fds
File: proc_pid_status
File: var_log_messages

Description Joachim Frieben 2017-07-18 19:43:25 UTC
Version-Release number of selected component:
ibus-1.5.16-3.fc26

Additional info:
reporter:       libreport-2.9.1
backtrace_rating: 4
cmdline:        /usr/libexec/ibus-x11 --kill-daemon
crash_function: _sighandler
executable:     /usr/libexec/ibus-x11
journald_cursor: s=8e5363300b49437eb544c83cfc96b147;i=3dc12;b=316fea74f0934d70846642138ecfca2f;m=12eb301ac;t=5549860593af9;x=75d79439fd06b727
kernel:         4.11.10-300.fc26.x86_64
rootdir:        /
runlevel:       N 5
type:           CCpp
uid:            1000

Comment 1 Joachim Frieben 2017-07-18 19:43:33 UTC
Created attachment 1300661 [details]
File: backtrace

Comment 2 Joachim Frieben 2017-07-18 19:43:34 UTC
Created attachment 1300662 [details]
File: cgroup

Comment 3 Joachim Frieben 2017-07-18 19:43:37 UTC
Created attachment 1300663 [details]
File: core_backtrace

Comment 4 Joachim Frieben 2017-07-18 19:43:38 UTC
Created attachment 1300664 [details]
File: cpuinfo

Comment 5 Joachim Frieben 2017-07-18 19:43:40 UTC
Created attachment 1300665 [details]
File: dso_list

Comment 6 Joachim Frieben 2017-07-18 19:43:41 UTC
Created attachment 1300666 [details]
File: environ

Comment 7 Joachim Frieben 2017-07-18 19:43:42 UTC
Created attachment 1300667 [details]
File: exploitable

Comment 8 Joachim Frieben 2017-07-18 19:43:44 UTC
Created attachment 1300668 [details]
File: limits

Comment 9 Joachim Frieben 2017-07-18 19:43:47 UTC
Created attachment 1300669 [details]
File: maps

Comment 10 Joachim Frieben 2017-07-18 19:43:48 UTC
Created attachment 1300670 [details]
File: open_fds

Comment 11 Joachim Frieben 2017-07-18 19:43:49 UTC
Created attachment 1300671 [details]
File: proc_pid_status

Comment 12 Joachim Frieben 2017-07-18 19:43:51 UTC
Created attachment 1300672 [details]
File: var_log_messages

Comment 13 Paul Eggert 2017-07-19 17:32:54 UTC
*** Bug 1472977 has been marked as a duplicate of this bug. ***

Comment 14 fujiwara 2017-07-20 02:32:51 UTC
I cannot reproduce this crash.
Are you still able to reproduce it?

Your ibus-x11 crashed while calling exit(EXIT_FAILURE).
Searching for this kind of failure in __run_exit_handlers() suggests that a buffer overflow may have occurred, so the backtrace itself does not help to resolve the problem; I need to know the exact steps that reproduce it.
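
For context, exit() is not async-signal-safe, so calling it from a signal handler (frame #2, _sighandler, in the backtrace below) runs the atexit handlers in signal context, where they can race with whatever the interrupted thread was doing. A minimal C sketch of the pattern and the conventional safe alternative; the names are illustrative, not the actual ibus-x11 code (a GLib program would typically use g_unix_signal_add() instead):

/* Illustrative sketch, not ibus-x11 source. */
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void sighandler(int sig)
{
    (void)sig;
    /* Unsafe: exit(EXIT_FAILURE) here would run the atexit handlers
     * (__run_exit_handlers) inside the signal handler, as in frame #2. */
    got_signal = 1;              /* safe: just record the signal */
    /* Alternative: _exit(EXIT_FAILURE) is async-signal-safe and skips
     * the atexit machinery entirely. */
}

int main(void)
{
    signal(SIGTERM, sighandler);
    while (!got_signal)
        pause();                 /* stand-in for the real main loop */
    return EXIT_FAILURE;
}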


(In reply to Joachim Frieben from comment #1)
> Created attachment 1300661 [details]
> File: backtrace

#0  __run_exit_handlers (status=1, listp=0x7f0c1494d5b8 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:55
#1  0x00007f0c145bcc8a in __GI_exit (status=<optimized out>) at exit.c:105
#2  0x000055c6c23cad2e in _sighandler (sig=<optimized out>) at main.c:1108
#3  <signal handler called>
#4  0x00007f0c14686a9d in poll () at ../sysdeps/unix/syscall-template.S:84
#5  0x00007f0c14bbc569 in g_main_context_poll (priority=<optimized out>, n_fds=1, fds=0x7f0bf80010c0, timeout=<optimized out>, context=0x55c6c37e1110) at gmain.c:4271
#6  g_main_context_iterate (context=0x55c6c37e1110, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3967
#7  0x00007f0c14bbc902 in g_main_loop_run (loop=0x55c6c37e2ab0) at gmain.c:4168
#8  0x00007f0c151a1cb6 in gdbus_shared_thread_func (user_data=0x55c6c37e10e0) at gdbusprivate.c:252
#9  0x00007f0c14be3536 in g_thread_proxy (data=0x55c6c372d000) at gthread.c:784
#10 0x00007f0c1495a36d in start_thread (arg=0x7f0bfe139700)
#11 0x00007f0c14692b8f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
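
For frames #0 and #1: exit() invokes glibc's __run_exit_handlers(), which walks the heap-allocated __exit_funcs list of callbacks registered with atexit(). A minimal sketch of that machinery under ordinary, correct usage (again illustrative, not ibus-x11 code); if a buffer overflow elsewhere in the process corrupts the heap backing this list, the walk dereferences garbage and the SIGSEGV surfaces here, far from the real bug:

/* Illustrative sketch of the machinery in frames #0-#1. */
#include <stdio.h>
#include <stdlib.h>

static void cleanup(void)
{
    puts("runs inside __run_exit_handlers (frame #0)");
}

int main(void)
{
    if (atexit(cleanup) != 0)    /* appends cleanup to __exit_funcs */
        return EXIT_FAILURE;
    exit(EXIT_FAILURE);          /* frame #1: __GI_exit -> __run_exit_handlers */
}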

Comment 15 Joachim Frieben 2017-07-20 06:14:50 UTC
(In reply to fujiwara from comment #14)
I posted the backtrace after noticing the corresponding alert in the "Problem Reporting" utility. I have no memory of the precise circumstances of this crash. Nevertheless, the issue is current and real.

Comment 16 fujiwara 2017-07-20 06:17:00 UTC
So are you still able to reproduce the crash?

Comment 17 Joachim Frieben 2017-07-20 07:37:23 UTC
(In reply to fujiwara from comment #16)
I do not know the precise circumstances that triggered this crash, so I am unable to reproduce it deliberately.

Comment 18 fujiwara 2017-07-20 08:24:33 UTC
How many times per day can you reproduce this crash?
I mean, if you can reproduce the problem frequently, I will ask you to run some debug tests to try to find the root cause. But if you cannot, I have no way to investigate the problem, and it would also suggest this is not an important issue.

Comment 19 Joachim Frieben 2017-07-20 08:41:24 UTC
(In reply to fujiwara from comment #18)
This crash occurred only two days ago, and I have not seen it again yet. I would therefore say: "wait and see". If it was a random crash, this bug report will be closed anyway when Fedora 26 reaches end of life, thanks.

Comment 20 Pravin Satpute 2018-03-22 06:21:18 UTC
We are working on the F26 bug triaging activity, mainly to make sure that important bugs do not get closed automatically when they might still be reproducible in the next release.

From the comments above, it looks like we are not able to reproduce this issue. Closing it for now; if you still feel this is an issue, feel free to reopen.

Thanks.