Bug 782002

Summary: Segmentation fault when using irssi for some time (server timeout/reconnect?)
Product: Fedora
Reporter: Robert Scheck <redhat-bugzilla>
Component: irssi
Assignee: Jaroslav Škarvada <jskarvad>
Status: CLOSED EOL
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 19
CC: huzaifas, jskarvad, mmahut
Target Milestone: ---
Flags: jskarvad: needinfo?
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-02-18 13:40:22 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments:
  Backtrace of the irssi core dump (flags: none)

Description Robert Scheck 2012-01-16 10:19:09 UTC
Description of problem:
After some time of using irssi, I get a segmentation fault. The crash could be
related to a timeout on the Freenode IRC network: I connect via IPv6, and I
remember seeing a lag of 200. The backtrace seems to contain something related,
but I'm not a developer, though.

Version-Release number of selected component (if applicable):
irssi-0.8.15-2

How reproducible:
Happens about every 7-14 days, but I cannot reproduce it on demand.

Actual results:
Segmentation fault when using irssi for some time (server timeout/reconnect?)

Expected results:
No segmentation fault.

Additional info:
Backtrace is attached. Please let me know if you need further information.

Comment 1 Robert Scheck 2012-01-16 10:20:05 UTC
Created attachment 555474 [details]
Backtrace of the irssi core dump

Comment 2 Jaroslav Škarvada 2012-01-16 15:02:20 UTC
The provided backtrace does not seem usable here; the crash is very probably the result of memory corruption that happened somewhere earlier.

Could you try running irssi under valgrind and provide the log? It will run much more slowly, but it will catch the memory corruption as it happens:

$ valgrind --track-origins=yes --log-file=log irssi

Comment 3 Robert Scheck 2012-01-16 15:08:12 UTC
Do I understand correctly that you are asking me to run irssi under valgrind
for 14 days? I can't really reproduce the issue; it just happens from time to
time...

Comment 4 Robert Scheck 2012-01-16 15:09:04 UTC
As an aside, from #irssi on IRCnet:

[13:46:52] <@Bazerka> at first glance, that doesn't seem to be an irssi bug
[13:48:21] <@Bazerka> the last place in irssi's code within that backtrace is a call to allocate memory for a struct
[13:48:32] <@Bazerka> and from there into glib and then glibc
[13:50:00] <@Bazerka> frame #5 is : server = g_new0(IRC_SERVER_REC, 1); 
[13:50:11] <@Bazerka> g_new0 being a glibc allocation function
[13:50:16] <@Bazerka> er, glib
[14:20:54] < rsc_> Bazerka: hm okay. So glib?
[14:27:36] <@Bazerka> either glib or (g)?libc
[14:27:59] <@Bazerka> g?libc even
[14:29:29] <@Bazerka> basically, g_new0 shouldn't die at all - it should either return a pointer on successful allocation or NULL if not
[14:31:25] < rsc_> Bazerka: okay

Comment 5 Jaroslav Škarvada 2012-01-16 15:38:30 UTC
(In reply to comment #4)
I think the problem is not in glib or glibc but somewhere in the irssi code. There is probably an invalid write somewhere that overwrites internal malloc or glib structures, causing this crash.

A valgrind log would be helpful here, at least for the reconnection sequence.

Comment 6 Jaroslav Škarvada 2012-01-16 16:04:41 UTC
I created an experimental Electric Fence (efence) test build that performs checks similar to valgrind's:
http://koji.fedoraproject.org/koji/taskinfo?taskID=3706173

Could you try it? It may run slightly slower, but it will crash on the first out-of-bounds access. The backtrace should then reveal something more useful.

Comment 7 Fedora End Of Life 2013-04-03 19:26:35 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it could also affect pre-Fedora 19
development-cycle bugs. We are very sorry. This will help us with cleanup during the Fedora 19 End Of Life. Thank you.)

More information and reason for this action is here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19

Comment 8 Fedora End Of Life 2015-01-09 21:55:28 UTC
This message is a notice that Fedora 19 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 19. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not 
able to fix it before Fedora 19 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 9 Fedora End Of Life 2015-02-18 13:40:22 UTC
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.