Bug 1164548

Summary: -fsanitize=address locks up abrt-hook-ccpp
Product: Fedora
Component: abrt
Assignee: abrt <abrt-devel-list>
Reporter: Jan Kratochvil <jan.kratochvil>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: abrt-devel-list, dvlasenk, iprikryl, jan.kratochvil, jberan, lankyleggy, mhabrnal, mmilata, rvokal
Status: CLOSED EOL
Version: 25
Hardware: x86_64
OS: Linux
Severity: unspecified
Priority: unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2017-12-12 10:13:36 UTC

Description Jan Kratochvil 2014-11-16 09:43:15 UTC
Description of problem:
I tried to enable ABRT again after some years, but I have to disable it because it locks up any program under development that crashes.

Version-Release number of selected component (if applicable):
abrt-addon-ccpp-2.2.2-1.fc20.x86_64

How reproducible:
Always.

Steps to Reproduce:
echo -e '#include<stdlib.h>\nmain(){abort();}'|gcc -fsanitize=address -x c -;./a.out 

Actual results:
  PID USER     PR  NI    VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
 6601 root     20   0  108428 8096 6516 R  99.1  0.1  13:50.46 abrt-hook-ccpp
 6600 jkratoch 20   0 16.875t 4976 1856 R  55.5  0.1   7:26.25 a.out
strace of abrt-hook-ccpp (PID 6601) shows it reading the huge core from STDIN in 4 KB blocks:
lseek(3, 4096, SEEK_CUR)                = 761593884672
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
lseek(3, 4096, SEEK_CUR)                = 761593888768
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096
[...]
The estimated total run time of abrt-hook-ccpp is 5 hours.
After killing PID 6600:
Nov 16 10:39:03 host2 abrt-server: Executable '/home/jkratoch/.../a.out' doesn't belong to any package and ProcessUnpackaged is set to 'no'

Expected results:
Immediate exit of abrt-hook-ccpp due to ProcessUnpackaged, without needlessly reading in the core file.
Additionally, even in cases where it does try to process the core file, it could first check that a 16 TB core file is really not worth reading in.
Additionally, the Linux kernel core dumper could skip all the zeroed, unallocated pages using some special core-dumping protocol.

Additional info:

Comment 1 Jakub Filak 2014-11-18 07:46:30 UTC
Thank you for taking the time to file this bug report!

> Expected results:
> Immediate exit of abrt-hook-ccpp due to ProcessUnpackaged without needless
> reading in of the core file.

ABRT checks whether a binary belongs to a package by querying the rpm database, and that query takes too long to be performed in the core dumper before the core is written, because some users care about the recovery time of the crashed binary [1]. ABRT assumes that writing a core file is many times faster than querying the rpm database. Perhaps times have changed and ABRT should query the rpm database before writing the core, or this could become a configuration option.
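
For reference, the packaged-or-not question can be asked from outside ABRT as well; a minimal sketch, assuming only that the rpm CLI is installed (this is not ABRT's actual implementation):

/* Hypothetical sketch (not ABRT code): ask rpm whether a file is owned
 * by any installed package - the same question abrt-server answers via
 * the rpm database. */
#include <stdio.h>
#include <stdlib.h>

static int belongs_to_package(const char *path)
{
    char cmd[4096];
    snprintf(cmd, sizeof cmd, "rpm -qf --quiet '%s' 2>/dev/null", path);
    /* rpm -qf exits 0 iff the file is owned by an installed package */
    return system(cmd) == 0;
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 2;
    printf("%s: %s\n", argv[1],
           belongs_to_package(argv[1]) ? "packaged" : "unpackaged");
    return 0;
}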

> Additionally even in the cases it would try to deal with the core file it
> can check first that 16TB core file is really not worth reading in.

Is there any way ABRT can check that? The kernel writes the core to the core dumper through STDIN, and I'm not aware of any technique for getting the size of the incoming data other than reading all of it.

> Additionally Linux kernel core dumper could skip all the zeroed unallocated
> pages using some special core dumping protocol.

ABRT tries to deal with sparse core files on its own [2], but unfortunately it must read the entire STDIN. Could you please point me to some documentation about this?


1: https://github.com/abrt/abrt/issues/872
2: https://github.com/abrt/abrt/blob/master/src/hooks/abrt-hook-ccpp.c#L52
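
For illustration, the sparse-write technique referenced in [2] boils down to the read()/lseek() pattern visible in the strace output above. A minimal sketch, not the actual abrt-hook-ccpp code:

/* All-zero blocks read from STDIN are not written; the output offset is
 * advanced instead, leaving a filesystem hole. Error handling omitted. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

enum { BLOCK = 4096 };

int main(void)
{
    static const char zeros[BLOCK];  /* zero-initialized reference block */
    char buf[BLOCK];
    ssize_t n;
    int out = open("coredump", O_WRONLY | O_CREAT | O_TRUNC, 0600);

    while ((n = read(0, buf, BLOCK)) > 0) {
        if (n == BLOCK && memcmp(buf, zeros, BLOCK) == 0)
            lseek(out, BLOCK, SEEK_CUR);   /* skip: leave a hole */
        else
            write(out, buf, n);
    }
    /* extend the file over any trailing hole */
    ftruncate(out, lseek(out, 0, SEEK_CUR));
    close(out);
    return 0;
}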

Comment 2 Jan Kratochvil 2014-11-18 08:25:31 UTC
(In reply to Jakub Filak from comment #1)
> Perhaps the time has changed and ABRT should query the rpm database before
> writing a core, or it could be a configuration option.

It could also read, say, the first 16 MB of the core file and only then start checking the rpm database.


> > Additionally even in the cases it would try to deal with the core file it
> > can check first that 16TB core file is really not worth reading in.
> 
> Is there any way how can ABRT check that? Kernel writes the core to the
> coredumper through STDIN and I'm not aware of any technique for getting size
> of incoming data different than reading them all.

ABRT could use /proc/PID/{,s}maps or some of the /proc/PID/stat* files to decide to abort before reading in the whole core file.
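
A minimal sketch of that idea (hypothetical, not ABRT code): sum the sizes of the mappings listed in /proc/PID/maps to get an upper bound on the core size before touching STDIN:

#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[512];
    unsigned long long start, end, total = 0;

    /* each maps line starts with "start-end", both in hex */
    snprintf(path, sizeof path, "/proc/%s/maps", argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f)
        return 1;
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "%llx-%llx", &start, &end) == 2)
            total += end - start;
    fclose(f);
    printf("mapped address space: %llu bytes\n", total);
    return 0;
}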

Additionally, it could parse the ELF program headers of the core file to see how large the core file will be - the headers are at the beginning of the core file.
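
A sketch of the program-header approach (again hypothetical; 64-bit cores only, and it assumes e_phentsize == sizeof(Elf64_Phdr), which holds for kernel-produced cores):

/* Read just the ELF header and program header table from the head of
 * the incoming core on STDIN and compute how large the complete core
 * file would be, so a 16 TB core can be rejected up front. */
#include <elf.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, stdin) != 1 ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0)
        return 1;

    /* STDIN is a pipe, so consume bytes instead of seeking to e_phoff
     * (in kernel cores the table starts right after the ELF header). */
    for (uint64_t skip = ehdr.e_phoff - sizeof ehdr; skip > 0; skip--)
        if (getchar() == EOF)
            return 1;

    uint64_t total = 0;
    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        if (fread(&phdr, sizeof phdr, 1, stdin) != 1)
            return 1;
        /* the file ends where the last segment's data ends */
        if (phdr.p_offset + phdr.p_filesz > total)
            total = phdr.p_offset + phdr.p_filesz;
    }
    printf("expected core file size: %llu bytes\n",
           (unsigned long long)total);
    return 0;
}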


> > Additionally Linux kernel core dumper could skip all the zeroed unallocated
> > pages using some special core dumping protocol.
> 
> ABRT tries to deal with sparse core files on its own [2], but unfortunately
> ABRT must read entire STDIN. Could you please point me to some documentation
> about this?

This is not currently implemented in the Linux kernel; it was just an idea. The kernel knows which ranges are not allocated, and it already writes on-disk core files sparse in that case. It could pass these unallocated ranges to the core_pattern handler in some way.

This way ABRT could even properly handle the -fsanitize=address core files. But I do not think this item really needs to be implemented - that is of course up to you.

Comment 3 Fedora End Of Life 2015-05-29 13:18:45 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 20 reaches end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 4 Jan Kurik 2015-07-15 14:36:34 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 23 development cycle.
Changing version to '23'.

(As we did not run this process for some time, it could also affect pre-Fedora 23 development
cycle bugs. We are very sorry. It will help us with cleanup during the Fedora 23 End Of Life. Thank you.)

More information and the reason for this action are available here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora23

Comment 5 Fedora End Of Life 2016-11-24 11:17:23 UTC
This message is a reminder that Fedora 23 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 23. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '23'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 23 reaches end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 6 Fedora End Of Life 2017-11-16 18:41:32 UTC
This message is a reminder that Fedora 25 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 25. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '25'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 25 reaches end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 7 Fedora End Of Life 2017-12-12 10:13:36 UTC
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora, please feel free to reopen it against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.