Bug 847291 - abrt gets stuck if the "worst dir" is not a "problem directory"
Summary: abrt gets stuck if the "worst dir" is not a "problem directory"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libreport
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: abrt
QA Contact: Miroslav Hradílek
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-08-10 13:08 UTC by Göran Uddeborg
Modified: 2019-08-15 03:35 UTC
CC List: 7 users

Fixed In Version: libreport-2.0.9-12.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 07:54:12 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0290 normal SHIPPED_LIVE abrt, libreport and btparser bug fix and enhancement update 2013-02-20 20:55:47 UTC

Description Göran Uddeborg 2012-08-10 13:08:26 UTC
Description of problem:
Abrtd started to fill /var/log/messages quickly.  At the same time, Python scripts we were running would sometimes hang indefinitely.

I took a closer look, and this is what I believe is happening.  When abrtd decides /var/spool/abrt is too full, it tries to remove the "worst" directory in there.  This happens in handle_inotify_cb() in abrt-2.0.8/src/daemon/abrtd.c.  It uses get_dirsize_find_largest_dir() to find the "worst" directory, and then delete_dump_dir() to remove it.  If, however, the directory found does not contain a "time" file (and is therefore not considered a "problem directory"), it will not be removed.  But since nothing was removed, abrtd still considers the spool directory too full and tries again in the while loop.  Of course, it finds the same "worst" directory again, and it keeps looping like that.

Version-Release number of selected component (if applicable):
abrt-2.0.8-6.el6.x86_64
libreport-2.0.9-5.el6.x86_64

How reproducible:
Every time.

Steps to Reproduce:
1. Create a directory containing a lot of large files in /var/spool/abrt.  Make sure none of them is called "time".
  
Actual results:
Abrtd keeps sending these kinds of messages:

Aug 10 14:10:26 kisaumi abrtd: Size of '/var/spool/abrt' >= 1000 MB, deleting 'ccpp-2012-07-04-11:34:26-27117'
Aug 10 14:10:27 kisaumi abrtd: '/var/spool/abrt/ccpp-2012-07-04-11:34:26-27117' is not a problem directory
Aug 10 14:10:27 kisaumi abrtd: Size of '/var/spool/abrt' >= 1000 MB, deleting 'ccpp-2012-07-04-11:34:26-27117'
Aug 10 14:10:27 kisaumi abrtd: '/var/spool/abrt/ccpp-2012-07-04-11:34:26-27117' is not a problem directory
Aug 10 14:10:27 kisaumi abrtd: Size of '/var/spool/abrt' >= 1000 MB, deleting 'ccpp-2012-07-04-11:34:26-27117'
Aug 10 14:10:28 kisaumi abrtd: '/var/spool/abrt/ccpp-2012-07-04-11:34:26-27117' is not a problem directory
Aug 10 14:10:28 kisaumi abrtd: Size of '/var/spool/abrt' >= 1000 MB, deleting 'ccpp-2012-07-04-11:34:26-27117'
Aug 10 14:10:28 kisaumi abrtd: '/var/spool/abrt/ccpp-2012-07-04-11:34:26-27117' is not a problem directory
Aug 10 14:10:28 kisaumi abrtd: Size of '/var/spool/abrt' >= 1000 MB, deleting 'ccpp-2012-07-04-11:34:26-27117'
Aug 10 14:10:28 kisaumi abrtd: '/var/spool/abrt/ccpp-2012-07-04-11:34:26-27117' is not a problem directory

It also becomes unresponsive.  If a Python process crashes with an uncaught exception, it hangs forever waiting on a connection to /var/run/abrt/abrt.socket.

Additional information:
It could be argued whether this is a libreport or an abrt problem.  It seems to me that get_dirsize_find_largest_dir() ought to be consistent with delete_dump_dir(), and both of these belong to libreport, so I chose that component.  But maybe the functions aren't specified in that way?

Comment 2 Denys Vlasenko 2012-08-13 12:51:18 UTC
Fixed in git:

commit 3f74a39f2bee037ee81ac2937b85f2141b3a83e5
Author: Denys Vlasenko <vda.linux@googlemail.com>
Date:   Mon Aug 13 14:50:29 2012 +0200

    abrtd: make it ignore non-problem dirs when looking for a dir to delete

Comment 4 Denys Vlasenko 2012-12-20 09:36:44 UTC
A related upstream commit:

commit 4d7cf343acae679cb849007150428f7becce2210
Author: Denys Vlasenko <vda.linux@googlemail.com>
Date:   Thu Dec 20 10:34:29 2012 +0100

    Make get_dirsize_find_largest_dir less talkative

Comment 6 Denys Vlasenko 2012-12-20 15:17:34 UTC
Correction (different git commit ID):

commit 8391c228db600525d515e94f66d7c73d47c663a6
Author: Denys Vlasenko <vda.linux@googlemail.com>
Date:   Thu Dec 20 15:03:29 2012 +0100

    Make get_dirsize_find_largest_dir less talkative

Comment 9 Jiri Moskovcak 2013-01-24 15:23:30 UTC
Cause
   A directory not created by ABRT was present in /var/spool/abrt.
Consequence
   ABRT got stuck in an infinite loop when reaching its hard disk space quota.
Fix
   Ignore unknown directories when counting free space in /var/spool/abrt.
Result
   abrtd is no longer caught in the infinite loop.

Comment 10 errata-xmlrpc 2013-02-21 07:54:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0290.html

