While trying to create a report for a yumex crash (https://bugzilla.redhat.com/show_bug.cgi?id=548827), the gdb process ate most of my system's 2 GB of RAM, and the system was swapping to death. I thought our kernel was supposed to handle OOM more gracefully these days, but after watching my system struggle on for 15 minutes, I had to put it out of its misery. A crash-collection tool should not do that to my system. I don't care how long it takes to put the report together, but I do care about my system remaining usable while it does so.
I experienced the same thing with the same yumex crash. I ran top while trying to resubmit the report through abrt and found that Python was consuming a steady 90-95% of the CPU while its memory use climbed to about 74%. This lasted for some 15 minutes or so. I have 1 GB of RAM and 2 GB of swap on my system. I noticed that even after the CPU had been released, clicking anywhere in the abrt window, such as to type some info or scroll, would tie up the CPU again for several minutes.
An example of a very large backtrace for this bug can be seen in attachment 379287 [details] to bug 548849 - almost 200,000 frames. (There are various dupes of that bug; many backtraces there are only partial; perhaps bugzilla refused the upload or timed out.)
*** Bug 548058 has been marked as a duplicate of this bug. ***
Can you run something like:

  $ ps -A -o pid,comm,args,rss,vsz | sort -n -k4 | tail -20

so that we can see which processes are especially big? The initial description says it was gdb that ate all the memory; comment #2 says it was python... do we have two bugs here? Note: abrtd uses python only for the GUI, so if you see a huge python process but the abrt GUI was never started, then it shouldn't be abrt. But in any case, ps output is better than speculation. If it is gdb: abrt 1.0.4 will have a (largish) limit on the number of frames in the backtrace. We reproduced a near-OOM condition when gdb tries to produce a backtrace of a SEGVed application that had infinite recursion.
The bug that was triggering this was in fact an infinite recursion causing a stack overflow. So limiting the number of frames in the backtrace seems like the right fix here.
The backtrace depth has been limited to 3000 frames in git. (I think it could be lowered further; other devels disagree.) Closing; reopen if abrt >= 1.0.5 still does this.
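For comparison, gdb itself exposes a similar knob via its `set backtrace limit` setting; this illustrative session is not abrt's implementation, just the same idea applied interactively:

```
(gdb) set backtrace limit 3000   # stop unwinding after 3000 frames
(gdb) bt                         # backtrace output is now capped
```

Capping the unwind depth bounds both the memory gdb spends holding frame data and the size of the report that gets attached to the bug.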