Bug 247859 - "yum update" segmentation fault after "Cannot allocate memory"
Summary: "yum update" segmentation fault after "Cannot allocate memory"
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: yum
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Jeremy Katz
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-07-11 18:47 UTC by Kai Engert (:kaie) (inactive account)
Modified: 2014-01-21 22:58 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-07-11 20:56:37 UTC
Type: ---
Embargoed:



Description Kai Engert (:kaie) (inactive account) 2007-07-11 18:47:30 UTC
Description of problem:
This is a rawhide x86_64 xen guest, 1 GB memory, running on a RHEL 5 based host OS.
I ran "yum update" after not having done so for a while (not that prior to the
global yum update I ran "yum update selinux-policy-targeted" in order to verify
bug 247206 (see that bug)).

Eventually the update aborted with the following error messages:

...
  Cleanup   : e2fsprogs-libs               ################### [ 968/1105]
  Cleanup   : evolution                    ################### [ 969/1105]
  Cleanup   : ORBit2                       ################### [ 970/1105]
error: Couldn't fork %postun: Cannot allocate memory
  Cleanup   : file-libs                    ################### [ 971/1105]
error: Couldn't fork %postun: Cannot allocate memory
  Cleanup   : pkgconfig                    ################### [ 972/1105]
  Cleanup   : libgsf                       ################### [ 973/1105]
error: Couldn't fork %postun: Cannot allocate memory
  Cleanup   : pygtk2                       ################### [ 974/1105]
  Cleanup   : nautilus-extensions          ################### [ 975/1105]
  Cleanup   : freetype-devel               ################### [ 976/1105]
Segmentation fault
[root@kaiexenrawhide ~]#
[root@kaiexenrawhide ~]#
[root@kaiexenrawhide ~]#
[root@kaiexenrawhide ~]#
[root@kaiexenrawhide ~]# uptime
 20:40:03 up  1:54,  1 user,  load average: 3.90, 3.85, 3.64
[root@kaiexenrawhide ~]# cat /proc/meminfo
MemTotal:      1033252 kB
MemFree:        257320 kB
Buffers:        145960 kB
Cached:         186640 kB
SwapCached:     217936 kB
Active:         558480 kB
Inactive:       110360 kB
SwapTotal:     1048568 kB
SwapFree:       709448 kB
Dirty:             604 kB
Writeback:           0 kB
AnonPages:      334164 kB
Mapped:          13196 kB
Slab:            54660 kB
SReclaimable:    29000 kB
SUnreclaim:      25660 kB
PageTables:       4332 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1565192 kB
Committed_AS:   526732 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      3020 kB
VmallocChunk: 34359733307 kB

Comment 1 James Antill 2007-07-11 20:56:37 UTC
 I'd bet it's really Python that just died due to ENOMEM, although why that
happened isn't obvious. Is the machine doing something else taking a lot of RAM?
Even assuming yum was in the high multi-100MB range, there doesn't seem to be a
lot of memory free.

 Assuming nothing obviously weird is going on, the only real solution to this I
can see is to have yum do a lot of small transactions instead of one big one,
which might eventually be done upstream.
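 For illustration, a rough sketch of what "a lot of small transactions" could
look like from the command line; the batch size of 50 and the awk filter are
arbitrary assumptions, not anything yum provides itself:

  # Hypothetical workaround sketch: apply pending updates in small batches
  # rather than one large transaction. Package names come from
  # "yum check-update"; the batch size (50) is arbitrary.
  yum -q check-update | awk 'NF >= 3 {print $1}' | xargs -r -n 50 yum -y update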
 There are also a bunch of people working on getting the memory usage down in
general ... but if you are updating 500+ packages you might well still have
problems.


Comment 2 Seth Vidal 2007-07-12 00:06:27 UTC
Can you replicate this? b/c I'd love to see when it explodes in size.

Comment 3 Kai Engert (:kaie) (inactive account) 2007-07-12 00:10:52 UTC
(In reply to comment #2)
> Can you replicate this? b/c I'd love to see when it explodes in size.

I don't have that old snapshot any longer, but if the theory is correct and it's
indeed caused by updating too many packages in one step...

then all you need to do is install an old snapshot of rawhide and run yum update?

I was using a 1 GB xen guest; only allocate 512 MB and you should be able to see
the bug more easily.
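With xm, for example, shrinking the guest's allocation could be as simple as
the following (the domain name here is just a placeholder):

  # Reduce the running guest's memory to make the failure easier to reproduce.
  xm mem-set rawhide-guest 512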


(In reply to comment #1)
>  is the machine doing something else taking a lot of RAM

This xen guest is a plain installation; nothing was running on the system
besides the default daemons. I was logged in via ssh only, with no X session
running.

Comment 4 Seth Vidal 2007-07-12 02:28:05 UTC
My problem is that I can't replicate it. I've done large updates on 512M xen
instances and they've worked pretty well, no -ENOMEM at least. That's why I'm
curious whether something in particular was blowing up RAM.


Comment 5 James Antill 2007-07-12 13:55:21 UTC
 Well, I'd assumed something else was running and taking up RAM, because right
after yum died:

MemTotal:      1033252 kB
MemFree:        257320 kB
Buffers:        145960 kB
Cached:         186640 kB

...AIUI this means something is taking ~450MB. Also:

SwapCached:     217936 kB
SwapTotal:     1048568 kB
SwapFree:       709448 kB

...implies something (probably the same thing) is _still_ taking ~100MB of swap,
and that swap usage had been up to ~300MB (implying that yum was taking ~450MB).
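 (For reference, the arithmetic behind the ~450MB figure: MemTotal - MemFree -
Buffers - Cached = 1033252 - 257320 - 145960 - 186640 = 443332 kB, roughly
430MB. A one-liner along the following lines computes the same estimate; note
that treating "total - free - buffers - cached" as application memory is only an
approximation, since it ignores slab, page tables, and so on.)

  # Rough estimate of memory in use by applications, from /proc/meminfo.
  # Approximation only: slab, page tables, etc. are not accounted for.
  awk '/^(MemTotal|MemFree|Buffers|Cached):/ {m[$1]=$2}
       END {printf "%.0f MB\n", (m["MemTotal:"]-m["MemFree:"]-m["Buffers:"]-m["Cached:"])/1024}' /proc/meminfo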


Comment 6 Kai Engert (:kaie) (inactive account) 2007-07-12 19:46:56 UTC
If there was a lot of memory being used, something must have allocated it during
the system update. The following output was captured after rebooting.

Sorry, I can't find out what had been consuming that additional memory. I should
have been smart enough to make a dump of processes and their consumption after
the out-of-memory errors.
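Next time, a quick snapshot along these lines (the output file name is just a
placeholder) would capture that information at the moment of failure:

  # Record the top memory consumers by resident set size, plus /proc/meminfo.
  ps aux --sort=-rss | head -n 15 > /root/mem-snapshot.txt
  cat /proc/meminfo >> /root/mem-snapshot.txt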


[root@kaiexenrawhide ~]# cat /proc/meminfo
MemTotal:      1033252 kB
MemFree:        774544 kB
Buffers:         11004 kB
Cached:         119472 kB
SwapCached:          0 kB
Active:          83988 kB
Inactive:       101820 kB
SwapTotal:     1048568 kB
SwapFree:      1048568 kB
Dirty:              20 kB
Writeback:           0 kB
AnonPages:       55340 kB
Mapped:          11640 kB
Slab:            20876 kB
SReclaimable:     8164 kB
SUnreclaim:      12712 kB
PageTables:       3976 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1565192 kB
Committed_AS:   140404 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      3008 kB
VmallocChunk: 34359734343 kB


