Bug 453392 - virt-manager.py is taking all the memory!
Status: CLOSED NEXTRELEASE
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: virt-manager
Version: 5.2
Hardware: i386 Linux
Priority: low  Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Cole Robinson
QA Contact: Virtualization Bugs
Depends On:
Blocks:
Reported: 2008-06-30 06:52 EDT by jean-sebastien Hubert
Modified: 2009-12-14 16:18 EST
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2009-01-16 09:41:20 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Add support for bonding and vlan devices (also plugs memory leak) (10.71 KB, application/octet-stream)
2008-09-02 10:08 EDT, Cole Robinson

Description jean-sebastien Hubert 2008-06-30 06:52:17 EDT
Description of problem:
Run virt-manager.py for a day or a couple of hours (connected to a host):
it takes more and more memory and may overload the system.

Version-Release number of selected component (if applicable):
virt-manager-0.5.3-8.el5

How reproducible:
Always

Steps to Reproduce:
1. Launch virt-manager
2. Connect it to a host
3. Let it run for a couple of hours
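The steps above can be sketched as a small monitoring loop that samples the process's resident set size over time. This is a sketch only, assuming GNU procps; rss_kb is a hypothetical helper, not part of virt-manager:

```shell
# Print a process's resident set size (RSS) in kilobytes.
# rss_kb is a hypothetical helper name, not part of virt-manager.
rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

# Example usage: sample virt-manager's RSS every 10 minutes and watch it grow.
# pid=$(pgrep -f virt-manager.py | head -n 1)
# while sleep 600; do rss_kb "$pid"; done
```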

  
Actual results:
[root@xenhost1 ~]# date
lun jun 30 14:22:19 RET 2008
[root@xenhost1 ~]# ps -auxw --sort=rss | grep virt-manager
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
root     10770  0.0  0.1   3952   732 pts/2    S+   14:22   0:00 grep virt-manager
root      3493  1.1 77.0 668692 557352 ?       Ss   Jun29  18:36 python
/usr/share/virt-manager/virt-manager.py
It may crash the host server.
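Incidentally, the procps warning in the output above comes from mixing BSD-style options (no dash, as in `aux`) with the UNIX-style leading dash (`-auxw`). Dropping the dash silences it; a sketch, assuming GNU procps:

```shell
# BSD-style options take no leading dash; "-auxw" mixes option styles and
# triggers the "bad syntax, perhaps a bogus '-'" warning.
# List the five largest processes by RSS, warning-free:
ps aux --sort=-rss | head -n 5
```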

Expected results:
virt-manager should not leak memory.

Additional info:
Comment 1 Cole Robinson 2008-07-09 14:37:26 EDT
Hi, this is a previously reported issue but we are tracking progress in a
private bug. I think we have a working patch though, so I'll keep you informed.
Comment 2 Binbin Wang 2008-08-04 04:59:21 EDT
One of our customers in China also has the same problem!

The output of the ps command:
root     16256  5.8 75.4 6768480 6042652 ?     Ss   Jul23 1017:25 python /usr/share/virt-manager/virt-manager.py

Is there any patch or workaround?
Comment 3 Jonathan Kamens 2008-08-06 22:07:32 EDT
Is this the bug which causes me to see ridiculous %CPU values from procps on Fedora rawhide (procps-3.2.7-20.fc9.i386), or is that a different bug?

For example:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 3168 root      20   0 96852  20m 7748 S 424.4  1.1 363:21.90 Xorg              
 3769 jik       20   0 25976 8888 7532 S 284.5  0.4  37:36.81 multiload-apple   
 3830 jik       27   7 74204  30m  12m S 141.9  1.6  60:44.16 beagled           
 2804 haldaemo  20   0  7128 4600 3932 S 41.2  0.2   5:24.25 hald               
 4839 jik       20   0  2560 1092  828 R  1.3  0.1   0:00.10 top
Comment 4 Jonathan Kamens 2008-08-06 22:07:58 EDT
Oh, damn, never mind, I put that comment in the wrong ticket.
Comment 5 Daniel Senie 2008-09-02 09:48:08 EDT
We see this as well when we leave Virtual Machine Manager open on a machine for an extended time. The machine runs out of memory, and problems start cropping up. Interestingly, it's another virtualization component that then complains via email (due to a failing cron job). Only the subject line and contents of the email are included here:

Subject: Cron <root@briar04> python /usr/share/rhn/virtualization/poller.py

Traceback (most recent call last):
  File "/usr/share/rhn/virtualization/poller.py", line 213, in ?
    debug = options and options.debug)
  File "/usr/share/rhn/virtualization/poller_state_cache.py", line 50, in __init__
    self._load_state()
  File "/usr/share/rhn/virtualization/poller_state_cache.py", line 123, in _load_state
    except PickleError, pe:
NameError: global name 'PickleError' is not defined



I've had to reboot servers that have gotten into this state; I haven't figured out which services to restart to avoid the reboot.
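For reference, the NameError in the traceback above arises because poller_state_cache.py catches PickleError without that name being defined in its scope. A minimal sketch of the fixed pattern, in modern Python syntax (the traceback shows Python 2's `except PickleError, pe:` form); load_cached_state is a hypothetical name, not the actual RHN code:

```python
import pickle


def load_cached_state(path):
    """Load pickled poller state; return None if the file is corrupt.

    Sketch only: qualifying the exception as pickle.PickleError (or
    importing it explicitly) avoids the NameError seen in the traceback.
    """
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (pickle.PickleError, EOFError):
        return None
```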
Comment 6 Cole Robinson 2008-09-02 10:08:29 EDT
Created attachment 315545 [details]
Add support for bonding and vlan devices (also plugs memory leak)

The attached patch fixes the memory leak. It is being tracked by a private bug about adding support for bonding and vlan devices for bridges, and it also happens to fix the leak :)

This will be in 5.3, but here is the patch in the interim.
Comment 7 Cole Robinson 2008-09-16 19:51:33 EDT
FYI, this fix has been committed and built. I'm just going to move this bug to ASSIGNED and leave it open until 5.3 is out, at which point I'll close it. For anyone with the proper access, the private bug we are using to track this is 443604.
Comment 8 Cole Robinson 2008-09-16 19:52:18 EDT
Ah sorry, the actual bug is 443680.
Comment 9 Cole Robinson 2009-01-16 09:41:20 EST
Okay, fix is built and pending release for 5.3, so I am closing this bug.
Comment 10 Dave Oksner 2009-07-30 19:58:20 EDT
So, am I missing something, or was this left out of RHEL 5.3?  I have rhn-virtualization-host-1.0.1-55 installed.  It appears that this is the latest version, and that it is from January 2008, before this bug was opened.

And, we're seeing the exact same messages as Daniel Senie reported in comment #5, or I wouldn't have come looking for an answer. :-)
Comment 11 Chris Lalancette 2009-07-31 08:32:52 EDT
So, the patch in the private BZ was committed to RHEL-5.3, so this issue should be fixed.  What version of virt-manager do you have installed, exactly?

Cole, do you have anything else to add here?

Chris Lalancette
Comment 12 Cole Robinson 2009-07-31 09:46:00 EDT
The traceback in comment #5 looks like an RHN bug, so Dave (Comment #10) should file a bug with them. I can pretty much guarantee that the original bug (memory leak) was fixed in RHEL5.3.
Comment 13 Dave Oksner 2009-08-04 12:22:08 EDT
Okay, thanks.  I'll try to track down what went wrong and where.
