Bug 918761 - nova-manage causes machine to OOM
Summary: nova-manage causes machine to OOM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 2.0 (Folsom)
Hardware: Unspecified
OS: Linux
Priority: low
Severity: high
Target Milestone: async
Target Release: 2.1
Assignee: David Ripton
QA Contact: Omri Hochman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-03-06 19:58 UTC by Jon Thomas
Modified: 2019-09-09 13:21 UTC
6 users

Fixed In Version: openstack-nova-2012.2.4-4.el6ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-05-09 13:54:14 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0798 0 normal SHIPPED_LIVE openstack-nova bug fix advisory 2013-05-09 17:51:29 UTC

Description Jon Thomas 2013-03-06 19:58:22 UTC
I did a packstack install and it hung; ps showed it was nova-manage setting up floating IPs. Looking at the answers file, I saw I had made a typo:

CONFIG_NOVA_NETWORK_FLOATRANGE=192.168.2.224/2

instead of 

CONFIG_NOVA_NETWORK_FLOATRANGE=192.168.2.224/27

Anyway, running nova-manage outside of packstack consumes a large amount of memory and eventually looks like it would OOM the machine. strace shows it's in a tight loop on brk().

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                
 5214 root      20   0 3705m 3.2g 2592 D  1.3 85.0   9:34.34 nova-manage 


--debug doesn't yield much

$ nova-manage --debug floating create 192.168.4.224/2
2013-03-06 14:41:54 DEBUG nova.utils [req-c9295070-60f5-4f59-856c-bb66665c8658 None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.pyc'> __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:506
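(Editor's note, not part of the original report: the scale of the typo can be checked with Python's stdlib ipaddress module. A /27 holds 32 addresses, while a /2 holds 2**30 -- over a billion -- so materializing a dict per host is what exhausts memory.)

```python
import ipaddress

# The intended range: a /27 contains only 32 addresses.
small = ipaddress.ip_network('192.168.2.224/27')
assert small.num_addresses == 32

# The typo'd range: strict=False because 192.168.2.224 is not the
# network address of a /2. This network spans 2**30 addresses.
huge = ipaddress.ip_network('192.168.2.224/2', strict=False)
assert huge.num_addresses == 2 ** 30

# Building a dict for each of ~1 billion hosts, as nova-manage
# effectively did, explains the ~3.2g RES seen in top above.
```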

Comment 1 Jon Thomas 2013-03-06 20:00:56 UTC
# rpm -qa | grep nova
openstack-nova-common-2012.2.3-4.el6ost.noarch
openstack-nova-network-2012.2.3-4.el6ost.noarch
openstack-nova-cert-2012.2.3-4.el6ost.noarch
openstack-nova-scheduler-2012.2.3-4.el6ost.noarch
python-nova-2012.2.3-4.el6ost.noarch
openstack-nova-compute-2012.2.3-4.el6ost.noarch
openstack-nova-novncproxy-0.4-3.el6ost.noarch
openstack-nova-console-2012.2.3-4.el6ost.noarch
python-novaclient-2.10.0-4.el6ost.noarch
openstack-nova-api-2012.2.3-4.el6ost.noarch

Comment 3 Russell Bryant 2013-03-06 22:37:11 UTC
The offending code here is:

        ips = ({'address': str(address), 'pool': pool, 'interface': interface}
               for address in self.address_to_hosts(ip_range))
        try:
            db.floating_ip_bulk_create(admin_context, ips)
        except exception.FloatingIpExists as exc:
            # NOTE(simplylizz): Maybe logging would be better here
            # instead of printing, but logging isn't used here and I
            # don't know why.
            print('error: %s' % exc)
            sys.exit(1)


This can probably be fixed by just changing ips to be a generator instead of building up the entire list at once before calling the db function.
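(Editor's note: a minimal sketch of the eager-vs-lazy distinction described above. The helper names below are illustrative stand-ins, not the actual nova code: if the address helper yields lazily and the DB call consumes rows one at a time, peak memory stays flat regardless of range size.)

```python
def address_to_hosts(count):
    """Illustrative stand-in for nova's helper: yield addresses lazily."""
    for i in range(count):
        yield '10.0.0.%d' % i

def rows(count):
    # Generator expression: no row dict exists until the consumer asks
    # for it, so only one row is alive in memory at a time.
    return ({'address': a, 'pool': 'nova', 'interface': 'eth0'}
            for a in address_to_hosts(count))

# Building the full list eagerly is what blows up for a huge CIDR;
# iterating the generator produces the same rows without the spike.
eager = [r for r in rows(4)]
lazy = rows(4)
assert eager == list(lazy)
assert eager[0]['address'] == '10.0.0.0'
```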

Comment 4 David Ripton 2013-04-02 15:41:02 UTC
This is now https://bugs.launchpad.net/nova/+bug/1163394 , assigned to me upstream.

Comment 5 David Ripton 2013-04-02 16:27:46 UTC
The upstream patch-for-review is https://review.openstack.org/25918

Comment 6 David Ripton 2013-04-02 21:07:39 UTC
The fix got into Havana upstream.  commit 34de8d1529fb9a2

I still need to backport it to RHOS.

Comment 10 Omri Hochman 2013-05-01 12:05:43 UTC
Verified with openstack-nova-2012.2.4-4: 

[root@puma01 /(keystone_admin)]$ nova-manage --debug floating create 192.168.4.224/2
Command failed, please check log for more info
2013-05-01 15:03:09 CRITICAL nova [req-64e4d334-425a-4941-9815-6cfea67e5155 None None] Invalid input received: Too many IP addresses will be generated.  Please increase /2 to reduce the number generated.
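(Editor's note: the verified behaviour indicates the fix validates the range size up front rather than enumerating it. A rough sketch of such a guard follows -- the cap value and function name are assumptions for illustration, not the actual nova code.)

```python
import ipaddress

MAX_FLOATING_IPS = 65536  # hypothetical cap; the real limit may differ

def check_float_range(cidr):
    """Reject CIDRs that would generate an unreasonable number of IPs."""
    net = ipaddress.ip_network(cidr, strict=False)
    if net.num_addresses > MAX_FLOATING_IPS:
        raise ValueError('Too many IP addresses will be generated. '
                         'Please increase /%d to reduce the number generated.'
                         % net.prefixlen)
    return net

# A /2 spans 2**30 addresses and is rejected; the intended /27 passes.
```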

Comment 11 errata-xmlrpc 2013-05-09 13:54:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0798.html

