Bug 1451401 - [scale lab] OpenDaylight Karaf process consumes too much RSS memory
Summary: [scale lab] OpenDaylight Karaf process consumes too much RSS memory
Keywords:
Status: CLOSED DUPLICATE of bug 1512073
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: opendaylight
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 12.0 (Pike)
Assignee: Sridhar Gaddam
QA Contact: Itzik Brown
URL:
Whiteboard: scale_lab
Depends On: 1479264 1493558
Blocks: 1439320
 
Reported: 2017-05-16 14:53 UTC by Sai Sindhur Malleni
Modified: 2018-10-24 12:37 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
N/A
Last Closed: 2018-02-19 12:39:00 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Sai Sindhur Malleni 2017-05-16 14:53:14 UTC
Description of problem:
When running control plane tests at scale, such as creating 500 routers, we see the Karaf process consuming as much as 22G of RSS memory. This is a serious problem because:
1. We set the heap size to 2G, which is nowhere near the memory consumption we are seeing at scale (a monitoring sketch comparing RSS against the configured -Xmx follows below).
2. The controller node on which ODL is running goes OOM, as garbage collection does not seem to reclaim memory as intended even when given a large heap size.
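For reference, a minimal monitoring sketch (not part of the original report) that periodically compares the Karaf process RSS against the configured -Xmx, to make the native/off-heap growth visible. The process-matching pattern, /proc layout, and polling interval are assumptions for a typical Linux ODL deployment:

```python
#!/usr/bin/env python3
# Hedged sketch: compare Karaf RSS (from /proc) with the JVM's configured -Xmx.
import re
import subprocess
import time


def karaf_pid():
    # Assumes a single Karaf JVM whose command line contains the Karaf main class.
    out = subprocess.check_output(["pgrep", "-f", "org.apache.karaf.main.Main"])
    return int(out.split()[0])


def rss_kib(pid):
    # VmRSS is the resident set size this bug report is about.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0


def xmx_kib(pid):
    # Parse -Xmx from the JVM command line (e.g. -Xmx2g or -Xmx24576m).
    with open(f"/proc/{pid}/cmdline") as f:
        cmdline = f.read()
    m = re.search(r"-Xmx(\d+)([kKmMgG]?)", cmdline)
    if not m:
        return None
    value, unit = int(m.group(1)), m.group(2).lower()
    factor = {"": 1 / 1024, "k": 1, "m": 1024, "g": 1024 * 1024}[unit]
    return int(value * factor)


if __name__ == "__main__":
    pid = karaf_pid()
    heap = xmx_kib(pid)
    heap_gib = (heap / 1024 / 1024) if heap else float("nan")
    while True:
        rss_gib = rss_kib(pid) / 1024 / 1024
        print(f"RSS={rss_gib:.1f} GiB  configured -Xmx={heap_gib:.1f} GiB")
        time.sleep(60)
```

When RSS keeps climbing far beyond -Xmx, the growth is outside the Java heap (native buffers, metaspace, threads), which is consistent with the behaviour described above.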

Version-Release number of selected component (if applicable):
10

How reproducible:
100% at scale

Steps to Reproduce:
1. Run Rally at scale to create 2 networks with subnets and attach them to a router, repeated 500 times (an illustrative openstacksdk sketch follows after these steps).
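For illustration only, a minimal openstacksdk sketch of the topology described in step 1. The actual runs used Rally; the cloud name, resource names, and CIDRs below are assumptions:

```python
# Hedged reproduction sketch: 500 routers, each with 2 networks/subnets attached.
import openstack

conn = openstack.connect(cloud="overcloud")  # assumed clouds.yaml entry

for i in range(500):
    router = conn.network.create_router(name=f"scale-router-{i}")
    for j in range(2):
        net = conn.network.create_network(name=f"scale-net-{i}-{j}")
        subnet = conn.network.create_subnet(
            network_id=net.id,
            ip_version=4,
            cidr=f"10.{i % 250}.{j}.0/24",
            name=f"scale-subnet-{i}-{j}",
        )
        # Each router interface causes ODL to program additional state.
        conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```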

Actual results:
RSS memory balloons to 22G when ODL is given a heap size of 24G. If given a low heap size, the controller running ODL goes OOM.

Expected results:


Additional info:
Related BZ https://bugzilla.redhat.com/show_bug.cgi?id=1439320

Comment 1 lpeer 2017-08-06 07:25:11 UTC
Sridhar, would you be able to update this bug after the next scale test in August 2017?

Comment 7 Sai Sindhur Malleni 2017-12-14 15:53:44 UTC
Other bugs have since been fixed that address this "umbrella" bug for the most part. This bug was initially opened against OSP10 + Boron SR2, and several fixes made since then have led to lower memory usage in OSP12 + Carbon SR2.

Can we close this bug Michael?

Comment 8 Michael Vorburger 2017-12-18 14:03:12 UTC
> Can we close this bug Michael?

Yeah, this seems to be the first and oldest of a series of similar later bugs around this topic, including e.g. Bug 1512073 (AFAIK there were even others between this one and that one), and we seem to have forgotten about this one.

Some background is available e.g. at http://blog2.vorburger.ch/2017/09/how-to-find-transaction-related-memory.html and https://www.opendaylight.org/blog/2017/10/24/how-performance-testing-improved-the-nitrogen-release, with full technical details in ODL's upstream JIRA bugs.

TL;DR: we have indeed made significant progress around OOM in ODL over the last few months, and this bug can and should now be closed IMHO.

Comment 9 Mike Kolesnik 2018-02-19 12:39:00 UTC
Closing as a duplicate of bug 1512073 per Michael's suggestion.

*** This bug has been marked as a duplicate of bug 1512073 ***

