Bug 1958349 - ovn-controller doesn't release memory after a cluster-density run
Summary: ovn-controller doesn't release memory after a cluster-density run
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.0
Assignee: Tim Rozet
QA Contact: zhaozhanqi
URL:
Whiteboard: perfscale-ovn
Depends On: 1967882 1988565 2024768
Blocks: 1996751
 
Reported: 2021-05-07 17:07 UTC by Mohit Sheth
Modified: 2022-03-10 16:04 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1967882 1996751 (view as bug list)
Environment:
Last Closed: 2022-03-10 16:03:38 UTC
Target Upstream Version:
Embargoed:




Links:
System ID: Red Hat Product Errata RHSA-2022:0056
  Private: 0 | Priority: None | Status: None | Summary: None
  Last Updated: 2022-03-10 16:04:05 UTC

Description Mohit Sheth 2021-05-07 17:07:39 UTC
Description of problem:
The perfscale team ran cluster-density with 2000 projects at 100-node scale. ovn-controller memory grew to 10 GB and stayed at 10 GB even after the test resources were cleaned up.

Version-Release number of selected component (if applicable):
Cluster version is 4.8.0-0.nightly-2021-04-30-201824
sh-4.4# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.15.0


How reproducible:
Always

Steps to Reproduce:
1. Run kube-burner cluster-density with 2k projects at 100 node scale
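For reference, a minimal sketch of how a run like this can be driven with kube-burner. The config file name, iteration variable, and cleanup label below are assumptions and will vary with the kube-burner / e2e-benchmarking version used, so treat this as an illustration rather than the exact commands behind this report.

# Hypothetical invocation; file and label names are assumptions.
export JOB_ITERATIONS=2000                     # one iteration per cluster-density project
kube-burner init -c cluster-density.yml --uuid "$(uuidgen)"

# Clean up the generated namespaces, then watch whether ovn-controller RSS
# on the nodes drops back towards its pre-test level.
oc delete ns -l kube-burner-job=cluster-density --wait=true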

Actual results:
ovn-controller memory is not freed/released back to the OS

Expected results:
Memory should be released/freed back to the OS once the test resources are cleaned up.

Additional info:
Must-gathers requested (VPN required) http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/msheth/may-7-must-gathers-ovn-controller-mem/

Comment 1 Dumitru Ceara 2021-05-11 08:58:58 UTC
This might be related to malloc fastbins not being consolidated and not
honoring M_TRIM_THRESHOLD (default 128KB) [0] [1].

One option would be to disable fastbins for ovn-controller, although
that needs to be scale-tested properly as it might affect performance.
This can be done at runtime, without changing the OVN code, by setting
GLIBC_TUNABLES before ovn-controller is started, e.g.:

GLIBC_TUNABLES=glibc.malloc.mxfast=0
export GLIBC_TUNABLES

[0] https://bugzilla.redhat.com/show_bug.cgi?id=921676
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=14827
[2] https://sourceware.org/bugzilla/show_bug.cgi?id=14827#c7
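
For illustration only, one way to experiment with this on an OpenShift cluster would be to inject the variable into the container that runs ovn-controller. The namespace, DaemonSet, and container names below (openshift-ovn-kubernetes / ovnkube-node / ovn-controller) are assumptions based on the default OVN-Kubernetes layout, and this is a test-only tweak, not a supported configuration change:

# Hypothetical sketch; object names are assumptions and may differ per release.
oc -n openshift-ovn-kubernetes set env daemonset/ovnkube-node \
    --containers=ovn-controller \
    GLIBC_TUNABLES=glibc.malloc.mxfast=0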

Comment 3 Jose Castillo Lema 2021-07-26 08:55:06 UTC
We are observing the same behaviour in a 120-node baremetal cluster.
The cluster-density test was run on Friday and cleaned up the same day; the cluster has been idle since then.
Today (Monday), openshift-ovn-kubernetes memory usage is 730 GiB against a request of 78.2 GiB,
and CPU usage is 2.19 cores out of 5 requested.

  Env:
   - OCP 4.8.0-fc.9
   - local gateway
   - ovn2.13-20.12.0-25.el8fdp.x86_64
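
For context, per-container readings like the ones quoted above can be spot-checked from the CLI. This is only an example; the numbers in this comment most likely came from the console or Prometheus:

# Example only: per-container CPU/memory for the OVN pods via the metrics API.
oc adm top pods -n openshift-ovn-kubernetes --containers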

Comment 4 Murali Krishnasamy 2021-08-18 15:00:53 UTC
Observed this on OCP 4.7.11 as well, after running some node-density workloads with 2000 services.
Memory and CPU usage of the OVN pods kept increasing, and new pod creation failed with a "CNI request timeout" error.

Comment 6 Tim Rozet 2022-01-07 14:34:33 UTC
Fix is present in 21.12.0-24.el8fdp

Comment 9 Kedar Kulkarni 2022-02-03 21:13:32 UTC
On build 4.10.0-0.nightly-2022-02-02-131023 I ran cluster-density tests for 500 iterations on 100 worker nodes and monitored ovn-controller memory usage. Usage spiked for a while and then dropped back to normal levels, which makes this bz resolved.
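
For reference, a quick way to spot-check ovn-controller resident memory across nodes after such a run. The pod label and container name are assumptions based on the default OVN-Kubernetes layout, and this assumes ps is available in the container; the verification above was done with the regular monitoring stack rather than this command.

# Example only: print ovn-controller RSS (in KB) from each ovnkube-node pod.
for pod in $(oc -n openshift-ovn-kubernetes get pods -l app=ovnkube-node -o name); do
    rss=$(oc -n openshift-ovn-kubernetes exec "$pod" -c ovn-controller -- ps -o rss= -C ovn-controller)
    echo "$pod: ${rss} KB"
done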

Comment 12 errata-xmlrpc 2022-03-10 16:03:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

