Description of problem: After deploying OCP 3.10.14, OVS Pods are being killed by the OOM killer, especially on first start, right after deploying the cluster.
We are hitting this issue, which is marked as resolved but clearly is not.
The limits of the DaemonSet that runs the OVS Pods are very small; they can be found here:
These limits are clearly too low when running OCP on big bare-metal nodes (a rough sketch of the stanza is shown below).
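For reference, the resources stanza in that DaemonSet looks roughly like the following (values are illustrative, from memory; verify with: oc -n openshift-sdn get ds ovs -o yaml):

  resources:
    requests:
      cpu: 100m
      memory: 300Mi
    limits:
      cpu: 200m
      memory: 400Mi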
Version-Release number of the following components:
rpm -q openshift-ansible
rpm -q ansible
Steps to Reproduce:
Actual results: Even with the fix in place, the issue is still seen.
Expected results: The issue should not be seen.
Three different options:
1. Set bigger limits, e.g. 1000m and 1Gi.
2. Make the installer intelligent enough to calculate/tune these values based on the node size.
3. Allow setting them explicitly in the inventory (see the sketch after this list).
* e.g. openshift_node_ovs_cpu_limit: 1000m, openshift_node_ovs_memory_limit: 1Gi
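A sketch of what option 3 could look like in the inventory (note: these variables are a proposal in this report and do not exist in openshift-ansible yet):

  [OSEv3:vars]
  # Proposed variables, not yet implemented in openshift-ansible:
  openshift_node_ovs_cpu_limit=1000m
  openshift_node_ovs_memory_limit=1Gi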
Related OVS bug:
> rpm -q openshift-ansible
> rpm -q ansible
> ansible --version
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/jorget/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
> rpm -qa | grep openshift
> oc version
features: Basic-Auth GSSAPI Kerberos SPNEGO
We had a similar issue in https://bugzilla.redhat.com/show_bug.cgi?id=1571379 -- we fixed a bug where ovs-vswitchd was using 8 MiB per core.
This was merged here - https://github.com/openshift/openshift-ansible/pull/8166/commits/6d9ad9d1ac4c95ea38a8b1aa7d94ac698724c755
How many cores and how much ram does the node have?
Ideally we can actually clamp the memory usage, but if we're not able to, we can add an override to openshift-ansible and give some guidelines.
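A quick way to see how much memory ovs-vswitchd actually uses, and with how many threads, from inside one of the OVS Pods (pod name is an example taken from this report):

  $ oc -n openshift-sdn rsh ovs-drtgw
  sh-4.2# grep -E 'VmPeak|VmRSS|Threads' /proc/`pidof ovs-vswitchd`/status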
(In reply to https://bugzilla.redhat.com/show_bug.cgi?id=1620556#c2) Here you are:
> free -h
              total        used        free      shared  buff/cache   available
Mem:           377G        8.9G        360G         27M        7.8G        367G
Swap:            0B          0B          0B
> lscpu
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model name: Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz
CPU MHz: 2600.000
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 22528K
NUMA node0 CPU(s): 0-7,32-39
NUMA node1 CPU(s): 8-15,40-47
NUMA node2 CPU(s): 16-23,48-55
NUMA node3 CPU(s): 24-31,56-63
Could I get:
rpm -qv openvswitch
Do the OVS pods get killed immediately, or does the OOM take some time? If you are able, could you grab:
/proc/`pidof ovs-vswitchd`/maps
(In reply to Dan Williams from comment #4)
> Could I get:
> rpm -qv openvswitch
As I already said, we are using OCP 3.10.14:
$ oc -n openshift-sdn rsh ovs-drtgw
$ rpm -qv openvswitch
Created attachment 1478888 [details]
ovs pid maps (cat /proc/`pidof ovs-vswitchd`/maps)
(In reply to Dan Williams from comment #5)
> Do the OVS pods get killed immediately, or does the OOM take some time? If
> you are able, could you grab:
I would say it takes some time. On a cluster with 8 nodes (all exactly the same HW, big bare metal), after the installer finished OK, some of the OVS Pods started fine, while others were killed by the OOMKiller repeatedly.
I first tried manually deleting, in order, the OVS and then the SDN pod on each of the impacted nodes, with no luck. After that I increased both the memory and CPU limits to 1 CPU and 1Gi (see the sketch below). I then manually deleted all OVS and SDN Pods, and they all started fine.
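A sketch of that workaround, assuming the DaemonSet is named ovs and the OVS/SDN pods carry app=ovs and app=sdn labels (verify with: oc -n openshift-sdn get ds,pods --show-labels):

  $ oc -n openshift-sdn set resources daemonset ovs --limits=cpu=1,memory=1Gi
  $ oc -n openshift-sdn delete pods -l app=ovs
  $ oc -n openshift-sdn delete pods -l app=sdn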
> /proc/`pidof ovs-vswitchd`/maps
I just attached the logs you requested, gathered inside one of the OVS Pods. Don't know if it's related, but keep in mind this log is from an OVS Pod running with the 1 CPU and 1Gi limits.
Could I get /proc/<pidof ovs-vswitchd>/smaps from the system?
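For reference, it can be gathered the same way as the maps attachment above, from inside one of the OVS Pods, then copied off the pod (e.g. with oc cp):

  $ oc -n openshift-sdn rsh ovs-drtgw
  sh-4.2# cat /proc/`pidof ovs-vswitchd`/smaps > /tmp/ovs-smaps.txt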
Ping on this issue: smaps from ovs-vswitchd would help debug it further.
Created attachment 1485488 [details]
ovs pid smaps (cat /proc/<pidof ovs-vswitchd>/smaps)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.