Description of problem:
Creating 18 application pods per worker in a 120-node environment results in 2135 logical router policies of priority 501, out of 2613 policies total:

[kni@e16-h18-b03-fc640 kubeburner]$ oc exec -n openshift-ovn-kubernetes -it $POD -- ovn-nbctl lr-policy-list ovn_cluster_router | wc -l
Defaulting container name to northd.
Use 'oc describe pod/ovnkube-master-82fjr -n openshift-ovn-kubernetes' to see all of the containers in this pod.
2613

[kni@e16-h18-b03-fc640 kubeburner]$ oc exec -n openshift-ovn-kubernetes -it $POD -- ovn-nbctl lr-policy-list ovn_cluster_router | grep 501 | wc -l
Defaulting container name to northd.
Use 'oc describe pod/ovnkube-master-82fjr -n openshift-ovn-kubernetes' to see all of the containers in this pod.
2135

Version-Release number of selected component (if applicable):
- OCP 4.7.11
- local gateway

How reproducible:
100%

Steps to Reproduce:
1. Create several serving (lb) pods with the proper ICNI2 annotations (40 in our scenario)
2. Create 18 app pods per worker
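As a side note, the `grep 501` in the transcript above also matches lines where 501 happens to appear in the match expression or nexthop, not just in the priority column. A minimal sketch of a stricter count, filtering on the first field of `ovn-nbctl lr-policy-list` output (the priority); the sample output here is illustrative, not captured from the cluster:

```shell
# Illustrative lr-policy-list output (priority, match, action, nexthop);
# in practice this would come from:
#   oc exec -n openshift-ovn-kubernetes -it $POD -- \
#     ovn-nbctl lr-policy-list ovn_cluster_router
sample_output='Routing Policies
       501                  ip4.src == 10.244.0.5         reroute    10.128.2.2
       501                  ip4.src == 10.244.0.6         reroute    10.128.2.2
      1004 inport == "rtos-node1" && ip4.dst == 10.0.0.1  reroute    10.244.0.2'

# Count only rows whose priority column is exactly 501.
count=$(printf '%s\n' "$sample_output" | awk '$1 == 501 { n++ } END { print n + 0 }')
echo "$count"   # prints 2 for the sample above
```

The same `awk '$1 == 501'` filter can replace the `grep 501 | wc -l` pipeline in the reproducer.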
Surya, you had a patch to address this, right?
Yes, let me rebase that PR now and we can get it in.
Moving this to POST because the fix is merged upstream; the downstream fix is https://github.com/openshift/ovn-kubernetes/pull/796.
The downstream merge is in. Moving this to MODIFIED.
Hey Yurii, yes, that looks good. Thanks for verifying this. Moving the BZ to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056