Description of problem: At a scale of 300 nodes in a steady-state cluster, ovn-controller runs with poll intervals anywhere between 20-30 seconds. According to Dumitru, the cause is that a change to any SB datapath triggers a recompute of the ct_zones and physical_flow_changes nodes:

https://github.com/ovn-org/ovn/commit/f9cab11d5fabe2ae321a3b4bad5972b61df958c0
https://github.com/ovn-org/ovn/commit/f9cab11d5fabe2ae321a3b4bad5972b61df958c0#diff-220cd89c1bf69b5cf68c6e9ea377[…]61c58aaf871f29f13d8cccd6cff1R2392

2021-05-19T18:27:02Z|03612|inc_proc_eng|DBG|node: ct_zones, recompute (triggered)
2021-05-19T18:27:02Z|03613|inc_proc_eng|DBG|controller/ovn-controller.c:1590: node: ct_zones, old_state Stale, new_state Updated
2021-05-19T18:27:02Z|03614|inc_proc_eng|DBG|node: physical_flow_changes, handle change for input ct_zones
2021-05-19T18:27:02Z|03615|inc_proc_eng|DBG|node: physical_flow_changes, can't handle change for input ct_zones, fall back to recompute
2021-05-19T18:27:02Z|03616|inc_proc_eng|DBG|node: physical_flow_changes, recompute (triggered)
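For anyone reproducing this, a quick way to gauge how badly a node is affected is to grep the ovn-controller log for the OVS "Unreasonably long ... poll interval" warnings and pull out the worst interval. This is a sketch; the sample log lines below are illustrative, and the exact warning wording is assumed to match the stock OVS timeval message format:

```shell
# Sample log excerpt (illustrative lines only, not from a real cluster).
cat > /tmp/ovn-controller.log <<'EOF'
2021-05-19T18:27:02Z|03610|timeval|WARN|Unreasonably long 21537ms poll interval (10ms user, 25ms system)
2021-05-19T18:27:02Z|03612|inc_proc_eng|DBG|node: ct_zones, recompute (triggered)
2021-05-19T18:27:45Z|03799|timeval|WARN|Unreasonably long 28412ms poll interval (12ms user, 30ms system)
EOF

# Count poll-interval warnings.
grep -c 'Unreasonably long' /tmp/ovn-controller.log

# Show the single worst poll interval seen in the log.
grep -o '[0-9]*ms poll interval' /tmp/ovn-controller.log | sort -n | tail -1
```

On a healthy node these warnings should be rare; a steady stream of them every main-loop iteration is consistent with the recompute storm described above.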
Also including Han's fix in this, which will greatly reduce the number of OpenFlow flows per ovn-controller and shorten the time a recompute takes.
Hi, I tested this BZ on a 4.9.0-0.nightly-2021-08-31-123131 cluster with 300 worker nodes and gathered two must-gather logs: one as soon as the cluster was ready and another after ~30 minutes. I did not see the errors mentioned in the original comment in the ovn-controller pod logs. Thanks, KK.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759