Description of problem:

After scaling a machineset from 6 to 0, then back to 6, nodes are destroyed and recreated. northd then streams warnings:

$ oc logs -n openshift-ovn-kubernetes ovnkube-master-xxxxx --tail 4 -c northd
2022-04-09T08:17:32.480Z|01630|ovn_northd|WARN|Dropped 971 log messages in last 61 seconds (most recently, 1 seconds ago) due to excessive rate
2022-04-09T08:17:32.485Z|01631|ovn_northd|WARN|Chassis does not exist for Chassis_Private record, name: 14c78754-ba60-4ab8-a8db-ce0ca34baf5a
2022-04-09T08:18:35.499Z|01632|ovn_northd|WARN|Dropped 989 log messages in last 63 seconds (most recently, 5 seconds ago) due to excessive rate
2022-04-09T08:18:35.505Z|01633|ovn_northd|WARN|Chassis does not exist for Chassis_Private record, name: 14c78754-ba60-4ab8-a8db-ce0ca34baf5a

Indeed, while the Chassis records are cleaned up, the stale Chassis_Private record is still there:

$ oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- ovn-sbctl list chassis_private 14c78754-ba60-4ab8-a8db-ce0ca34baf5a
_uuid               : e76fe362-2223-4dc2-b73c-18c7d1ee783b
chassis             : []
external_ids        : {}
name                : "14c78754-ba60-4ab8-a8db-ce0ca34baf5a"
nb_cfg              : 0
nb_cfg_timestamp    : 0

Version-Release number of selected component (if applicable):
OpenShift: 4.10.6
Provider: Azure
CNI: OVNKubernetes

How reproducible:
100%

Steps to Reproduce:
1. Deploy OCP 4.10.6 on Azure
2. Scale a machineset down to 0, then back to its initial value
3. northd logs "Chassis does not exist for Chassis_Private record" warnings

Actual results:
OVN repeatedly warns "Chassis does not exist for Chassis_Private record"

Expected results:
OVN should clean up the stale Chassis_Private records along with the Chassis records

Additional info:
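A possible manual workaround (not verified in this report; the pod name ovnkube-master-xxxxx and the record UUID are the ones shown above, substitute your own) is to delete the leftover row with ovn-sbctl's generic database `destroy` command, then confirm the table no longer lists it:

```shell
# Delete the orphaned Chassis_Private row by its _uuid (assumed values from the report above)
oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- \
  ovn-sbctl destroy Chassis_Private e76fe362-2223-4dc2-b73c-18c7d1ee783b

# Verify no stale rows remain (each remaining row should reference a live chassis)
oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- \
  ovn-sbctl list chassis_private
```

This only clears the symptom on the current database; recreating the condition (another scale-down/scale-up cycle) will leave new stale records until the underlying cleanup bug is fixed.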
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069