Bug 2088040 - [4.10.z backport] [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
Summary: [4.10.z backport] [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.10.z
Assignee: Jaime Caamaño Ruiz
QA Contact: Arti Sood
URL:
Whiteboard:
Duplicates: 2084173
Depends On: 2074009
Blocks:
 
Reported: 2022-05-18 17:37 UTC by Jaime Caamaño Ruiz
Modified: 2022-06-07 13:24 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-07 13:24:31 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/ovn-kubernetes pull 1099 (open): [release-4.10] Bug 2088040: Delete ChassisPrivate along with Chassis (last updated 2022-05-18 17:49:44 UTC)
Red Hat Product Errata RHBA-2022:4882 (last updated 2022-06-07 13:24:51 UTC)

Description Jaime Caamaño Ruiz 2022-05-18 17:37:10 UTC
This bug was initially created as a copy of Bug #2074009

I am copying this bug because: 



Description of problem:

After scaling a machineset from 6 down to 0 and back to 6, nodes are destroyed and recreated. Northd streams warnings:

$ oc logs -n openshift-ovn-kubernetes ovnkube-master-xxxxx --tail 4 -c northd
2022-04-09T08:17:32.480Z|01630|ovn_northd|WARN|Dropped 971 log messages in last 61 seconds (most recently, 1 seconds ago) due to excessive rate
2022-04-09T08:17:32.485Z|01631|ovn_northd|WARN|Chassis does not exist for Chassis_Private record, name: 14c78754-ba60-4ab8-a8db-ce0ca34baf5a
2022-04-09T08:18:35.499Z|01632|ovn_northd|WARN|Dropped 989 log messages in last 63 seconds (most recently, 5 seconds ago) due to excessive rate
2022-04-09T08:18:35.505Z|01633|ovn_northd|WARN|Chassis does not exist for Chassis_Private record, name: 14c78754-ba60-4ab8-a8db-ce0ca34baf5a

Indeed, the chassis records are cleaned up, but the chassis_private record is still there:
$ oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- ovn-sbctl list chassis_private 14c78754-ba60-4ab8-a8db-ce0ca34baf5a
_uuid               : e76fe362-2223-4dc2-b73c-18c7d1ee783b
chassis             : []
external_ids        : {}
name                : "14c78754-ba60-4ab8-a8db-ce0ca34baf5a"
nb_cfg              : 0
nb_cfg_timestamp    : 0
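
As a manual workaround until the fix lands, the orphaned rows can be located and removed with the generic ovn-sbctl database commands. This is only a sketch: the pod name is a placeholder, and the find expression assumes the stale rows are exactly those whose chassis reference is empty, as in the output above.

# list Chassis_Private rows whose chassis reference is already empty
$ oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- \
    ovn-sbctl --columns=_uuid,name find chassis_private chassis='[]'
# delete one orphaned row by its _uuid (e76fe362-... is the record shown above)
$ oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- \
    ovn-sbctl destroy chassis_private e76fe362-2223-4dc2-b73c-18c7d1ee783b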


Version-Release number of selected component (if applicable):
OpenShift:  4.10.6
Provider:   Azure
CNI:        OVNKubernetes

How reproducible:
100%

Steps to Reproduce:
1. Deploy OCP 4.10.6 on Azure
2. Scale the machineset down to 0, then back to its initial value (see the example commands after these steps)
3. OVN warns "Chassis does not exist for Chassis_Private record"
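
Example commands for step 2; the machineset name and replica count below are placeholders for illustration only:

$ oc -n openshift-machine-api get machinesets
$ oc -n openshift-machine-api scale machineset worker-azure-1 --replicas=0
# wait until the corresponding nodes are gone, then restore the original count
$ oc -n openshift-machine-api scale machineset worker-azure-1 --replicas=6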

Actual results:
OVN warns "Chassis does not exist for Chassis_Private record"

Expected results:
OVN should clean up all unneeded resources
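
One way to check this once the fix from the linked pull request is in place (a sketch; the pod name is a placeholder, and the counting assumes --bare prints one name per record): after the scale cycle, the number of Chassis rows, Chassis_Private rows and nodes should all match.

# nodes known to the cluster
$ oc get nodes --no-headers | wc -l
# Chassis and Chassis_Private rows in the southbound database
$ oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- \
    ovn-sbctl --bare --columns=name list chassis | grep -c .
$ oc exec -ti -n openshift-ovn-kubernetes ovnkube-master-xxxxx -c sbdb -- \
    ovn-sbctl --bare --columns=name list chassis_private | grep -c .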

Additional info:

Comment 3 Jaime Caamaño Ruiz 2022-06-01 14:17:56 UTC
*** Bug 2084173 has been marked as a duplicate of this bug. ***

Comment 7 errata-xmlrpc 2022-06-07 13:24:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.10.17 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:4882

