Description of problem:

Due to a regression in OVN, executing an lr-nat-del and lr-nat-add in a single transaction causes the following error:

  [root@ovn-control-plane ~]# ovn-nbctl lr-nat-del GR_`hostname` snat 10.244.0.0/16 -- lr-nat-add GR_`hostname` snat 172.18.0.3 10.244.0.0/16
  ovn-nbctl: 172.18.0.3, 10.244.0.0/16: a NAT with this external_ip and logical_ip already exists

This error appears in the ovn-kube master logs when the problem is encountered. It may be triggered in multiple places in the ovn-kubernetes code, including:

1. Initial gateway init, when the active ovnkube-master node is restarted
2. A pod add where the SNAT entry may already exist for the gateway router

As a consequence, gateway nodes may not be configured correctly on bring-up, and pod adds may fail.
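Since the error only occurs when the delete and the add are chained with `--` into one transaction, one possible mitigation (a sketch only, not the fix that ultimately landed; it assumes ovn-nbctl is on PATH and reuses the addresses from the reproducer above) is to issue them as two separate ovn-nbctl invocations:

```shell
# Workaround sketch (assumption, not from this report): split the NAT
# delete and add into two separate ovn-nbctl transactions instead of
# chaining them with `--` in a single one.
gr="GR_$(hostname)"
if command -v ovn-nbctl >/dev/null 2>&1; then
  # Two invocations means two OVSDB transactions, so the add no longer
  # sees the stale NAT entry from the same transaction.
  ovn-nbctl lr-nat-del "$gr" snat 10.244.0.0/16
  ovn-nbctl lr-nat-add "$gr" snat 172.18.0.3 10.244.0.0/16
else
  msg="ovn-nbctl not found; skipping"
  echo "$msg"
fi
```

This trades atomicity for correctness: there is a brief window between the two transactions where no SNAT entry exists for the subnet.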
Fixed by https://github.com/openshift/ovn-kubernetes/pull/472
We found this is caused by a regression in OVN: https://bugzilla.redhat.com/show_bug.cgi?id=1942707
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context, and the ImpactStatementRequested label has been added to this bug. When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this; we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1
Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?

All customers upgrading from any version to 4.7.3, or a customer launching 4.7.3 and rebooting a node.

What is the impact? Is it serious enough to warrant blocking edges?

OVN gateway initialization will fail and print a warning in the ovnkube-master log. The functional impact is that some pre-existing NodePort services may stop working. Some host -> service -> host networked endpoints may also fail, though this is unlikely.

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?

If some NodePort services no longer function through the affected gateway, the user can recreate those NodePort services, which should remedy the problem.

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?

Yes, an OVN regression that only affects 4.7.3.
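The recreation step mentioned above could look like the following sketch; it assumes `oc` access to the affected cluster, and the namespace and service names are placeholders, not taken from this report:

```shell
# Hypothetical remediation sketch: recreate an affected NodePort service
# so its entries are reprogrammed on the gateway router.
# "my-ns" and "my-service" are placeholder names, not from this report.
if command -v oc >/dev/null 2>&1; then
  # Save the service definition, delete it, then re-apply it.
  oc -n my-ns get service my-service -o yaml > /tmp/my-service.yaml
  oc -n my-ns delete service my-service
  oc -n my-ns apply -f /tmp/my-service.yaml
else
  msg="oc not found; skipping"
  echo "$msg"
fi
```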
Fixes went out in 4.7.6 [1] and 4.6.26 [2], and I haven't heard any noise about this since, so dropping UpgradeBlocker, but feel free to restore it if we want to revisit.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1942702#c10
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1945308#c5
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438
Added test case for this: https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-44980