Bug 1940498

Summary: pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
Product: OpenShift Container Platform
Component: Networking
Networking sub component: ovn-kubernetes
Version: 4.7
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Severity: urgent
Priority: urgent
Status: CLOSED ERRATA
Type: Bug
Reporter: Tim Rozet <trozet>
Assignee: Tim Rozet <trozet>
QA Contact: Arti Sood <asood>
CC: asood, dcbw, memodi, rbrattai, wking
Clones: 1942702
Bug Blocks: 1942702
Last Closed: 2021-07-27 22:54:17 UTC

Description Tim Rozet 2021-03-18 14:54:28 UTC
Description of problem:
Due to a regression in OVN, executing an lr-nat-del and lr-nat-add in a single transaction causes the following error:
[root@ovn-control-plane ~]# ovn-nbctl lr-nat-del GR_`hostname` snat 10.244.0.0/16 -- lr-nat-add GR_`hostname` snat 172.18.0.3 10.244.0.0/16
ovn-nbctl: 172.18.0.3, 10.244.0.0/16: a NAT with this external_ip and logical_ip already exists

The above error appears in the ovnkube-master logs when this problem is encountered.
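
One hedged way to check a cluster for the symptom (the namespace, pod label, and container name below are assumptions based on the usual OVN-Kubernetes deployment on OCP, not something stated in this bug) is to grep the master logs for the error string:

oc -n openshift-ovn-kubernetes logs -l app=ovnkube-master -c ovnkube-master --tail=-1 \
  | grep "a NAT with this external_ip and logical_ip already exists"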

This may happen in multiple places in the ovn-kubernetes code, including:
1. During initial gateway init, when the active ovnkube-master node is restarted
2. When a pod is added and the SNAT entry may already exist on the gateway router

As a consequence, gateway nodes may not be configured correctly on bring-up, and pods may fail to be added.
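
For illustration only, the failure can be avoided at the CLI level by issuing the delete and the add as two separate ovn-nbctl transactions rather than one compound invocation; this is a sketch of a manual workaround using the values from the example above, not necessarily how the ovn-kubernetes fix is implemented:

# Sketch: run the NAT delete and add as separate transactions instead of
# chaining them with "--" in a single ovn-nbctl call.
ovn-nbctl lr-nat-del GR_`hostname` snat 10.244.0.0/16
ovn-nbctl lr-nat-add GR_`hostname` snat 172.18.0.3 10.244.0.0/16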

Comment 3 Tim Rozet 2021-03-24 14:23:08 UTC
Fixed by https://github.com/openshift/ovn-kubernetes/pull/472

Comment 5 Tim Rozet 2021-03-24 18:55:39 UTC
We found this is caused by a regression in OVN:
https://bugzilla.redhat.com/show_bug.cgi?id=1942707

Comment 11 W. Trevor King 2021-04-01 19:02:05 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.  The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.  Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.  When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label.  The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact?  Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 12 Tim Rozet 2021-04-01 20:32:43 UTC
Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
All customers upgrading from any version to 4.7.3, or customers launching 4.7.3 and then rebooting a node.

What is the impact?  Is it serious enough to warrant blocking edges?
OVN gateway initialization will fail and print a warning in the ovnkube-master log. The functional impact is that some pre-existing NodePort services may stop working. Some host -> service -> host-networked endpoint traffic may also fail, although this is unlikely.

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
If some NodePort services no longer function through the affected gateway, recreating those services should remedy the problem.
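
A minimal sketch of that remediation (namespace and service name are placeholders, and this assumes cluster-generated fields are stripped from the saved manifest before re-applying):

# Recreate an affected NodePort service; <namespace>/<service> are placeholders,
# and fields such as resourceVersion and clusterIP may need to be removed from
# the saved manifest before re-applying.
oc -n <namespace> get svc <service> -o yaml > service.yaml
oc -n <namespace> delete svc <service>
oc -n <namespace> apply -f service.yaml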

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
Yes, this is an OVN regression that only affects 4.7.3.

Comment 13 W. Trevor King 2021-05-18 04:08:16 UTC
Fixes went out in 4.7.6 [1] and 4.6.26 [2], and I haven't heard any noise about this since, so dropping UpgradeBlocker, but feel free to restore it if we want to revisit.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1942702#c10
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1945308#c5

Comment 16 errata-xmlrpc 2021-07-27 22:54:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

Comment 17 Mehul Modi 2021-09-27 19:50:12 UTC
Added test case for this: https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-44980