Bug 1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
Summary: pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.8.0
Assignee: Tim Rozet
QA Contact: Arti Sood
URL:
Whiteboard:
Depends On:
Blocks: 1942702
 
Reported: 2021-03-18 14:54 UTC by Tim Rozet
Modified: 2021-11-22 15:12 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1942702 (view as bug list)
Environment:
Last Closed: 2021-07-27 22:54:17 UTC
Target Upstream Version:
Embargoed:




Links
System ID  Status  Summary  Last Updated
Github ovn-org/ovn-kubernetes pull 2105  closed  Fixes errors when adding NAT with existing entry  2021-03-18 15:01:17 UTC
Red Hat Product Errata RHSA-2021:2438  None  None  2021-07-27 22:54:48 UTC

Description Tim Rozet 2021-03-18 14:54:28 UTC
Description of problem:
Due to a regression in OVN, executing an lr-nat-del and lr-nat-add in a single transaction causes the following error:
[root@ovn-control-plane ~]# ovn-nbctl lr-nat-del GR_`hostname` snat 10.244.0.0/16 -- lr-nat-add GR_`hostname` snat 172.18.0.3 10.244.0.0/16
ovn-nbctl: 172.18.0.3, 10.244.0.0/16: a NAT with this external_ip and logical_ip already exists

The above error will show in the ovn-kube master logs when this problem is encountered.

This may happen in multiple places in the ovn-kubernetes code, including:
1. Initial gateway init, when the active ovnkube-master node is restarted
2. When a pod add happens where the SNAT entry may already exist for the gateway router

The consequences are that gateway nodes may not be configured correctly on bring-up, and pods may fail to be added.
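The failure mode can be sketched with a toy model (plain Python, not the actual OVN/OVSDB code, and simplified to track NAT entries as (external_ip, logical_ip) pairs): if the duplicate check performed by lr-nat-add runs against the database state from before the pending lr-nat-del in the same transaction, the combined command fails even though running the two commands back to back succeeds.

```python
# Toy model of why "lr-nat-del -- lr-nat-add" in one transaction can fail
# while two separate transactions succeed. This is an illustration only,
# not the real ovn-nbctl implementation.

class NatTable:
    def __init__(self):
        self.rows = set()  # set of (external_ip, logical_ip) pairs

    def transact(self, *ops):
        """Apply ops in order, but (mimicking the regression) check
        duplicates for 'add' against the pre-transaction snapshot,
        ignoring deletes pending in the same transaction."""
        snapshot = set(self.rows)
        pending = set(self.rows)
        for op, ext, log in ops:
            if op == "del":
                pending.discard((ext, log))
            elif op == "add":
                if (ext, log) in snapshot:  # buggy: ignores pending delete
                    raise RuntimeError(
                        f"{ext}, {log}: a NAT with this external_ip "
                        "and logical_ip already exists")
                pending.add((ext, log))
        self.rows = pending  # commit only if no op failed

t = NatTable()
t.rows.add(("172.18.0.3", "10.244.0.0/16"))

# Combined del + add in one transaction fails:
try:
    t.transact(("del", "172.18.0.3", "10.244.0.0/16"),
               ("add", "172.18.0.3", "10.244.0.0/16"))
except RuntimeError as e:
    print(e)  # 172.18.0.3, 10.244.0.0/16: a NAT with this ... already exists

# The same two operations as separate transactions succeed:
t.transact(("del", "172.18.0.3", "10.244.0.0/16"))
t.transact(("add", "172.18.0.3", "10.244.0.0/16"))
```

The failed transaction leaves the table untouched, which matches the observed behavior: the gateway router keeps its stale SNAT entry and the intended replacement is never applied.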

Comment 3 Tim Rozet 2021-03-24 14:23:08 UTC
Fixed by https://github.com/openshift/ovn-kubernetes/pull/472

Comment 5 Tim Rozet 2021-03-24 18:55:39 UTC
We found this is caused by a regression in OVN:
https://bugzilla.redhat.com/show_bug.cgi?id=1942707

Comment 11 W. Trevor King 2021-04-01 19:02:05 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.  The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.  Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.  When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label.  The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact?  Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 12 Tim Rozet 2021-04-01 20:32:43 UTC
Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
All customers upgrading from any version to 4.7.3, or a customer launching 4.7.3 and rebooting their node.

What is the impact?  Is it serious enough to warrant blocking edges?
OVN gateway initialization will fail and print a warning in the ovnkube-master log. The functional impact is that some pre-existing node port services may stop working. Some host -> service -> host networked endpoints may also fail; however, this is unlikely.

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
If some node port services no longer function through the affected gateway, a user can recreate them, which should remedy the problem.

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
Yes, OVN regression that only affects 4.7.3.

Comment 13 W. Trevor King 2021-05-18 04:08:16 UTC
Fixes went out in 4.7.6 [1] and 4.6.26 [2], and I haven't heard any noise about this since, so dropping UpgradeBlocker, but feel free to restore it if we want to revisit.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1942702#c10
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1945308#c5

Comment 16 errata-xmlrpc 2021-07-27 22:54:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

Comment 17 Mehul Modi 2021-09-27 19:50:12 UTC
Added test case for this: https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-44980

