Description of problem:
After updating to 4.8.12, egress IPs are not scheduled to any node via automatic namespace allocation using the openshift-SDN network type.
We are experiencing a total loss of all egress IP addresses (previously functional in 4.7.x, pre-update) across all nodes: the egress IPs are simply not being allocated to the nodes' internal networks, despite validating the egress CIDR and egress IP allocation in automatic mode following the guidance here:
Version-Release number of selected component (if applicable):
Always. (We have manually created new projects and new egress addresses, and removed SDN pods and allowed them to rebuild. Egress IPs are assigned correctly in the hostsubnet and netnamespace lists, but do not appear when queried locally on the nodes with `ip a | grep <missing_egress_ip>`.)
Steps to Reproduce:
1. Observe that egress IPs are allocated via `oc get hostsubnet -o yaml` and `oc get netnamespace -o yaml`.
2. Observe that the SDN pods are running successfully and all nodes are in READY status.
3. Observe that pods use internal IP traffic as if the namespace's egress IP were not there.
4. SSH to a node and observe that the egress IP is missing from its network interface list.
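The checks above can be sketched as the following commands, run against a live cluster. This is only an illustration: the node name, project, and egress IP are hypothetical placeholders, not values from this report.

```shell
# Hypothetical placeholders for the affected node, project, and egress IP.
NODE=worker-0
PROJECT=my-app
EGRESS_IP=10.0.0.100

# 1. The egress IP shows up in the cluster-level resources...
oc get hostsubnet "$NODE" -o yaml | grep -A1 egressIPs
oc get netnamespace "$PROJECT" -o yaml | grep -A1 egressIPs

# 2. SDN pods are running and all nodes are Ready.
oc get pods -n openshift-sdn -o wide
oc get nodes

# 3./4. ...but the IP is missing from the node's interfaces
# (in the failure mode, this grep returns nothing).
oc debug node/"$NODE" -- chroot /host ip a | grep "$EGRESS_IP"
```

In the failure mode, the first two queries show the egress IP as allocated while the final `ip a` check on the host finds no matching address.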
- Egress IPs assigned to a namespace are allocated but not actually scheduled to any node. The SDN pods do not log errors about duplicate namespace allocation or an incorrect CIDR.
- The egress IP is missing from the host node.
The egress IP should be available and functional.
This is a fairly large dev cluster with over 200 nodes; the egress failure is blocking the upgrade of the prod cluster and impacting dev teams.
Linking case 03044075 to this ticket. We have seen a second production cluster impacted similarly after the 4.8.12 update, with very similar issues currently being worked on; linking that as well.
@jtanenba, first, apologies for the empty comments. I requested a fresh gather from the customer after the failures of last night. That file bundle is available here: https://attachments.access.redhat.com/hydra/rest/cases/03044075/attachments/2cdf62bd-8f52-492a-b838-30e63a742fc7
As the customer confirmed, they observe the following:
- The openshift-sdn daemonset rollout fails to delete some of the old pods.
- If they delete the old sdn daemon pods manually, the new revision runs properly.
- This heals egress traffic until some change happens (such as allocating a new egress IP or a new application namespace).
The following workaround seems to work:
- Delete the openshift-sdn pod on the node where the egress IP is allocated; this causes the secondary IP address to appear on the node's interface.
- Delete the openshift-sdn pod on the node where the application pod is scheduled; this enables application traffic through the egress IP.
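The workaround above can be sketched as `oc` commands. The node names are hypothetical placeholders; the `app=sdn` label is assumed to select the openshift-sdn daemonset pods, which the daemonset then recreates automatically.

```shell
# Hypothetical placeholders: EGRESS_NODE hosts the egress IP,
# APP_NODE runs the application pod.
EGRESS_NODE=worker-0
APP_NODE=worker-1

# Deleting the sdn pod forces the daemonset to recreate it,
# which re-runs the egress IP setup on that node.
oc delete pod -n openshift-sdn -l app=sdn \
  --field-selector spec.nodeName="$EGRESS_NODE"
oc delete pod -n openshift-sdn -l app=sdn \
  --field-selector spec.nodeName="$APP_NODE"

# Verify the secondary address has reappeared on the egress node.
EGRESS_IP=10.0.0.100
oc debug node/"$EGRESS_NODE" -- chroot /host ip a | grep "$EGRESS_IP"
```

Note that, per the report, this only heals traffic until the next change (a new egress IP or a new application namespace) retriggers the failure.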
Looking forward to updates,
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.
The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.
Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.
When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label.
The expectation is that the assignee answers these questions.
Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact? Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this; we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1
> Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
All customers using egress IPs with openshift-sdn on OCP >= 4.8. It doesn't matter whether they upgraded to that OCP version or installed a fresh cluster with it.
> What is the impact? Is it serious enough to warrant blocking edges?
There is a particular code path that can cause a dedicated egress IP goroutine to deadlock. This can happen in any sdn pod on any node in the cluster. Once it does, the egress IP functionality stops working, and pods matching egress IPs may lose external connectivity altogether.
> How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
It can't be mitigated completely. Once it happens, the only (temporary) solution is to restart the SDN pod experiencing the deadlock on the node whose egress IP setup is broken. That said, a more stable cluster environment may reduce the chances of hitting the deadlock; "stable" here means no node reboots, no egress IP changes, and no changes to the netNamespace or hostSubnet OpenShift resources.
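The temporary remediation described above can be sketched as a check-and-restart sequence. The node name and egress IP are hypothetical placeholders, and the `app=sdn` label selector is an assumption about how the openshift-sdn daemonset pods are labeled.

```shell
# Hypothetical placeholders: the node whose egress IP setup is
# suspected to be stuck, and the address that should be present.
NODE=worker-0
EGRESS_IP=10.0.0.100

# If the egress IP is absent from the host's interfaces, restart
# that node's sdn pod; the daemonset recreates it and the fresh
# pod re-runs the egress IP setup.
if ! oc debug node/"$NODE" -- chroot /host ip a | grep -q "$EGRESS_IP"; then
  oc delete pod -n openshift-sdn -l app=sdn \
    --field-selector spec.nodeName="$NODE"
fi
```

This is reactive only: it recovers a node that has already hit the deadlock, but does not prevent the deadlock from recurring.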
> Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
Yes. All 4.8.z stream versions are currently impacted, but 4.7 is fine. However, upgrading from 4.7 to 4.8 carries an increased risk of exposure because of the entropy the upgrade introduces on the cluster: all nodes are usually rebooted multiple times, and all pods on the cluster are restarted at some point.
Based on the impact statement in comment 51, we've stopped recommending updates from 4.7 to any of the existing 4.8.z. Updates from 4.7 to 4.8 will be recommended again once bug 2014166 ships with a 4.8.z fix.
Linked BZ clone update:
https://bugzilla.redhat.com/show_bug.cgi?id=2014166 has been closed as ERRATA, linked to available update: 4.8.17
https://bugzilla.redhat.com/show_bug.cgi?id=2013707 has been closed as ERRATA, linked to available update: 4.9.4
*** Bug 2003634 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.