Description of problem:
Egress IP with openshift-sdn is not functional on a worker node. We are able to map the egress IP on the impacted node, but after some time the egress IP details still appear in the "oc get hostsubnet" output while the IP is no longer present on the node's primary interface. The node sdn pod logs show the following error:

2022-03-08T10:20:25.055872094Z W0308 10:20:25.055771    4118 egressip.go:242] Node 10.78.46.24 is offline

While these error messages appear in the sdn logs, the egress IP is not moved to another node. We are also getting the following alert from Insights rules: "Node %s may be offline... retrying" appears in the sdn-controller log more than 5 times a minute for all nodes combined.

Version-Release number of selected component (if applicable):
OCP 4.8.28

How reproducible:
Reproducible in the customer environment on the impacted node.

Steps to Reproduce:
1.
2.
3.

Actual results:
Egress IP is not functional.

Expected results:
It should work as expected with automatic CIDR assignment.

Additional info:
A must-gather and sosreport captured at the time of the issue are available in the support shell under the following names:
drwxrwxrwx. 3 yank yank 76 Mar  9 07:57 0280-sosreport-jprocpuatapp01-0296610022-2022-03-09-gfcftrd.tar.xz
drwxrwxrwx. 3 yank yank 59 Mar  9 08:00 0290-must-gather.local.1936115711644553329.tar.xz
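The hostsubnet-vs-interface mismatch described above can be checked with commands like the following (a diagnostic sketch; the placeholder node and project names are assumptions, adapt them to the environment):

```shell
# Inspect the egress CIDRs / egress IPs recorded on the node's hostsubnet
oc get hostsubnet <node-name> -o yaml

# Inspect which egress IP the project's netnamespace expects to use
oc get netnamespace <project> -o yaml

# On the impacted node, confirm whether the egress IP is actually
# configured on the primary interface (it was missing in this case)
oc debug node/<node-name> -- chroot /host ip addr show
```

If the IP is listed under the hostsubnet's egressIPs but absent from the primary interface, that matches the symptom reported here.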
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069