+++ This bug was initially created as a clone of Bug #1533153 +++

Description of problem:
After a "simple" restart of a node (such that it doesn't need to re-setup the SDN), newly-created egress IPs won't work correctly.

Steps to Reproduce:
1. Start a cluster
2. On one of the nodes, "systemctl restart atomic-openshift-node"
3. Add an auto-egress-IP to that node

Actual results:
Newly-added egress IP doesn't work, and there's an ovs-related error in the journal for that node

Expected results:
Works

--- Additional comment from Dan Winship on 2018-01-10 10:36:48 EST ---

https://github.com/openshift/origin/pull/18049 fixes this for master. I'll update when there's a backport to 3.7.
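For reference, a rough sketch of the reproduction on a 3.7-era cluster. The node name, project, and IP below are placeholders, and this assumes the egress IP is assigned manually via HostSubnet/NetNamespace patches (rather than via automatic allocation):

  # Restart the node service without tearing down the SDN
  systemctl restart atomic-openshift-node

  # Assign an egress IP to the restarted node and to a project (placeholder values)
  oc patch hostsubnet node1.example.com --type=merge -p '{"egressIPs": ["10.0.0.100"]}'
  oc patch netnamespace myproject --type=merge -p '{"egressIPs": ["10.0.0.100"]}'

  # Check the node journal for the ovs-related error
  journalctl -u atomic-openshift-node | grep -i ovs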
fyi I was waiting for https://github.com/openshift/origin/pull/18121 to merge so I could backport them together rather than having two separate bugs
https://github.com/openshift/ose/pull/1042
Tested on OCP v3.7.29. The egress IP works well after the node service is restarted on the egress node, and it also works well when the order of the OPENSHIFT-MASQUERADE and KUBE-POSTROUTING chains is switched.
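For reference, the relative order of those two chains in the nat table can be inspected on the node with something like the following (the jump order in the output shows which chain POSTROUTING evaluates first):

  # Show the POSTROUTING rules in the nat table, including the jumps to
  # OPENSHIFT-MASQUERADE and KUBE-POSTROUTING
  iptables -t nat -S POSTROUTING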
*** Bug 1547008 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636