Description of problem:
[OVN] EgressIP does not take effect on the latest nightly builds.

Version-Release number of the following components:
4.6.0-0.nightly-2020-09-21-030155

How reproducible:
Always

Steps to reproduce:
1. Label a node as egress-assignable:
   oc label node compute-0 "k8s.ovn.org/egress-assignable"=""
   node/compute-0 labeled

2. Apply the EgressIP config file (an example of the applied manifest is shown at the end of this report). The EgressIP is created and assigned to compute-0:
   oc get egressip
   NAME       EGRESSIPS        ASSIGNED NODE   ASSIGNED EGRESSIPS
   egressip   136.144.52.215   compute-0       136.144.52.215

   oc get egressip -o yaml
   apiVersion: v1
   items:
   - apiVersion: k8s.ovn.org/v1
     kind: EgressIP
     metadata:
       creationTimestamp: "2020-09-22T02:00:55Z"
       generation: 3
       managedFields:
       - apiVersion: k8s.ovn.org/v1
         fieldsType: FieldsV1
         fieldsV1:
           f:spec:
             .: {}
             f:egressIPs: {}
             f:namespaceSelector:
               .: {}
               f:matchLabels:
                 .: {}
                 f:team: {}
         manager: oc
         operation: Update
         time: "2020-09-22T02:00:55Z"
       - apiVersion: k8s.ovn.org/v1
         fieldsType: FieldsV1
         fieldsV1:
           f:spec:
             f:podSelector: {}
           f:status:
             .: {}
             f:items: {}
         manager: ovnkube
         operation: Update
         time: "2020-09-22T02:01:57Z"
       name: egressip
       resourceVersion: "58065"
       selfLink: /apis/k8s.ovn.org/v1/egressips/egressip
       uid: 4099b65e-1334-4c0d-8bb8-ad6e11409739
     spec:
       egressIPs:
       - 136.144.52.215
       namespaceSelector:
         matchLabels:
           team: red
       podSelector: {}
     status:
       items:
       - egressIP: 136.144.52.215
         node: compute-0
   kind: List
   metadata:
     resourceVersion: ""
     selfLink: ""

3. Create namespace "test" and pods in it:
   oc get pods -n test
   NAME        READY   STATUS    RESTARTS   AGE
   hello-pod   1/1     Running   0          21m

4. Label the namespace with team=red:
   oc get ns test --show-labels
   NAME   STATUS   AGE   LABELS
   test   Active   22m   team=red

5. From the test pod, access an external service:
   oc rsh -n test hello-pod
   / # curl ifconfig.me
   136.144.52.213

   oc get nodes -o wide
   NAME              STATUS   ROLES    AGE   VERSION           INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                                                        KERNEL-VERSION                 CONTAINER-RUNTIME
   compute-0         Ready    worker   53m   v1.19.0+7f9e863   136.144.52.210   136.144.52.210   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
   compute-1         Ready    worker   53m   v1.19.0+7f9e863   136.144.52.213   136.144.52.213   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
   control-plane-0   Ready    master   63m   v1.19.0+7f9e863   136.144.52.211   136.144.52.211   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
   control-plane-1   Ready    master   63m   v1.19.0+7f9e863   136.144.52.214   136.144.52.214   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
   control-plane-2   Ready    master   64m   v1.19.0+7f9e863   136.144.52.196   136.144.52.196   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8

Actual Result:
The pod's egress traffic used a node IP (136.144.52.213, compute-1's IP), not the configured egress IP (136.144.52.215). The following error is logged by ovn-kubernetes:

   02:02:25.555597 1 egressip.go:194] Unable to add pod: test/hello-pod to EgressIP: egressip, err: unable to create logical router policy for status: {compute-0 136.144.52.215}, err: unable to retrieve node's: compute-0 gateway IP, err: timed out waiting for the condition

Expected Result:
EgressIP takes effect: traffic from pods in the selected namespace egresses with source IP 136.144.52.215.
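For reference, the config file applied in step 2 was not attached; based on the spec shown in the oc get egressip -o yaml output above, an equivalent manifest would look like the following (the filename egressip.yaml is illustrative):

   apiVersion: k8s.ovn.org/v1
   kind: EgressIP
   metadata:
     name: egressip
   spec:
     egressIPs:
     - 136.144.52.215          # the egress IP assigned to compute-0 in the status above
     namespaceSelector:
       matchLabels:
         team: red             # matches the team=red label added to the test namespace in step 4
     # podSelector omitted: an empty selector selects all pods in the matched namespaces

   oc apply -f egressip.yaml
   oc label ns test team=red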
PR https://github.com/openshift/ovn-kubernetes/pull/279 merged last night and contains the egress IP fixes, so moving this to MODIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196