Bug 1881269 - [OVN] EgressIP does NOT take effect on latest nightly builds.
Summary: [OVN] EgressIP does NOT take effect on latest nightly builds.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.6.0
Assignee: Alexander Constantinescu
QA Contact: huirwang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-22 02:34 UTC by huirwang
Modified: 2020-10-27 16:43 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:43:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ovn-org ovn-kubernetes pull 1677 0 None closed Fix egress IP for new local gateway mode 2021-02-15 10:54:43 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:43:58 UTC

Description huirwang 2020-09-22 02:34:39 UTC
Description of problem:
[OVN] EgressIP does NOT take effect on latest nightly builds.

Version-Release number of the following components: 
4.6.0-0.nightly-2020-09-21-030155


How reproducible: 
Always 


Steps to reproduce:
1.  oc label node compute-0  "k8s.ovn.org/egress-assignable"=""
node/compute-0 labeled

2. Apply egressIP config file.
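The applied config file itself is not attached; judging from the spec reported by `oc get egressip -o yaml` below, the manifest presumably looked like this (resource name, egress IP, and selector are taken from that output):

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip
spec:
  # Egress IP to assign to a node labeled k8s.ovn.org/egress-assignable
  egressIPs:
  - 136.144.52.215
  # Applies to all pods in namespaces labeled team=red
  namespaceSelector:
    matchLabels:
      team: red
```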

oc get egressip
NAME       EGRESSIPS        ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip   136.144.52.215   compute-0       136.144.52.215
huiran-mac:script hrwang$ oc get egressip -o yaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    creationTimestamp: "2020-09-22T02:00:55Z"
    generation: 3
    managedFields:
    - apiVersion: k8s.ovn.org/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          .: {}
          f:egressIPs: {}
          f:namespaceSelector:
            .: {}
            f:matchLabels:
              .: {}
              f:team: {}
      manager: oc
      operation: Update
      time: "2020-09-22T02:00:55Z"
    - apiVersion: k8s.ovn.org/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:podSelector: {}
        f:status:
          .: {}
          f:items: {}
      manager: ovnkube
      operation: Update
      time: "2020-09-22T02:01:57Z"
    name: egressip
    resourceVersion: "58065"
    selfLink: /apis/k8s.ovn.org/v1/egressips/egressip
    uid: 4099b65e-1334-4c0d-8bb8-ad6e11409739
  spec:
    egressIPs:
    - 136.144.52.215
    namespaceSelector:
      matchLabels:
        team: red
    podSelector: {}
  status:
    items:
    - egressIP: 136.144.52.215
      node: compute-0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


3. Create ns test and pods in it.
 oc get pods -n test
NAME        READY   STATUS    RESTARTS   AGE
hello-pod   1/1     Running   0          21m
4. Label ns team=red
oc get ns test --show-labels
NAME   STATUS   AGE   LABELS
test   Active   22m   team=red
5. From the test pod, access an external host to check the source IP:
oc rsh -n test hello-pod
/ # curl ifconfig.me
136.144.52.213/ # 


 oc get nodes -o wide
NAME              STATUS   ROLES    AGE   VERSION           INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
compute-0         Ready    worker   53m   v1.19.0+7f9e863   136.144.52.210   136.144.52.210   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
compute-1         Ready    worker   53m   v1.19.0+7f9e863   136.144.52.213   136.144.52.213   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
control-plane-0   Ready    master   63m   v1.19.0+7f9e863   136.144.52.211   136.144.52.211   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
control-plane-1   Ready    master   63m   v1.19.0+7f9e863   136.144.52.214   136.144.52.214   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8
control-plane-2   Ready    master   64m   v1.19.0+7f9e863   136.144.52.196   136.144.52.196   Red Hat Enterprise Linux CoreOS 46.82.202009182140-0 (Ootpa)   4.18.0-193.23.1.el8_2.x86_64   cri-o://1.19.0-18.rhaos4.6.gitd802e19.el8


Actual Result:
The pod's traffic used the node IP (136.144.52.213), not the egress IP (136.144.52.215).

02:02:25.555597       1 egressip.go:194] Unable to add pod: test/hello-pod to EgressIP: egressip, err: unable to create logical router policy for status: {compute-0 136.144.52.215}, err: unable to retrieve node's: compute-0 gateway IP, err: timed out waiting for the condition

Expected Result:
Traffic from the selected pods should leave the cluster with the egress IP (136.144.52.215) as its source address.

Comment 5 Alexander Constantinescu 2020-09-24 13:06:26 UTC
PR https://github.com/openshift/ovn-kubernetes/pull/279 merged last night; it contains the egress IP fixes, so I am moving this to MODIFIED.

Comment 10 errata-xmlrpc 2020-10-27 16:43:44 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

