Bug 2038732 - Auto egressIP for OVN cluster on GCP: podSelector in egressIP configuration does not take effect after egressIP object is created
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.0
Assignee: Alexander Constantinescu
QA Contact: jechen
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-01-09 22:17 UTC by jechen
Modified: 2022-06-15 18:03 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-26 14:56:08 UTC
Target Upstream Version:
Embargoed:
jechen: needinfo-


Links
Github openshift/cluster-network-operator pull 1285 (Merged): Bug 2038732: Add egress* patch credentials for ovnkube-master (last updated 2022-01-26 03:32:28 UTC)
Github openshift/ovn-kubernetes pull 917 (Merged): Bug 2039099: EgressIP fixes for 4.10 (last updated 2022-01-26 03:32:27 UTC)

Description jechen 2022-01-09 22:17:02 UTC
Description of problem:
On an OVN-Kubernetes cluster on GCP, the podSelector did not take effect after an egressIP object was created with a podSelector specified.

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-08-215919   True        False         10m     Cluster version is 4.10.0-0.nightly-2022-01-08-215919


How reproducible:
$ oc get node
NAME                                                        STATUS   ROLES    AGE   VERSION
jechen-0109d-zf4bq-master-0.c.openshift-qe.internal         Ready    master   28m   v1.22.1+6859754
jechen-0109d-zf4bq-master-1.c.openshift-qe.internal         Ready    master   28m   v1.22.1+6859754
jechen-0109d-zf4bq-master-2.c.openshift-qe.internal         Ready    master   28m   v1.22.1+6859754
jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal   Ready    worker   18m   v1.22.1+6859754
jechen-0109d-zf4bq-worker-b-t2x66.c.openshift-qe.internal   Ready    worker   18m   v1.22.1+6859754


Steps to Reproduce:
1. label two nodes to be egressip-assignable
$ oc label node jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal "k8s.ovn.org/egress-assignable"=""
node/jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal labeled


$ oc label node jechen-0109d-zf4bq-worker-b-t2x66.c.openshift-qe.internal  "k8s.ovn.org/egress-assignable"=""
node/jechen-0109d-zf4bq-worker-b-t2x66.c.openshift-qe.internal labeled

2. create two different egressip objects with same namespaceSelector but different podSelector

$ cat config_egressip_ovn_ns_qe_podSelector_red.yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip1
spec:
  egressIPs:
  - 10.0.128.101
  - 10.0.128.102
  podSelector:
    matchLabels:
    team: red 
  namespaceSelector:
    matchLabels:
      department: qe 


$ cat config_egressip_ovn_ns_qe_podSelector_blue.yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip2
spec:
  egressIPs:
  - 10.0.128.201
  - 10.0.128.202
  podSelector:
    matchLabels:
    team: blue
  namespaceSelector:
    matchLabels:
      department: qe 

$ oc create -f ./SDN-1332-test/config_egressip_ovn_ns_qe_podSelector_red.yaml
egressip.k8s.ovn.org/egressip1 created

$ oc create -f ./SDN-1332-test/config_egressip_ovn_ns_qe_podSelector_blue.yaml
egressip.k8s.ovn.org/egressip2 created

$ oc get egressip
NAME        EGRESSIPS      ASSIGNED NODE                                               ASSIGNED EGRESSIPS
egressip1   10.0.128.101   jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal   10.0.128.101
egressip2   10.0.128.201   jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal   10.0.128.201

$ oc get egressip -oyaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    creationTimestamp: "2022-01-09T21:36:03Z"
    generation: 3
    name: egressip1
    resourceVersion: "31246"
    uid: ef69fe47-527b-4946-922a-c74526371c74
  spec:
    egressIPs:
    - 10.0.128.101
    - 10.0.128.102
    namespaceSelector:
      matchLabels:
        department: qe
    podSelector: {}
  status:
    items:
    - egressIP: 10.0.128.101
      node: jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal
    - egressIP: 10.0.128.102
      node: jechen-0109d-zf4bq-worker-b-t2x66.c.openshift-qe.internal
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    creationTimestamp: "2022-01-09T21:36:21Z"
    generation: 3
    name: egressip2
    resourceVersion: "31369"
    uid: 8314d7d2-1c69-40d3-bffe-a65979b5a2cd
  spec:
    egressIPs:
    - 10.0.128.201
    - 10.0.128.202
    namespaceSelector:
      matchLabels:
        department: qe
    podSelector: {}
  status:
    items:
    - egressIP: 10.0.128.201
      node: jechen-0109d-zf4bq-worker-a-8v98c.c.openshift-qe.internal
    - egressIP: 10.0.128.202
      node: jechen-0109d-zf4bq-worker-b-t2x66.c.openshift-qe.internal
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


3. create two projects and test pods in them; label each project with the namespaceSelector label, and label one test pod in each project with its podSelector label
$ oc new-project test1

$ oc create -f ./verification-tests/testdata/networking/list_for_pods.json
replicationcontroller/test-rc created
service/test-service created

$ oc get pod
NAME            READY   STATUS    RESTARTS   AGE
test-rc-62txp   1/1     Running   0          19s
test-rc-rmhk4   1/1     Running   0          19s

$ oc label ns test1 department=qe
namespace/test1 labeled

$ oc label pod test-rc-62txp team=red
pod/test-rc-62txp labeled

$ oc get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
test-rc-62txp   1/1     Running   0          47s   name=test-pods,team=red
test-rc-rmhk4   1/1     Running   0          47s   name=test-pods

$ oc new-project test2

$ oc create -f ./verification-tests/testdata/networking/list_for_pods.json
replicationcontroller/test-rc created

$ oc get pod
NAME            READY   STATUS    RESTARTS   AGE
test-rc-m7phf   1/1     Running   0          21s
test-rc-w2rj5   1/1     Running   0          21s

$ oc label ns test2 department=qe
namespace/test2 labeled

$ oc label pod test-rc-m7phf team=blue
pod/test-rc-m7phf labeled

$ oc get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
test-rc-m7phf   1/1     Running   0          59s   name=test-pods,team=blue
test-rc-w2rj5   1/1     Running   0          59s   name=test-pods


4. curl external from pod of each project

$ oc project test1

$ oc get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE     LABELS
test-rc-62txp   1/1     Running   0          6m13s   name=test-pods,team=red
test-rc-rmhk4   1/1     Running   0          6m13s   name=test-pods

$ oc rsh test-rc-62txp
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 

$ oc project test2

$ oc get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE     LABELS
test-rc-m7phf   1/1     Running   0          6m43s   name=test-pods,team=blue
test-rc-w2rj5   1/1     Running   0          6m43s   name=test-pods

$ oc rsh test-rc-m7phf
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 


Actual results:
Pods from both projects use the same egress IP addresses from egressip1, and podSelector is shown as {} in both the egressip1 and egressip2 objects.

Expected results:
Pods from the second project should use egress IPs from egressip2, and the actual podSelector values should be displayed in egressip1 and egressip2.

Additional info:

Comment 5 zhaozhanqi 2022-01-25 03:28:52 UTC
@jechen assigning this bug to you for verification, thanks

Comment 6 jechen 2022-01-26 01:03:59 UTC
podSelector in the egressip1 and egressip2 objects still shows {}


[jechen@jechen ~]$ oc get egressip -oyaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    creationTimestamp: "2022-01-26T00:04:39Z"
    generation: 3
    name: egressip1
    resourceVersion: "59483"
    uid: edf8efb3-d896-497d-b915-674a95a89fee
  spec:
    egressIPs:
    - 10.0.128.101
    - 10.0.128.102
    namespaceSelector:
      matchLabels:
        department: qe
    podSelector: {}                 <-------------------------------------------- did not display actual podSelector value
  status:
    items:
    - egressIP: 10.0.128.102
      node: jechen-0125b-qfrcn-worker-b-rgv8t.c.openshift-qe.internal
    - egressIP: 10.0.128.101
      node: jechen-0125b-qfrcn-worker-a-5pjsk.c.openshift-qe.internal
- apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    creationTimestamp: "2022-01-26T00:04:49Z"
    generation: 3
    name: egressip2
    resourceVersion: "59552"
    uid: 58118466-62e4-4f8e-87d7-e995db2aad37
  spec:
    egressIPs:
    - 10.0.128.201
    - 10.0.128.202
    namespaceSelector:
      matchLabels:
        department: qe
    podSelector: {}                                           <-------------------------------------------- did not display actual podSelector value
  status:
    items:
    - egressIP: 10.0.128.202
      node: jechen-0125b-qfrcn-worker-b-rgv8t.c.openshift-qe.internal
    - egressIP: 10.0.128.201
      node: jechen-0125b-qfrcn-worker-a-5pjsk.c.openshift-qe.internal
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""





$ oc project test1
Now using project "test1" on server "https://api.jechen-0125b.qe.gcp.devcluster.openshift.com:6443".

$ oc get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
test-rc-58mjb   1/1     Running   0          50m   name=test-pods,team=red
test-rc-hjzfq   1/1     Running   0          50m   name=test-pods

$ oc rsh test-rc-58mjb
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.202~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.202~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.202~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.202~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ exit

expected to see only 10.0.128.101 or 10.0.128.102 returned, as those are the egressIPs in the egressip1 object, but 10.0.128.202 was also returned
 
$ oc project test2
Now using project "test2" on server "https://api.jechen-0125b.qe.gcp.devcluster.openshift.com:6443".

$ oc get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
test-rc-6sm46   1/1     Running   0          50m   name=test-pods,team=blue
test-rc-tqnrm   1/1     Running   0          50m   name=test-pods

$ oc rsh test-rc-6sm46
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.101~ $ 
~ $ curl 10.0.0.2:8888
10.0.128.102~ $ 

expected to see only 10.0.128.201 or 10.0.128.202 returned, as those are the egressIPs in the egressip2 object, but 10.0.128.101 and 10.0.128.102 were returned instead

Comment 7 jechen 2022-01-26 01:07:43 UTC
rejecting the fix, change the state back to assigned

Comment 8 Alexander Constantinescu 2022-01-26 09:42:17 UTC
Most of what is described in comment 6 is incorrect.

The podSelectors being empty means that you didn't specify any. Both EgressIP objects you've defined match on only one thing: all namespaces with the label "department: qe". Moreover, both EgressIP objects match on the same namespace label; that behavior is undefined and considered a user error. All your pods match both EgressIP objects and are hence expected to use the egress IPs (10.0.128.101, 10.0.128.102) or (10.0.128.201, 10.0.128.202), which is also the case.
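[Editorial note] The empty podSelector in the stored objects is consistent with the indentation in the YAML from the reproduction steps: "team: red" is aligned with "matchLabels", so YAML parses it as a sibling of matchLabels under podSelector rather than a child of matchLabels, and the API server presumably prunes the unknown field. A minimal sketch of the two parses (assumes PyYAML is available):

```python
import yaml  # PyYAML, assumed available

# Indentation as written in the reproduction steps:
# "team: red" aligns with "matchLabels", so it becomes a
# sibling key under podSelector, and matchLabels stays empty.
as_reported = yaml.safe_load("""
podSelector:
  matchLabels:
  team: red
""")
print(as_reported)  # {'podSelector': {'matchLabels': None, 'team': 'red'}}

# With "team: red" nested one level deeper, it lands inside
# matchLabels, which is what the selector requires.
corrected = yaml.safe_load("""
podSelector:
  matchLabels:
    team: red
""")
print(corrected)  # {'podSelector': {'matchLabels': {'team': 'red'}}}
```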

Please see https://bugzilla.redhat.com/show_bug.cgi?id=2034477#c17 for an explanation on another bug where Huiran made the same mistake. 

@Jean: please let me know why you think this is a bug.
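[Editorial note] For reference, a sketch of how egressip1 was presumably intended, with the pod label nested under matchLabels (same names as in this report; per comment 8, the two objects would also need distinct namespace labels to avoid the undefined overlapping-match behavior):

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip1
spec:
  egressIPs:
  - 10.0.128.101
  - 10.0.128.102
  podSelector:
    matchLabels:
      team: red        # indented under matchLabels, not beside it
  namespaceSelector:
    matchLabels:
      department: qe
```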

