Bug 2062084 - [openshift-sdn] pods cannot be isolated when podselector is not matched for networkpolicy
Summary: [openshift-sdn] pods cannot be isolated when podselector is not matched for networkpolicy
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Riccardo Ravaioli
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-03-09 06:41 UTC by zhaozhanqi
Modified: 2022-03-09 13:57 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-09 13:57:27 UTC
Target Upstream Version:
Embargoed:



Description zhaozhanqi 2022-03-09 06:41:10 UTC
Description of problem:
Pods whose labels do not match the NetworkPolicy's ingress selectors can still access the pods selected by the policy.

e.g.

1. We have two namespaces, z1 and z2, carrying different labels, as shown below.

# oc get pod --show-labels -n z1 -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP            NODE                                         NOMINATED NODE   READINESS GATES   LABELS
hello-sdn-4hdcm   1/1     Running   0          3h1m   10.128.2.39   ip-10-0-130-194.us-east-2.compute.internal   <none>           <none>            name=hellosdn
hello-sdn-lkqn2   1/1     Running   0          3h1m   10.131.0.13   ip-10-0-203-92.us-east-2.compute.internal    <none>           <none>            name=hellosdn
test-rc-cffm9     1/1     Running   0          3h1m   10.128.2.38   ip-10-0-130-194.us-east-2.compute.internal   <none>           <none>            name=test-pods
test-rc-hvn55     1/1     Running   0          3h1m   10.129.2.15   ip-10-0-168-211.us-east-2.compute.internal   <none>           <none>            name=test-pods

# oc get pod --show-labels -n z2 -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP            NODE                                         NOMINATED NODE   READINESS GATES   LABELS
hello-sdn-bct5p   1/1     Running   0          173m   10.131.0.14   ip-10-0-203-92.us-east-2.compute.internal    <none>           <none>            name=hellosdn
hello-sdn-w8lms   1/1     Running   0          173m   10.129.2.19   ip-10-0-168-211.us-east-2.compute.internal   <none>           <none>            name=hellosdn
test-rc-tdfv6     1/1     Running   0          173m   10.129.2.18   ip-10-0-168-211.us-east-2.compute.internal   <none>           <none>            name=test-pods
test-rc-zqqcj     1/1     Running   0          173m   10.128.2.44   ip-10-0-130-194.us-east-2.compute.internal   <none>           <none>            name=test-pods


# oc get namespace z1 z2 --show-labels 
NAME   STATUS   AGE     LABELS
z1     Active   3h12m   kubernetes.io/metadata.name=z1
z2     Active   3h3m    kubernetes.io/metadata.name=z2,team=operations

2. Now I created the following networkpolicy in the z1 namespace, intending to allow only pods with the 'name=test-pods' label in namespaces with the 'team=operations' label to access the z1 pods with the 'name=hellosdn' label:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-ingress
spec:
  podSelector:
    matchLabels:
      name: hellosdn
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            team: operations
      - podSelector:
          matchLabels:
            name: test-pods
  policyTypes:
    - Ingress
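
The policy can be applied with, for example (assuming the manifest above is saved as policy.yaml, an illustrative filename):

# oc create -f policy.yaml -n z1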

3. After applying the above policy, all pods in z2 can access the z1 pods labeled 'name=hellosdn', even pods that do not have the 'name=test-pods' label:

#  oc exec -n z2 test-rc-tdfv6 -- curl 10.128.2.39:8080 2>/dev/null
Hello OpenShift!

# oc exec -n z2 hello-sdn-bct5p -- curl 10.128.2.39:8080 2>/dev/null
Hello OpenShift!




Version-Release number of selected component (if applicable):
4.9.23

How reproducible:
Always.

Steps to Reproduce:
See the description above.

Actual results:
All pods in z2 can access the z1 pods labeled 'name=hellosdn', regardless of their own labels.

Expected results:

The hello-sdn pods in namespace z2 should not be able to access the z1 hellosdn pods:

# oc exec -n z2 hello-sdn-bct5p -- curl 10.128.2.39:8080 2>/dev/null
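
With the policy working as intended, the curl above would be expected to hang and time out rather than return a response; for illustration (the --connect-timeout value and exact output are illustrative):

# oc exec -n z2 hello-sdn-bct5p -- curl --connect-timeout 5 10.128.2.39:8080
curl: (28) Connection timed out after 5001 milliseconds
command terminated with exit code 28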

Additional info:

ovn-kubernetes works as expected.
I suspect this issue affects not only 4.9 but all versions.

Comment 2 Riccardo Ravaioli 2022-03-09 12:07:15 UTC
Hi,

So according to the documentation (https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors), this is the intended behaviour: selectors that are each preceded by their own dash are OR'ed together.

Back to your example, the network policy allows incoming traffic (ingress) to pods with label name:hellosdn from:
- namespaces with label team:operations (so z2 in your example), *or*
- pods with label name:test-pods

It's effectively a union of the selectors in the "from" section, meaning that we are selecting all pods in z2 and also the pods in z1 with label name:test-pods (a podSelector on its own matches pods in the policy's own namespace).

If instead you wanted an intersection of the conditions specified by the selectors (i.e. pods in namespace z2 AND with label name:test-pods), you should remove the dash sign in front of podSelector, so that both selectors belong to the same entry:

  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            team: operations
        podSelector:
          matchLabels:
            name: test-pods
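
For completeness, the full corrected manifest would then be (identical to the policy in the description, with only the dash in front of podSelector removed so both selectors apply to the same "from" entry):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-ingress
spec:
  podSelector:
    matchLabels:
      name: hellosdn
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            team: operations
        podSelector:
          matchLabels:
            name: test-pods
  policyTypes:
    - Ingress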

Comment 3 zhaozhanqi 2022-03-09 13:57:27 UTC
Oh yes, thanks for catching this typo. After removing the dash, it works as expected.
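
For reference, a re-test with the corrected policy would be expected to give (using the same pods as in the description; the timeout flag and exact output are illustrative):

# oc exec -n z2 test-rc-tdfv6 -- curl 10.128.2.39:8080 2>/dev/null
Hello OpenShift!

# oc exec -n z2 hello-sdn-bct5p -- curl --connect-timeout 5 10.128.2.39:8080
curl: (28) Connection timed out after 5001 milliseconds
command terminated with exit code 28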

Closing this bug.

