Bug 1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Importance: high / medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: rdobosz
QA Contact: Jon Uriarte
URL:
Whiteboard:
Depends On:
Blocks: 1930017 1941941
 
Reported: 2021-01-28 18:43 UTC by Caden Marchese
Modified: 2024-03-25 18:02 UTC
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1930017 (view as bug list)
Environment:
Last Closed: 2021-07-27 22:37:10 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift kuryr-kubernetes pull 459 0 None open Bug 1921878: Narrow connection to the cluster only on namespaceSelector 2021-02-17 08:08:57 UTC
Github openshift kuryr-kubernetes pull 498 0 None open Bug 1921878: Include service subnet to be open for namespaceSelector set to all. 2021-04-14 05:14:53 UTC
Red Hat Product Errata RHSA-2021:2438 0 None None None 2021-07-27 22:37:37 UTC

Description Caden Marchese 2021-01-28 18:43:15 UTC
Description of problem:
Using the following NetworkPolicy in Kuryr allows egress everywhere (0.0.0.0/0), while the same policy applied in OVN-Kubernetes allows egress only to other cluster components.

Version-Release number of selected component (if applicable):
4.6

Steps to reproduce:

1. Create the following NetworkPolicy on a Kuryr-enabled cluster:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: networkpolicy-example
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress
  egress:
  - to:
    - namespaceSelector: {}

Expected results:

Egress should be allowed to all namespaces in the cluster (i.e. the whole pod and service networks). Egress outside of the cluster should not be allowed. This is the behavior of the above NetworkPolicy when using OVN-Kubernetes instead of Kuryr.

Actual results:

The namespaceSelector: {} is mapped to 0.0.0.0/0 and ::/0 rules, which leaves egress wide open.
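The faulty translation can be illustrated with a short Python sketch. The function and variable names here are hypothetical, not Kuryr's actual code; the CIDRs are the cluster networks seen in the outputs below:

```python
POD_NETWORK = "10.128.0.0/14"      # cluster pod network (from knp output below)
SERVICE_NETWORK = "172.30.0.0/15"  # service network

def egress_prefixes(namespace_selector, fixed=True):
    """Return the remote_ip_prefix values generated for an egress rule
    with a namespaceSelector. The buggy behaviour mapped an empty
    selector ({} = "all namespaces") to the entire address space; the
    fix narrows it to the cluster's pod and service networks."""
    if namespace_selector == {}:
        if not fixed:
            return ["0.0.0.0/0", "::/0"]       # buggy: egress wide open
        return [POD_NETWORK, SERVICE_NETWORK]  # fixed: cluster only
    # Selectors matching specific namespaces resolve to their subnets
    # (not sketched here).
    raise NotImplementedError

print(egress_prefixes({}, fixed=False))  # ['0.0.0.0/0', '::/0']
print(egress_prefixes({}, fixed=True))   # ['10.128.0.0/14', '172.30.0.0/15']
```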

$ oc get kuryrnetworkpolicy networkpolicy-example -o yaml

(...)
status:
  podSelector: {}
  securityGroupId: 4b96db9a-bcf9-4d40-98ff-4e75bb9b5a58
  securityGroupRules:
  - description: Kuryr-Kubernetes NetPolicy SG rule
    direction: ingress
    ethertype: IPv4
    id: c0132a10-3180-46b3-9144-d11b12674114
    remote_ip_prefix: 172.40.0.0/16
    security_group_id: 4b96db9a-bcf9-4d40-98ff-4e75bb9b5a58
  - description: Kuryr-Kubernetes NetPolicy SG rule
    direction: egress
    ethertype: IPv4
    id: 7bf863ce-09ec-4008-a3c3-8cb0f70a11e9
    port_range_max: 65535
    port_range_min: 1
    protocol: tcp
    security_group_id: 4b96db9a-bcf9-4d40-98ff-4e75bb9b5a58
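Note that the egress rule above carries no remote_ip_prefix at all; on an OpenStack security group rule that means "any destination", which is equivalent to 0.0.0.0/0. A quick sketch (illustrative helper, not part of Kuryr) for flagging such rules in a dumped status:

```python
def wide_open_egress(sg_rules):
    """Return egress rules whose destination is unrestricted. A missing
    remote_ip_prefix on an OpenStack SG rule also means 'any address',
    so it is treated the same as 0.0.0.0/0 or ::/0."""
    open_prefixes = {None, "0.0.0.0/0", "::/0"}
    return [r for r in sg_rules
            if r.get("direction") == "egress"
            and r.get("remote_ip_prefix") in open_prefixes]

# The two rules from the status above, reduced to the relevant fields:
rules = [
    {"direction": "ingress", "remote_ip_prefix": "172.40.0.0/16"},
    {"direction": "egress", "protocol": "tcp",
     "port_range_min": 1, "port_range_max": 65535},
]
print(wide_open_egress(rules))  # flags the prefix-less egress rule
```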

Comment 2 rlobillo 2021-02-22 10:59:24 UTC
Verified on OCP4.8.0-0.nightly-2021-02-21-102854 on OSP13(2021-01-20.1) with Amphora provider.

SG rules generated by the NP resource definition below allow traffic to other namespaces but not
to the outside:

$ cat np_bz1921878.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: np-bz1921878
spec:
  podSelector:
    matchLabels:
      run: demo
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - namespaceSelector: {}

Steps:

1. Create test and test2 projects, each with a kuryr/demo pod exposed by a service on port 80:

$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc expose pod/demo --port 80 --target-port 8080

$ oc get all -n test
NAME       READY   STATUS    RESTARTS   AGE
pod/demo   1/1     Running   0          40m

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.138.91   <none>        80/TCP    40m

$ oc new-project test2
$ oc run --image kuryr/demo demo2
$ oc expose pod/demo2 --port 80 --target-port 8080

$ oc get all -n test2
NAME        READY   STATUS    RESTARTS   AGE
pod/demo2   1/1     Running   0          3m

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/demo2   ClusterIP   172.30.4.47   <none>        80/TCP    2m39s


2. Apply np on demo pod in test project:

$ cat np_bz1921878.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: np-bz1921878
spec:
  podSelector:
    matchLabels:
      run: demo
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - namespaceSelector: {}

$ oc apply -f np_bz1921878.yaml 
networkpolicy.networking.k8s.io/np-bz1921878 created

# The knp resource is created and no egress rule to 0.0.0.0/0 is created:

$ oc get knp/np-bz1921878 -o json | jq .spec
{
  "egressSgRules": [
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "egress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.128.0.0/14"
      }
    },
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "egress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "172.30.0.0/15"
      }
    }
  ],
  "ingressSgRules": [
    {
      "namespace": "test",
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.128.124.0/23"
      }
    },
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "172.30.0.0/15"
      }
    },
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.196.0.0/16"
      }
    }
  ],
  "podSelector": {
    "matchLabels": {
      "run": "demo"
    }
  },
  "policyTypes": [
    "Egress",
    "Ingress"
  ]
}

# Connectivity tests (pods in the other namespace are reachable, outside access is not):

$ oc rsh -n test pod/demo

~ $ curl 172.30.4.47
demo2: HELLO! I AM ALIVE!!!
~ $ curl 10.128.126.175:8080
demo2: HELLO! I AM ALIVE!!!
~ $ curl www.google.com
^C
~ $ ping www.google.com
PING www.google.com (142.250.179.196) 56(84) bytes of data.
^C
--- www.google.com ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5125ms
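The connectivity results line up with the generated egress prefixes: both destinations that worked fall inside the allowed CIDRs, while the public address does not. This can be checked with Python's ipaddress module (the Google IP is simply the one resolved by ping above):

```python
import ipaddress

# Egress prefixes from the knp spec above.
allowed = [ipaddress.ip_network(p) for p in ("10.128.0.0/14", "172.30.0.0/15")]

def egress_allowed(ip):
    """True if ip falls inside one of the allowed egress CIDRs."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in allowed)

print(egress_allowed("172.30.4.47"))      # service/demo2 ClusterIP -> True
print(egress_allowed("10.128.126.175"))   # demo2 pod IP -> True
print(egress_allowed("142.250.179.196"))  # www.google.com -> False
```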

Furthermore, kuryr-tempest tests, NP tests and conformance tests
passed for this build. Please refer to the attachment on 
https://bugzilla.redhat.com/show_bug.cgi?id=1927244#c6

Comment 3 rlobillo 2021-03-15 14:46:30 UTC
Failed on OSP16.1 (RHOS-16.1-RHEL-8-20201214.n.3) using OVN-Octavia:


$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc expose pod/demo --port 80 --target-port 8080
$ oc new-project test2
$ oc run --image kuryr/demo demo2
$ oc expose pod/demo2 --port 80 --target-port 8080

Loading the manifest below in the test project:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: np-bz1921878
spec:
  podSelector:
    matchLabels:
      run: demo
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - namespaceSelector: {}

The knp resource shows the following:
$ oc get knp/np-bz1921878 -o json | jq .spec
{
  "egressSgRules": [
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "egress",
        "ethertype": "IPv4",
        "port_range_max": 65535,
        "port_range_min": 1,
        "protocol": "tcp",
        "remote_ip_prefix": "10.128.0.0/14"
      }
    }
  ],
  "ingressSgRules": [
    {
      "namespace": "default",
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "port_range_max": 65535,
        "port_range_min": 1,
        "protocol": "tcp",
        "remote_ip_prefix": "10.128.76.0/23"
      }
    },
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.196.0.0/16"
      }
    }
  ],
  "podSelector": {
    "matchLabels": {
      "run": "demo"
    }
  },
  "policyTypes": [
    "Egress",
    "Ingress"
  ]
}

^ missing the svc network (172.30.0.0/15).
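The failing curl is exactly this missing prefix: the demo2 ClusterIP (172.30.120.252) is covered by no egress rule, while the demo2 pod IP (10.128.128.99) is. A small sketch of the check, using the egressSgRules shape from the knp output above:

```python
import ipaddress

def covered(ip, egress_sg_rules):
    """True if some egress rule's remote_ip_prefix contains ip."""
    addr = ipaddress.ip_address(ip)
    for rule in egress_sg_rules:
        prefix = rule["sgRule"].get("remote_ip_prefix")
        if prefix and addr in ipaddress.ip_network(prefix):
            return True
    return False

# On OVN-Octavia, only the pod network made it into the egress rules:
egress = [{"sgRule": {"direction": "egress",
                      "remote_ip_prefix": "10.128.0.0/14"}}]

print(covered("10.128.128.99", egress))   # pod-to-pod works  -> True
print(covered("172.30.120.252", egress))  # pod-to-svc fails  -> False
```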

As a consequence, pod on project test cannot reach the service on project test2:

$ oc get all -n test
NAME       READY   STATUS    RESTARTS   AGE
pod/demo   1/1     Running   0          53m
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.120.87   <none>        80/TCP    53m
$ oc get all -n test2 -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE                          NOMINATED NODE   READINESS GATES
pod/demo2   1/1     Running   0          52m   10.128.128.99   ostest-858gf-worker-0-w6psd   <none>           <none>
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo2   ClusterIP   172.30.120.252   <none>        80/TCP    52m   run=demo2
$ oc rsh -n test pod/demo
~ $ curl 10.128.128.99:8080
demo2: HELLO! I AM ALIVE!!!
~ $ curl 172.30.120.252
<NOT WORKING>

Comment 5 Jon Uriarte 2021-05-05 15:37:23 UTC
Verified in 4.8.0-0.nightly-2021-04-30-201824 on top of OSP 13.0.15 (2021-03-24.1) with the Amphora
provider, and on top of OSP 16.1.5 (RHOS-16.1-RHEL-8-20210323.n.0) with the OVN Octavia provider.

SG rules generated by the NP resource definition below allow traffic to other namespaces but not
to the outside:

$ cat np_bz1921878.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: np-bz1921878
spec:
  podSelector:
    matchLabels:
      run: demo
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - namespaceSelector: {}
    

OSP 13.0.15
-----------
Steps:

1. Create test and test2 projects, each with a kuryr/demo pod exposed by a service on port 80:

$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc expose pod/demo --port 80 --target-port 8080

$ oc -n test get all -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE                          NOMINATED NODE   READINESS GATES
pod/demo   1/1     Running   0          10m   10.128.127.176   ostest-spxs8-worker-0-t7llx   <none>           <none>

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo   ClusterIP   172.30.180.42   <none>        80/TCP    10m   run=demo

$ oc new-project test2
$ oc run --image kuryr/demo demo2
$ oc expose pod/demo2 --port 80 --target-port 8080

$ oc -n test2 get all -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE                          NOMINATED NODE   READINESS GATES
pod/demo2   1/1     Running   0          10m   10.128.128.154   ostest-spxs8-worker-0-t7llx   <none>           <none>

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo2   ClusterIP   172.30.190.90   <none>        80/TCP    10m   run=demo2


2. Apply np on demo pod in test project:

$ oc project test
$ oc apply -f np_bz1921878.yaml
networkpolicy.networking.k8s.io/np-bz1921878 created

# The knp resource is created and no egress rule to 0.0.0.0/0 is created:

$ oc get knp/np-bz1921878 -o json | jq .spec
{                             
  "policyTypes": [                                         
    "Egress",
    "Ingress"
  ],
  "podSelector": {
    "matchLabels": {   
      "run": "demo"
    }
  },
  "ingressSgRules": [
    {
      "sgRule": {
        "remote_ip_prefix": "10.128.126.0/23",
        "ethertype": "IPv4",
        "direction": "ingress",
        "description": "Kuryr-Kubernetes NetPolicy SG rule"
      },
      "namespace": "test"
    },
    {
      "sgRule": {
        "remote_ip_prefix": "172.30.0.0/15",
        "ethertype": "IPv4",
        "direction": "ingress",
        "description": "Kuryr-Kubernetes NetPolicy SG rule"
      }
    },
    {
      "sgRule": {
        "remote_ip_prefix": "10.196.0.0/16",
        "ethertype": "IPv4",
        "direction": "ingress",
        "description": "Kuryr-Kubernetes NetPolicy SG rule"
      }
    }
  ],
  "egressSgRules": [
    {
      "sgRule": {
        "remote_ip_prefix": "10.128.0.0/14",
        "ethertype": "IPv4",
        "direction": "egress",
        "description": "Kuryr-Kubernetes NetPolicy SG rule"
      }
    },
    {
      "sgRule": {
        "remote_ip_prefix": "172.30.0.0/15",
        "ethertype": "IPv4",
        "direction": "egress",
        "description": "Kuryr-Kubernetes NetPolicy SG rule"
      }
    }
  ]
}

# Connectivity tests (pods in the other namespace are reachable, outside access is not):

$ oc rsh -n test pod/demo
~ $ curl 172.30.190.90
demo2: HELLO! I AM ALIVE!!!
~ $ curl 10.128.128.154:8080
demo2: HELLO! I AM ALIVE!!!
~ $ curl www.google.com
^C
~ $ ping www.google.com
PING www.google.com (142.250.179.164) 56(84) bytes of data.
^C
--- www.google.com ping statistics ---
80 packets transmitted, 0 received, 100% packet loss, time 80916ms

Kubernetes NP tests passed as well.


OSP 16.1.5
----------
Steps:

1. Create test and test2 projects, each with a kuryr/demo pod exposed by a service on port 80:

$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc expose pod/demo --port 80 --target-port 8080

$ oc -n test get all -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE                          NOMINATED NODE   READINESS GATES
pod/demo   1/1     Running   0          52s   10.128.125.37   ostest-6jqjs-worker-0-lbvpx   <none>           <none>

NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo   ClusterIP   172.30.200.242   <none>        80/TCP    18s   run=demo

$ oc new-project test2
$ oc run --image kuryr/demo demo2
$ oc expose pod/demo2 --port 80 --target-port 8080

$ oc -n test2 get all -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE                          NOMINATED NODE   READINESS GATES
pod/demo2   1/1     Running   0          38s   10.128.126.174   ostest-6jqjs-worker-0-dbpxw   <none>           <none>

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/demo2   ClusterIP   172.30.206.161   <none>        80/TCP    35s   run=demo2


2. Apply np on demo pod in test project:

$ oc project test
$ oc apply -f np_bz1921878.yaml
networkpolicy.networking.k8s.io/np-bz1921878 created

# The knp resource is created and no egress rule to 0.0.0.0/0 is created:

$ oc get knp/np-bz1921878 -o json | jq .spec
{
  "egressSgRules": [
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "egress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.128.0.0/14"
      }
    },
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "egress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "172.30.0.0/15"
      }
    }
  ],
  "ingressSgRules": [
    {
      "namespace": "test",
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.128.124.0/23"
      }
    },
    {
      "sgRule": {
        "description": "Kuryr-Kubernetes NetPolicy SG rule",
        "direction": "ingress",
        "ethertype": "IPv4",
        "remote_ip_prefix": "10.196.0.0/16"
      }
    }
  ],
  "podSelector": {
    "matchLabels": {
      "run": "demo"
    }
  },
  "policyTypes": [
    "Egress",
    "Ingress"
  ]
}


# Connectivity tests (pods in the other namespace are reachable, outside access is not):

$ oc rsh -n test pod/demo
~ $ curl 172.30.206.161
demo2: HELLO! I AM ALIVE!!!
~ $ curl 10.128.126.174:8080
demo2: HELLO! I AM ALIVE!!!
~ $ curl www.google.com
^C
~ $ ping www.google.com
PING www.google.com (142.250.179.164) 56(84) bytes of data.
^C
--- www.google.com ping statistics ---
80 packets transmitted, 0 received, 100% packet loss, time 80916ms

Kubernetes NP tests passed as well.

Comment 8 errata-xmlrpc 2021-07-27 22:37:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

