Bug 1892270 - Removing network policy from namespace causes inability to access pods through loadbalancer.
Summary: Removing network policy from namespace causes inability to access pods through loadbalancer.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: rdobosz
QA Contact: GenadiC
URL:
Whiteboard:
Depends On: 1886749
Blocks:
 
Reported: 2020-10-28 10:59 UTC by OpenShift BugZilla Robot
Modified: 2020-12-14 13:50 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-14 13:50:25 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift kuryr-kubernetes pull 385 0 None closed [release-4.6] Bug 1892270: Removing network policy from namespace causes inability to access pods through loadbalancer. 2021-01-12 18:00:34 UTC
Red Hat Product Errata RHSA-2020:5259 0 None None None 2020-12-14 13:50:39 UTC

Description OpenShift BugZilla Robot 2020-10-28 10:59:05 UTC
+++ This bug was initially created as a clone of Bug #1886749 +++

Description of problem:

Creating a NetworkPolicy with no selectors, which denies all traffic in the specified namespace, and then removing it leaves the loadbalancer listener in an offline state.

Version-Release number of selected component (if applicable):

This issue only applies to Octavia with the Amphora driver.

How reproducible:

Steps to Reproduce:

1. kubectl create namespace foo
2. kubectl run --image kuryr/demo -n foo server
3. kubectl expose pod/server -n foo --port 80 --target-port 8080
4. kubectl run --image kuryr/demo -n foo client 
5. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: server: HELLO! I AM ALIVE!!!)
6. cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
7. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: curl: (7) Failed to connect to <server-pod-ip> port 80: Connection refused)
8. kubectl delete -n foo networkpolicies deny-all
9. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: server: HELLO! I AM ALIVE!!!, but it does not)
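The deny-all manifest in step 6 can also be built programmatically. Below is a minimal sketch (plain Python, stdlib only; the `build_deny_all` helper is hypothetical, not part of the reproducer) showing why the policy blocks everything: an empty `podSelector` matches every pod in the namespace, and listing only `Ingress` in `policyTypes` with no ingress rules denies all inbound traffic.

```python
import json


def build_deny_all(namespace):
    """Build a NetworkPolicy manifest equivalent to policy_foo_deny_all.yaml.

    An empty podSelector ({}) selects every pod in the namespace, and
    declaring only "Ingress" in policyTypes while defining no ingress
    rules denies all inbound traffic to those pods.
    """
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},            # empty selector: all pods in the namespace
            "policyTypes": ["Ingress"],   # no ingress rules given -> deny all ingress
        },
    }


manifest = build_deny_all("foo")
print(json.dumps(manifest, indent=2))
```

Piping this JSON to `kubectl apply -f -` is equivalent to step 6, since kubectl accepts JSON manifests as well as YAML.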


Actual results:

kubectl exec -ti -n foo client -- curl <server-pod-ip>
curl: (7) Failed to connect to <server-pod-ip> port 80: Connection refused



Expected results:

kubectl exec -ti -n foo client -- curl <server-pod-ip>
server: HELLO! I AM ALIVE!!!


Additional info:

Comment 3 rlobillo 2020-12-09 10:34:41 UTC
Verified on OCP4.6.0-0.nightly-2020-12-08-021151 over OSP13 with Amphoras (2020-11-13.1)

$ oc new-project foo
Now using project "foo" on server "https://api.ostest.shiftstack.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

$ oc run --image kuryr/demo -n foo server
pod/server created

$ oc run --image kuryr/demo -n foo client
pod/client created

$ oc expose pod/server -n foo --port 80 --target-port 8080
service/server exposed

$ oc get all
NAME         READY   STATUS              RESTARTS   AGE
pod/client   0/1     ContainerCreating   0          13s
pod/server   0/1     ContainerCreating   0          18s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/server   ClusterIP   172.30.20.239   <none>        80/TCP    8s

$ oc exec -ti -n foo client -- curl 172.30.20.239
server: HELLO! I AM ALIVE!!!

$ cat policy_foo_deny_all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress

$ oc apply -f policy_foo_deny_all.yaml
networkpolicy.networking.k8s.io/deny-all created

$ oc exec -ti -n foo client -- curl 172.30.20.239                                                                                                           
curl: (7) Failed to connect to 172.30.20.239 port 80: Connection refused
command terminated with exit code 7

$ oc delete -n foo networkpolicies deny-all
networkpolicy.networking.k8s.io "deny-all" deleted

$ oc exec -ti -n foo client -- curl 172.30.20.239
server: HELLO! I AM ALIVE!!!

Comment 5 errata-xmlrpc 2020-12-14 13:50:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6.8 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5259

