Bug 1892270

Summary: Removing a network policy from a namespace causes inability to access pods through the load balancer.
Product: OpenShift Container Platform
Reporter: OpenShift BugZilla Robot <openshift-bugzilla-robot>
Component: Networking
Networking sub component: kuryr
Assignee: rdobosz
QA Contact: GenadiC <gcheresh>
Status: CLOSED ERRATA
Severity: medium
Priority: high
CC: ltomasbo, rlobillo
Version: 4.6
Target Release: 4.6.z
Hardware: Unspecified
OS: Unspecified
Last Closed: 2020-12-14 13:50:25 UTC
Bug Depends On: 1886749

Description OpenShift BugZilla Robot 2020-10-28 10:59:05 UTC
+++ This bug was initially created as a clone of Bug #1886749 +++

Description of problem:

Creating a NetworkPolicy that has no selectors (which denies all ingress traffic in the specified namespace) and removing it afterwards leaves the load balancer listener in an offline state.
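
The offline listener can also be observed from the OpenStack side. This is only a rough check, assuming the OpenStack CLI is available and that Kuryr names the Octavia load balancer after the Kubernetes service (e.g. "foo/server"):

$ openstack loadbalancer list | grep "foo/server"
$ openstack loadbalancer status show <loadbalancer-id>
  # the listener's operating_status stays OFFLINE after the policy is deleted
$ openstack loadbalancer listener show <listener-id> -c provisioning_status -c operating_status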

Version-Release number of selected component (if applicable):

This issue only applies to Octavia with the Amphora provider.
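
Whether a given load balancer was created with the Amphora provider can be double-checked with the OpenStack CLI, assuming it is available (the provider is part of the load balancer details):

$ openstack loadbalancer show <loadbalancer-id> -c provider -c provisioning_status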

How reproducible:

Steps to Reproduce:

1. kubectl create namespace foo
2. kubectl run --image kuryr/demo -n foo server
3. kubectl expose pod/server -n foo --port 80 --target-port 8080
4. kubectl run --image kuryr/demo -n foo client 
5. kubectl exec -ti -n foo client -- curl <server-service-ip>
(should display: server: HELLO! I AM ALIVE!!!)
6. cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
7. kubectl exec -ti -n foo client -- curl <server-service-ip>
(should display: curl: (7) Failed to connect to <server-service-ip> port 80: Connection refused)
8. kubectl delete -n foo networkpolicies deny-all
9. kubectl exec -ti -n foo client -- curl <server-service-ip>
(should display: server: HELLO! I AM ALIVE!!!, but it does not)
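
For convenience, the steps above can be run as a single script. This is only a sketch using the same commands; the sleep values are arbitrary and just give the pods and the Kuryr-managed load balancer time to become ready:

#!/bin/bash
set -x

kubectl create namespace foo
kubectl run --image kuryr/demo -n foo server
kubectl run --image kuryr/demo -n foo client
kubectl expose pod/server -n foo --port 80 --target-port 8080
sleep 120  # wait for the pods and the Octavia load balancer to be provisioned

SVC_IP=$(kubectl get svc server -n foo -o jsonpath='{.spec.clusterIP}')

# Baseline: traffic through the service works.
kubectl exec -ti -n foo client -- curl "$SVC_IP"

# Deny all ingress in the namespace.
cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
sleep 30
kubectl exec -ti -n foo client -- curl "$SVC_IP"  # expected: Connection refused

# Remove the policy; on affected versions the curl below still fails.
kubectl delete -n foo networkpolicies deny-all
sleep 30
kubectl exec -ti -n foo client -- curl "$SVC_IP"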


Actual results:

kubectl exec -ti -n foo client -- curl <server-service-ip>
curl: (7) Failed to connect to <server-service-ip> port 80: Connection refused



Expected results:

kubectl exec -ti -n foo client -- curl <server-service-ip>
server: HELLO! I AM ALIVE!!!


Additional info:

Comment 3 rlobillo 2020-12-09 10:34:41 UTC
Verified on OCP 4.6.0-0.nightly-2020-12-08-021151 over OSP13 with Amphora (2020-11-13.1).

$ oc new-project foo
Now using project "foo" on server "https://api.ostest.shiftstack.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

$ oc run --image kuryr/demo -n foo server
pod/server created

$ oc run --image kuryr/demo -n foo client
pod/client created

$ oc expose pod/server -n foo --port 80 --target-port 8080
service/server exposed

$ oc get all
NAME         READY   STATUS              RESTARTS   AGE
pod/client   0/1     ContainerCreating   0          13s
pod/server   0/1     ContainerCreating   0          18s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/server   ClusterIP   172.30.20.239   <none>        80/TCP    8s

$ oc exec -ti -n foo client -- curl 172.30.20.239
server: HELLO! I AM ALIVE!!!

$ cat policy_foo_deny_all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress

$ oc apply -f policy_foo_deny_all.yaml
networkpolicy.networking.k8s.io/deny-all created

$ oc exec -ti -n foo client -- curl 172.30.20.239                                                                                                           
curl: (7) Failed to connect to 172.30.20.239 port 80: Connection refused
command terminated with exit code 7

$ oc delete -n foo networkpolicies deny-all
networkpolicy.networking.k8s.io "deny-all" deleted

$ oc exec -ti -n foo client -- curl 172.30.20.239
server: HELLO! I AM ALIVE!!!
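
Not captured in the transcript above, but the Octavia side can be checked as well (assuming access to the OpenStack CLI): after the policy is deleted, the listener of the "foo/server" load balancer should report operating_status ONLINE rather than staying OFFLINE:

$ openstack loadbalancer list | grep "foo/server"
$ openstack loadbalancer status show <loadbalancer-id>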

Comment 5 errata-xmlrpc 2020-12-14 13:50:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6.8 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5259