Bug 1886749

Summary: Removing a network policy from a namespace causes inability to access pods through the load balancer.
Product: OpenShift Container Platform
Component: Networking
Networking sub component: kuryr
Reporter: rdobosz
Assignee: rdobosz
QA Contact: GenadiC <gcheresh>
Status: CLOSED ERRATA
Severity: medium
Priority: high
CC: ltomasbo, rlobillo
Version: 4.6
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Last Closed: 2021-02-24 15:24:26 UTC
Bug Blocks: 1892270

Description rdobosz 2020-10-09 10:23:30 UTC
Description of problem:

Creating a NetworkPolicy with an empty pod selector, which denies all ingress traffic in the specified namespace, and then removing it leaves the load balancer listener in an offline state.
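
The listener state can be confirmed on the Octavia side. A minimal check, assuming the OpenStack CLI with the Octavia plugin is available and using placeholder IDs:

  openstack loadbalancer listener list                 # locate the listener Kuryr created for the service
  openstack loadbalancer listener show <listener-id>   # after the policy is deleted, operating_status stays OFFLINE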

Version-Release number of selected component (if applicable):

This issue only applies to Octavia with the Amphora provider.
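
Which provider a given load balancer uses can be checked with, for example (a sketch, assuming a recent enough python-octaviaclient; IDs are placeholders):

  openstack loadbalancer provider list                  # providers enabled in this Octavia deployment
  openstack loadbalancer show <lb-id> -c provider       # provider backing a specific load balancer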

How reproducible:

Steps to Reproduce:

1. kubectl create namespace foo
2. kubectl run --image kuryr/demo -n foo server
3. kubectl expose pod/server -n foo --port 80 --target-port 8080
4. kubectl run --image kuryr/demo -n foo client 
5. kubectl exec -ti -n foo client -- curl <service-cluster-ip>
(should display: server: HELLO! I AM ALIVE!!!)
6. cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
7. kubectl exec -ti -n foo client -- curl <service-cluster-ip>
(should display: curl: (7) Failed to connect to <service-cluster-ip> port 80: Connection refused)
8. kubectl delete -n foo networkpolicies deny-all
9. kubectl exec -ti -n foo client -- curl <service-cluster-ip>
(should display: server: HELLO! I AM ALIVE!!!, but it does not; see the listener check below)
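
To confirm that the failure is on the Octavia side, the load balancer backing the service can be inspected after step 9. A sketch, assuming access to the OpenStack CLI; with Kuryr the service ClusterIP is the load balancer VIP, and the IDs below are placeholders:

  kubectl get svc server -n foo -o jsonpath='{.spec.clusterIP}'   # ClusterIP of the service, i.e. the VIP
  openstack loadbalancer list | grep <cluster-ip>                 # identify the load balancer Kuryr created for foo/server
  openstack loadbalancer status show <lb-id>                      # the listener should be ONLINE again, but remains OFFLINE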


Actual results:

kubectl exec -ti -n foo client -- curl <service-cluster-ip>
curl: (7) Failed to connect to <service-cluster-ip> port 80: Connection refused



Expected results:

kubectl exec -ti -n foo client -- curl <service-cluster-ip>
server: HELLO! I AM ALIVE!!!


Additional info:

Comment 3 rlobillo 2020-11-10 10:57:13 UTC
Verified on OCP 4.7.0-0.nightly-2020-11-10-032055 over OSP13 with Amphora (2020-10-06.2).

Reproduction steps behave as expected:

(shiftstack) [stack@undercloud-0 ~]$ oc run --image kuryr/demo -n foo server
pod/server created

(shiftstack) [stack@undercloud-0 ~]$ oc run --image kuryr/demo -n foo client
pod/client created

(shiftstack) [stack@undercloud-0 ~]$ oc expose pod/server -n foo --port 80 --target-port 8080
service/server exposed

(shiftstack) [stack@undercloud-0 ~]$ oc get all
NAME         READY   STATUS    RESTARTS   AGE
pod/client   1/1     Running   0          7m55s
pod/server   1/1     Running   0          8m

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/server   ClusterIP   172.30.30.173   <none>        80/TCP    7m40s

(shiftstack) [stack@undercloud-0 ~]$ oc exec -ti -n foo client -- curl 172.30.30.173
server: HELLO! I AM ALIVE!!!

(shiftstack) [stack@undercloud-0 ~]$ cat policy_foo_deny_all.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress

(shiftstack) [stack@undercloud-0 ~]$ oc apply -f policy_foo_deny_all.yaml
networkpolicy.networking.k8s.io/deny-all created

(shiftstack) [stack@undercloud-0 ~]$ oc exec -ti -n foo client -- curl 172.30.30.173
curl: (7) Failed to connect to 172.30.30.173 port 80: Connection refused
command terminated with exit code 7

(shiftstack) [stack@undercloud-0 ~]$ oc delete -n foo networkpolicies deny-all
networkpolicy.networking.k8s.io "deny-all" deleted

(shiftstack) [stack@undercloud-0 ~]$ oc exec -ti -n foo client -- curl 172.30.30.173
server: HELLO! I AM ALIVE!!!
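
As an optional extra check on the Octavia side, the listener backing service/server can also be queried; a sketch with a placeholder ID:

  openstack loadbalancer listener show <listener-id> -c operating_status -c provisioning_status

After the policy is removed, the statuses are expected to be back to ONLINE and ACTIVE, respectively.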

Comment 6 errata-xmlrpc 2021-02-24 15:24:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633