+++ This bug was initially created as a clone of Bug #1886749 +++

Description of problem:
Creating a NetworkPolicy that has no selectors (denying all ingress traffic in the specified namespace) and removing it afterwards leaves the load balancer listener in an offline state.

Version-Release number of selected component (if applicable):
This issue only applies to Octavia with Amphora.

How reproducible:

Steps to Reproduce:
1. kubectl create namespace foo
2. kubectl run --image kuryr/demo -n foo server
3. kubectl expose pod/server -n foo --port 80 --target-port 8080
4. kubectl run --image kuryr/demo -n foo client
5. kubectl exec -ti -n foo client -- curl <service-cluster-ip>
   (should display: server: HELLO! I AM ALIVE!!!)
6. cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
7. kubectl exec -ti -n foo client -- curl <service-cluster-ip>
   (should display: curl: (7) Failed to connect to <service-cluster-ip> port 80: Connection refused)
8. kubectl delete -n foo networkpolicies deny-all
9. kubectl exec -ti -n foo client -- curl <service-cluster-ip>
   (should display: server: HELLO! I AM ALIVE!!!, but it does not!)

Actual results:
kubectl exec -ti -n foo client -- curl <service-cluster-ip>
curl: (7) Failed to connect to <service-cluster-ip> port 80: Connection refused

Expected results:
kubectl exec -ti -n foo client -- curl <service-cluster-ip>
server: HELLO! I AM ALIVE!!!

Additional info:
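Not part of the original report, but a minimal sketch of how the listener state can be confirmed on the OpenStack side, assuming admin credentials for the underlying cloud and the python-octaviaclient CLI plugin; <kuryr-lb-id> and <listener-id> are placeholders to be read from the list output:

$ openstack loadbalancer list
$ openstack loadbalancer listener list --loadbalancer <kuryr-lb-id>
$ openstack loadbalancer listener show <listener-id> -c provisioning_status -c operating_status

On an affected cluster the listener's operating_status would be expected to remain OFFLINE even after the NetworkPolicy has been deleted.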
Verified on OCP 4.6.0-0.nightly-2020-12-08-021151 over OSP13 with Amphoras (2020-11-13.1)

$ oc new-project foo
Now using project "foo" on server "https://api.ostest.shiftstack.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

$ oc run --image kuryr/demo -n foo server
pod/server created

$ oc run --image kuryr/demo -n foo client
pod/client created

$ oc expose pod/server -n foo --port 80 --target-port 8080
service/server exposed

$ oc get all
NAME         READY   STATUS              RESTARTS   AGE
pod/client   0/1     ContainerCreating   0          13s
pod/server   0/1     ContainerCreating   0          18s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/server   ClusterIP   172.30.20.239   <none>        80/TCP    8s

$ oc exec -ti -n foo client -- curl 172.30.20.239
server: HELLO! I AM ALIVE!!!

$ cat policy_foo_deny_all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
  - Ingress

$ oc apply -f policy_foo_deny_all.yaml
networkpolicy.networking.k8s.io/deny-all created

$ oc exec -ti -n foo client -- curl 172.30.20.239
curl: (7) Failed to connect to 172.30.20.239 port 80: Connection refused
command terminated with exit code 7

$ oc delete -n foo networkpolicies deny-all
networkpolicy.networking.k8s.io "deny-all" deleted

$ oc exec -ti -n foo client -- curl 172.30.20.239
server: HELLO! I AM ALIVE!!!
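On a cluster still running an unfixed version, one place to dig further is the security group attached to the load balancer's VIP port, on the assumption that Kuryr translates the NetworkPolicy into Neutron security group rules there; this is a sketch only, with <lb-id>, <vip-port-id> and <sg-id> as placeholders taken from the preceding commands' output:

$ openstack loadbalancer show <lb-id> -c vip_port_id
$ openstack port show <vip-port-id> -c security_group_ids
$ openstack security group rule list <sg-id>

If the rule allowing ingress to the exposed port is still missing after the policy is removed, that would match the connection-refused behaviour described above.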
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6.8 security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5259