Bug 1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: rdobosz
QA Contact: GenadiC
URL:
Whiteboard:
Depends On:
Blocks: 1892270
 
Reported: 2020-10-09 10:23 UTC by rdobosz
Modified: 2021-02-24 15:24 UTC
CC List: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:24:26 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github openshift kuryr-kubernetes pull 384 0 None closed Bug 1886749: Removing network policy from namespace causes inability to access pods through loadbalancer. 2021-01-13 07:44:14 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:24:54 UTC

Description rdobosz 2020-10-09 10:23:30 UTC
Description of problem:

Creating a NetworkPolicy with no selectors, which denies all traffic in the specified namespace, and then removing it leaves the load balancer listener in an offline state.

Version-Release number of selected component (if applicable):

This issue only applies to Octavia with the Amphora driver.

How reproducible:

Steps to Reproduce:

1. kubectl create namespace foo
2. kubectl run --image kuryr/demo -n foo server
3. kubectl expose pod/server -n foo --port 80 --target-port 8080
4. kubectl run --image kuryr/demo -n foo client 
5. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: server: HELLO! I AM ALIVE!!!)
6. cat > policy_foo_deny_all.yaml << NIL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
NIL
kubectl apply -f policy_foo_deny_all.yaml
7. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: curl: (7) Failed to connect to <server-pod-ip> port 80: Connection refused)
8. kubectl delete -n foo networkpolicies deny-all
9. kubectl exec -ti -n foo client -- curl <server-pod-ip>
(should display: server: HELLO! I AM ALIVE!!!, but it does not!)
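The empty `podSelector: {}` in step 6 is what makes this a deny-all policy: an empty selector matches every pod in the namespace, and because `Ingress` is listed in `policyTypes` with no `ingress` rules, all inbound traffic to the selected pods is dropped. A minimal sketch of that matching semantics (illustrative only; the pod names and labels mirror the steps above, and this is not Kuryr's actual implementation):

```python
# Illustrative sketch: how a matchLabels-style podSelector chooses pods.
# An empty selector ({}) places no constraints, so it matches every pod,
# which is why the deny-all policy applies to both pods in namespace foo.

def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """Return True if every key/value in the selector is present on the
    pod's labels; an empty selector trivially matches all pods."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# Pods created in steps 2 and 4 ("kubectl run" labels pods with run=<name>).
pods = {
    "server": {"run": "server"},
    "client": {"run": "client"},
}

deny_all_selector = {}  # podSelector: {}
selected = [name for name, labels in pods.items()
            if selector_matches(deny_all_selector, labels)]
print(selected)  # both pods are selected by the deny-all policy
```

Once the policy is deleted in step 8, no policy selects the pods any more, so ingress should again be allowed; the bug is that the Octavia listener is not brought back online at that point.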


Actual results:

kubectl exec -ti -n foo client -- curl <server-pod-ip>
curl: (7) Failed to connect to <server-pod-ip> port 80: Connection refused



Expected results:

kubectl exec -ti -n foo client -- curl <server-pod-ip>
server: HELLO! I AM ALIVE!!!


Additional info:

Comment 3 rlobillo 2020-11-10 10:57:13 UTC
Verified on OCP4.7.0-0.nightly-2020-11-10-032055 over OSP13+Amphoras (2020-10-06.2)

Reproduction steps behave as expected:

(shiftstack) [stack@undercloud-0 ~]$ oc run --image kuryr/demo -n foo server
pod/server created

(shiftstack) [stack@undercloud-0 ~]$ oc run --image kuryr/demo -n foo client
pod/client created

(shiftstack) [stack@undercloud-0 ~]$ oc expose pod/server -n foo --port 80 --target-port 8080
service/server exposed

(shiftstack) [stack@undercloud-0 ~]$ oc get all
NAME         READY   STATUS    RESTARTS   AGE
pod/client   1/1     Running   0          7m55s
pod/server   1/1     Running   0          8m

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/server   ClusterIP   172.30.30.173   <none>        80/TCP    7m40s

(shiftstack) [stack@undercloud-0 ~]$ oc exec -ti -n foo client -- curl 172.30.30.173
server: HELLO! I AM ALIVE!!!

(shiftstack) [stack@undercloud-0 ~]$ cat policy_foo_deny_all.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: foo
spec:
  podSelector: {}
  policyTypes:
    - Ingress

(shiftstack) [stack@undercloud-0 ~]$ oc apply -f policy_foo_deny_all.yaml
networkpolicy.networking.k8s.io/deny-all created

(shiftstack) [stack@undercloud-0 ~]$ oc exec -ti -n foo client -- curl 172.30.30.173
curl: (7) Failed to connect to 172.30.30.173 port 80: Connection refused
command terminated with exit code 7

(shiftstack) [stack@undercloud-0 ~]$ oc delete -n foo networkpolicies deny-all
networkpolicy.networking.k8s.io "deny-all" deleted

(shiftstack) [stack@undercloud-0 ~]$ oc exec -ti -n foo client -- curl 172.30.30.173
server: HELLO! I AM ALIVE!!!

Comment 6 errata-xmlrpc 2021-02-24 15:24:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

