Description of problem:
If a NetworkPolicy is applied that contains an ingress ipBlock section with a CIDR range, each node goes into NotReady status. Core dumps are also generated repeatedly in /var/lib/origin until disk space is exhausted.

Version-Release number of selected component (if applicable):
OCP 3.9.25
OCP 3.9.27

How reproducible:
Install OpenShift with the networkpolicy plugin. Create a test namespace. Create and apply a NetworkPolicy such as:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test
  namespace: test
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0

Steps to Reproduce:
1. Install OpenShift with the openshift-ovs-networkpolicy SDN plugin.
2. Create a test namespace.
3. Create a syntactically valid NetworkPolicy containing an ingress ipBlock section, such as the example above.
4. oc create -f <test_network_policy_name>

Actual results:
All nodes in the cluster go NotReady. Core dumps are generated repeatedly.

Expected results:
The network policy is applied without affecting node status in any way.

Additional info:
While we should not crash, you should be aware that OCP 3.10 does not support NetworkPolicy ipBlocks anyway, so the *expected* result of:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0

is that it will be interpreted the same as:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test
spec:
  podSelector: {}
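The equivalence described above can be sketched as a small illustrative model (hypothetical helper name, not actual openshift-sdn code): ipBlock peers are ignored, and a rule whose "from" list is empty after stripping them matches nothing, so an ipBlock-only policy acts as a deny-all.

```python
# Illustrative sketch (NOT openshift-sdn code) of the 3.10 interpretation
# described above: ipBlock peers in ingress rules are silently ignored.
# A rule left with no peers matches nothing, so an ipBlock-only policy
# ends up with no ingress rules at all, i.e. a default deny.

def effective_ingress(policy):
    """Hypothetical helper: the ingress rules as 3.10 would enforce them."""
    rules = []
    for rule in policy.get("spec", {}).get("ingress", []):
        peers = [p for p in rule.get("from", []) if "ipBlock" not in p]
        if peers:
            rules.append({"from": peers})
    return rules

ip_block_only = {
    "kind": "NetworkPolicy",
    "apiVersion": "networking.k8s.io/v1",
    "metadata": {"name": "test"},
    "spec": {
        "podSelector": {},
        "ingress": [{"from": [{"ipBlock": {"cidr": "0.0.0.0/0"}}]}],
    },
}

print(effective_ingress(ip_block_only))  # -> [] (no rules: deny all ingress)
```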
new QE tests:

1. A policy that contains both ingress and egress rules behaves exactly the same as it would if you removed the "egress" section entirely. Eg, this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-ignore-egress-1
spec:
  podSelector: {}
  ingress:
  - from:
    ports:
    - port: 80
  egress:
  - to:
    ports:
    - port: 100

allows incoming connections to port 80 on any pod in the namespace, and does not affect egress traffic in any way.

2. A policy with 'policyTypes: ["Egress"]' is ignored. Eg, this policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-default-deny
spec:
  policyTypes: ["Egress"]
  podSelector: {}

would have *no effect* on a namespace, and in particular it does not result in a default deny of ingress.

3. A policy with an ingress rule containing an "ipBlock" element behaves as it would if that element were removed. So eg:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-podSelector-and-ipBlock
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    - podSelector:
        matchLabels:
          type: red

behaves the same as:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-podSelector-only
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          type: red

and a policy with *only* an ipBlock (as in the original example here) becomes a "deny all" policy.
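Cases 1 and 2 above can be modeled with a short illustrative sketch (hypothetical helper names, not openshift-sdn code): the egress section is dropped outright, and a policy whose policyTypes is exactly ["Egress"] is skipped entirely.

```python
# Illustrative model (NOT openshift-sdn code) of QE cases 1 and 2 above:
# OCP 3.10 drops "egress" rules and entirely skips policies that declare
# only policyTypes: ["Egress"], so neither affects traffic in any way.

def is_ignored(policy):
    """Case 2: an Egress-only policy has no effect whatsoever."""
    return policy.get("spec", {}).get("policyTypes") == ["Egress"]

def drop_egress(policy):
    """Case 1: the egress section is ignored; only ingress rules count."""
    spec = {k: v for k, v in policy.get("spec", {}).items() if k != "egress"}
    return {**policy, "spec": spec}

egress_default_deny = {
    "kind": "NetworkPolicy",
    "metadata": {"name": "egress-default-deny"},
    "spec": {"policyTypes": ["Egress"], "podSelector": {}},
}

mixed = {
    "kind": "NetworkPolicy",
    "metadata": {"name": "test-ignore-egress-1"},
    "spec": {
        "podSelector": {},
        "ingress": [{"ports": [{"port": 80}]}],
        "egress": [{"ports": [{"port": 100}]}],
    },
}

print(is_ignored(egress_default_deny))  # True: no default deny of ingress
print(drop_egress(mixed)["spec"])       # "egress" key is gone
```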
https://github.com/openshift/origin/pull/19869
Verified in v3.10.0-0.58.0, and the issue has been fixed. Tested all the cases mentioned in comment 4 and comment 5; the test results are as expected.

OS: Red Hat Enterprise Linux Atomic Host release 7.5
Kernel: Linux qe-np-master-etcd-1 3.10.0-862.2.3.el7.x86_64 #1 SMP Mon Apr 30 12:37:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1816