Bug 1813846 - Pod selector in network policy not working for newly created pods
Summary: Pod selector in network policy not working for newly created pods
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.5.0
Assignee: Alexander Constantinescu
QA Contact: huirwang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-16 09:57 UTC by Federico Paolinelli
Modified: 2020-07-13 17:20 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:20:18 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github openshift sdn pull 122 0 None closed Bug 1813846: handle default-deny rule properely 2020-09-26 11:33:57 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:20:41 UTC

Description Federico Paolinelli 2020-03-16 09:57:05 UTC
Description of problem:

When a network policy with a pod selector is created, the policy is not applied to pods that match the selector but are created after the policy.


Version-Release number of selected component (if applicable):
Client Version: v4.2.0
Server Version: 4.5.0-0.nightly-2020-03-16-004817
Kubernetes Version: v1.17.1


How reproducible:
Always

Steps to Reproduce:
1. Have three files:
policy.yaml:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: policytest
spec:
  podSelector:
    matchLabels:
      app: policytest
  policyTypes:
  - Ingress
  - Egress

server.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: tcpserver
  namespace: policytest
  labels:
    app: policytest
spec:
  containers:
    - name: server
      image: fedora:31
      command: ["/bin/sh", "-c"]
      args:
        ["dnf install -y nc && sleep infinity"]
      ports:
        - containerPort: 30100
          protocol: TCP

client.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: tcpclient
  namespace: policytest
  labels:
    app: policytest
spec:
  containers:
    - name: client
      image: fedora:31
      command: ["/bin/sh", "-c"]
      args:
        ["dnf install -y nc && sleep infinity"]



2. Create the namespace and apply them:

oc create ns policytest
oc create -f policy.yaml
oc create -f server.yaml
oc create -f client.yaml
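After applying the manifests, it can help to confirm that the policy object was admitted and that its selector matches the pods. A minimal sketch (this only shows the API objects, not whether the policy is actually enforced on the node):

```shell
# List the policies in the namespace and inspect the pod selector
# of the one created from policy.yaml.
oc get networkpolicy -n policytest
oc describe networkpolicy default-deny-ingress -n policytest

# Check that the pods carry the app=policytest label the selector matches.
oc get pods -n policytest --show-labels
```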


3. Pick the IP of the server with:

oc get pods -n policytest -o wide

Bash into the pods:

oc exec -it -n policytest tcpserver bash
oc exec -it -n policytest tcpclient bash

On the server side:

nc -k -l -p 30100

On the client side, pass the server pod's address taken from oc get pod -o wide:
[root@tcpclient /]# echo aaa | nc 10.129.2.16 30100

Actual results:


The message goes through:
[root@tcpserver /]# nc -k -l -p 30100
aaa


Expected results:

The message should not go through; the policy should block it.

Additional info:

If I create the pod first and then create the policy, it works and the message does not get through:

oc create -f server.yaml
oc create -f policy.yaml
oc create -f client.yaml
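With this ordering, the client-side check can be given an explicit timeout so the blocked case fails fast instead of hanging. A sketch of the test from inside the tcpclient pod (the 5-second timeout is an arbitrary choice; the IP is the server pod address from the description):

```shell
# -w 5 makes nc give up after 5 seconds, so a connection blocked by the
# policy fails quickly with a timeout rather than hanging indefinitely.
echo aaa | nc -w 5 10.129.2.16 30100
```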

Comment 2 Jason Boxman 2020-03-17 14:03:25 UTC
Hi,

Is this an issue in earlier versions (4.1 - 4.3) of OCP? If so, I can add a note to the docs.

Thanks.

Comment 3 huirwang 2020-03-18 05:27:51 UTC
Yes, it can be reproduced in 4.1/4.2/4.3 too.

Comment 4 Jason Boxman 2020-03-18 05:37:56 UTC
Is there any workaround for this?

Thanks!

Comment 5 huirwang 2020-03-18 05:39:21 UTC
Delete the network policy and create it again.
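The workaround can be sketched roughly as follows, assuming the policy file and name from the description; recreating the policy after the pods exist makes it apply to them:

```shell
# Delete and recreate the policy so it picks up the already-running pods.
oc delete networkpolicy default-deny-ingress -n policytest
oc create -f policy.yaml -n policytest
```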

Comment 8 huirwang 2020-03-20 09:33:18 UTC
Verified in version 4.5.0-0.nightly-2020-03-20-044324.

Followed the steps in the description. The network policy works.
oc create ns policytest
oc create -f policy.yaml
oc create -f server.yaml
oc create -f client.yaml


oc get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
tcpclient   1/1     Running   0          14s   10.130.2.17   ip-10-0-172-24.us-east-2.compute.internal    <none>           <none>
tcpserver   1/1     Running   0          24s   10.129.2.9    ip-10-0-138-102.us-east-2.compute.internal   <none>           <none>
huiran-mac:script hrwang$ oc rsh tcpserver 
sh-5.0# 
sh-5.0# 
sh-5.0# nc -k -l -p 30011


oc rsh tcpclient
sh-5.0# echo aaa | nc 10.129.2.9 30011
Ncat: TIMEOUT.

Comment 9 Jason Boxman 2020-03-26 02:05:49 UTC
I created a PR[0] to mention this in the docs, for customers that may not immediately upgrade to the latest version with a fix.

[0] https://github.com/openshift/openshift-docs/pull/20691

Comment 11 errata-xmlrpc 2020-07-13 17:20:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

