Bug 1828699 - NPs on svc not enforced when exposed port and target are different
Summary: NPs on svc not enforced when exposed port and target are different
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.3.z
Assignee: Luis Tomas Bolivar
QA Contact: GenadiC
URL:
Whiteboard:
Depends On: 1828388
Blocks:
 
Reported: 2020-04-28 07:41 UTC by Luis Tomas Bolivar
Modified: 2020-05-27 17:00 UTC
CC List: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1828388
Environment:
Last Closed: 2020-05-27 17:00:45 UTC
Target Upstream Version:
Embargoed:


Links
Github openshift/kuryr-kubernetes pull 227 (closed): Bug 1828699: Ensure NP are enforced on SVC with different port and target port (last updated 2020-05-25 15:50:15 UTC)
Red Hat Product Errata RHBA-2020:2184 (last updated 2020-05-27 17:00:58 UTC)

Description Luis Tomas Bolivar 2020-04-28 07:41:46 UTC
+++ This bug was initially created as a clone of Bug #1828388 +++

+++ This bug was initially created as a clone of Bug #1828387 +++

NP rules are not properly updated on LoadBalancer security groups when the exposed port and the target port do not match

Comment 3 rlobillo 2020-05-18 15:43:35 UTC
Verified on: OCP 4.3.0-0.nightly-2020-05-18-043018 && OSP 13.0.11 puddle 2020-04-01.3 

After creating a service that exposes a different port (80) than the target port of the pod behind it (8080), and then applying a network policy restricting ingress
traffic to the pod, the security group rules are now applied on the load balancer and the service is only reachable from the pods allowed by the policy.

Given below service:

$ oc describe svc demo-1-c4dxk
Name:              demo-1-c4dxk
Namespace:         test
Labels:            deployment=demo-1
                   deploymentconfig=demo
                   run=demo
Annotations:       openstack.org/kuryr-lbaas-spec:
                     {"versioned_object.data": {"ip": "172.30.168.161", "lb_ip": null, "ports": [{"versioned_object.data": {"name": null, "port": 80, "protocol...
Selector:          deployment=demo-1,deploymentconfig=demo,run=demo
Type:              ClusterIP
IP:                172.30.168.161
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         10.128.106.3:8080
Session Affinity:  None
Events:            <none>
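
For reference, a minimal Service manifest reproducing this port/targetPort mismatch might look like the sketch below. The name, namespace and selector labels are taken from the describe output above; the exact manifest used in the test is an assumption.

# Hypothetical sketch of the Service under test: the exposed port (80)
# differs from the pod's target port (8080), which is the condition the
# network policy enforcement previously missed.
kind: Service
apiVersion: v1
metadata:
  name: demo-1-c4dxk
  namespace: test
spec:
  type: ClusterIP
  selector:
    deployment: demo-1
    deploymentconfig: demo
    run: demo
  ports:
  - protocol: TCP
    port: 80          # port exposed on the ClusterIP / load balancer
    targetPort: 8080  # port the demo pod actually listens on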

The application of the NetworkPolicy rule below:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: np
spec:
  podSelector:
    matchLabels:
      run: demo
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: demo-allowed-caller

creates the following rule on the load balancer associated with this service:

(overcloud) [stack@undercloud-0 ~]$ openstack security group show 7445b306-ed5c-4293-9c0a-f6f2be5787d9 -f value -c rules
created_at='2020-05-18T15:23:03Z', description='test/demo-1-c4dxk:TCP:80', direction='ingress', ethertype='IPv4', id='1b5298a0-fa19-4269-b59d-4b06847ae847', port_range_max='80', port_range_min='80', protocol='tcp', remote_ip_prefix='10.128.106.21/32', updated_at='2020-05-18T15:23:03Z'
[...]

where remote_ip_prefix matches the demo-allowed-caller pod IP.
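
The allowed caller referenced by the policy's podSelector is assumed to be any pod in the same namespace carrying the run=demo-allowed-caller label; the pod names above suggest the test pods came from DeploymentConfigs, so the following minimal pod is only a hypothetical sketch:

kind: Pod
apiVersion: v1
metadata:
  name: demo-allowed-caller
  namespace: test
  labels:
    run: demo-allowed-caller   # matched by the NetworkPolicy ingress podSelector
spec:
  containers:
  - name: caller
    image: registry.access.redhat.com/ubi8/ubi   # assumed image; anything with curl works
    command: ["sleep", "infinity"]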

Connectivity confirmed:

(overcloud) [stack@undercloud-0 ~]$ oc rsh pod/demo-allowed-caller-1-pmskm curl 172.30.168.161 #allowed POD
demo-1-c4dxk: HELLO! I AM ALIVE!!!
(overcloud) [stack@undercloud-0 ~]$ oc rsh demo-caller-1-nf7q6 curl 172.30.168.161 # any other POD
^Ccommand terminated with exit code 130

Comment 5 errata-xmlrpc 2020-05-27 17:00:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2184

