Bug 1937459 - Wrong Subnet retrieved for Service without Selector
Summary: Wrong Subnet retrieved for Service without Selector
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Michał Dulko
QA Contact: Itzik Brown
URL:
Whiteboard:
Depends On:
Blocks: 1970320
 
Reported: 2021-03-10 17:05 UTC by Maysa Macedo
Modified: 2021-07-27 22:53 UTC (History)
2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:52:37 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift kuryr-kubernetes pull 476 0 None open Bug 1937459: Fix Subnet retrival when creating Service without Selector 2021-03-10 17:06:17 UTC
Red Hat Product Errata RHSA-2021:2438 0 None None None 2021-07-27 22:53:05 UTC

Description Maysa Macedo 2021-03-10 17:05:16 UTC
Description of problem:

When retrieving a Subnet to create the LB member with the ovn provider
for a Service without a selector whose Endpoints addresses do not come from Pods,
only the subnet ID is retrieved when it should be both the subnet ID and the CIDR,
as the CIDR is needed to verify whether the member IP falls within it.

When only the subnet is retrieved, the following traceback occurs:

2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging Traceback (most recent call last):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/handlers/logging.py", line 37, in __call__
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging self._handler(event, *args, **kwargs)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/handlers/retry.py", line 80, in __call__
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging self._handler(event, *args, **kwargs)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/handlers/k8s_base.py", line 84, in __call__
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging self.on_present(obj)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 81, in on_present
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging if self._sync_lbaas_members(loadbalancer_crd):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 181, in _sync_lbaas_members
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging self._add_new_members(loadbalancer_crd)):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 289, in _add_new_members
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging target_pod, target_ip, loadbalancer_crd)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 364, in _get_subnet_by_octavia_mode
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging self._drv_nodes_subnets.get_nodes_subnets(), target_ip)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/utils.py", line 619, in get_subnet_by_ip
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging if ip in ipaddress.ip_network(nodes_subnet[1]):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging File "/usr/lib64/python3.6/ipaddress.py", line 84, in ip_network
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging address)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging ValueError: 'a' does not appear to be an IPv4 or IPv6 network
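The `ValueError: 'a' does not appear to be an IPv4 or IPv6 network` is a symptom of `get_subnet_by_ip` expecting `(subnet_id, cidr)` tuples but receiving bare subnet ID strings: indexing the string with `[1]` yields its second character (here `'a'`) instead of a CIDR. A minimal sketch of the failure and the fixed calling convention (function simplified from the traceback; the subnet ID and addresses below are made-up placeholders):

```python
import ipaddress

def get_subnet_by_ip(nodes_subnets, target_ip):
    """Return the (subnet_id, cidr) entry whose CIDR contains target_ip."""
    ip = ipaddress.ip_address(target_ip)
    for nodes_subnet in nodes_subnets:
        # If a caller passes a bare subnet ID string instead of a
        # (subnet_id, cidr) tuple, nodes_subnet[1] is the second character
        # of the ID and ip_network() raises ValueError, as in the
        # traceback above.
        if ip in ipaddress.ip_network(nodes_subnet[1]):
            return nodes_subnet
    return None

# Fixed behaviour: the driver returns (subnet_id, cidr) tuples, so the
# CIDR membership check can actually run.
subnets = [("0a1b2c3d-subnet-id", "10.196.0.0/16")]
matched = get_subnet_by_ip(subnets, "10.196.3.141")
```

With the tuple form, a host-network member IP such as a node address resolves to its subnet; with the bare-ID form the handler crashes before the member can be added.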


Version-Release number of selected component (if applicable):


How reproducible:

Create a Service without a selector that points to an address on the nodes' network.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Maysa Macedo 2021-04-27 10:48:31 UTC
Steps to reproduce/verify:

1. Install a cluster with the OVN Octavia provider configured
2. Create a Pod running on the host network
3. Create a Service without a selector, e.g. https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
4. Create an Endpoints object pointing to that Pod on the host network
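The pairing in steps 3 and 4 hinges on the Endpoints object carrying the same name as the Service; since there is no selector, Kubernetes will not create Endpoints automatically. A minimal sketch of the two objects (addresses are example values; Comment 3 below has the full manifests used for verification):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo              # no spec.selector, so Kubernetes does not
spec:                     # manage Endpoints for this Service
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: demo              # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.196.3.141  # host-network Pod IP (example value)
    ports:
      - port: 8080
```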

Comment 3 Itzik Brown 2021-05-12 10:54:29 UTC
OC 4.8.0-0.nightly-2021-05-10-225140
OSP RHOS-16.1-RHEL-8-20210323.n.0

deploy.yaml
------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      hostNetwork: true
      containers:
      - name: demo
        image: quay.io/kuryr/demo
        ports:
        - containerPort: 8080

service.yaml
-------------
apiVersion: v1
kind: Service
metadata:
  name: demo
  labels:
    app: demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080

Create pods 
------------
$ oc create -f deploy.yaml

Create a service
----------------
$ oc create -f service.yaml

Get pods addresses
------------------
$ oc get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE                            NOMINATED NODE   READINESS GATES
demo-7f5775d4fd-49vgw   1/1     Running   0          39m   10.196.3.141   ostest-9m8zj-worker-100-42rrx   <none>           <none>
demo-7f5775d4fd-hkz95   1/1     Running   0          39m   10.196.2.6     ostest-9m8zj-worker-100-wxpkh   <none>           <none>


endpoints.yaml 
--------------
apiVersion: v1
kind: Endpoints
metadata:
  name: demo
subsets:
  - addresses:
      - ip: 10.196.3.141
      - ip: 10.196.2.6
    ports:
      - port: 8080

Create endpoints
----------------
$ oc create -f endpoints.yaml


Open port 8080 in the worker security group
-------------------------------------------
$ . shiftstackrc; openstack security group rule create --dst-port 8080 --ingress --protocol tcp ostest-9m8zj-worker

Get Service IP
--------------
$ oc get svc
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)   AGE
demo                      ClusterIP      172.30.56.222   <none>                                 80/TCP    92m

Connect to the service from a pod 
---------------------------------
$ oc exec -it  demo-7f5775d4fd-49vgw -- curl 172.30.56.222
ostest-9m8zj-worker-100-42rrx: HELLO! I AM ALIVE!!!
$ oc exec -it  demo-7f5775d4fd-49vgw -- curl 172.30.56.222
ostest-9m8zj-worker-100-wxpkh: HELLO! I AM ALIVE!!!

Comment 6 errata-xmlrpc 2021-07-27 22:52:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

