Description of problem:
When retrieving a subnet for creating the LB member with the OVN provider for a Service without selector whose Endpoints addresses do not come from Pods, only the subnet ID is retrieved, when it should be both the subnet ID and the CIDR, as the CIDR is needed to verify whether the member IP falls within it. When only the subnet ID is retrieved, the following traceback happens:

2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging Traceback (most recent call last):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/handlers/logging.py", line 37, in __call__
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     self._handler(event, *args, **kwargs)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/handlers/retry.py", line 80, in __call__
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     self._handler(event, *args, **kwargs)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/handlers/k8s_base.py", line 84, in __call__
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     self.on_present(obj)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 81, in on_present
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     if self._sync_lbaas_members(loadbalancer_crd):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 181, in _sync_lbaas_members
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     self._add_new_members(loadbalancer_crd)):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 289, in _add_new_members
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     target_pod, target_ip, loadbalancer_crd)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/controller/handlers/loadbalancer.py", line 364, in _get_subnet_by_octavia_mode
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     self._drv_nodes_subnets.get_nodes_subnets(), target_ip)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/local/lib/python3.6/site-packages/kuryr_kubernetes/utils.py", line 619, in get_subnet_by_ip
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     if ip in ipaddress.ip_network(nodes_subnet[1]):
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging   File "/usr/lib64/python3.6/ipaddress.py", line 84, in ip_network
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging     address)
2021-03-03 01:14:10.397 1 ERROR kuryr_kubernetes.handlers.logging ValueError: 'a' does not appear to be an IPv4 or IPv6 network

Version-Release number of selected component (if applicable):

How reproducible:
Always.

Steps to Reproduce:
1. Create a Service without selector pointing to an address on the nodes' network.

Actual results:
The traceback above; the LB member is not created.

Expected results:
The LB member is created.

Additional info:
Steps to reproduce/verify:
1. Install a cluster with the OVN Octavia provider configured.
2. Create a Pod running on host-network.
3. Create a Service without selector, e.g. https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
4. Create an Endpoints object pointing to that Pod on host-network.
OCP 4.8.0-0.nightly-2021-05-10-225140
OSP RHOS-16.1-RHEL-8-20210323.n.0

deploy.yaml
------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      hostNetwork: true
      containers:
      - name: demo
        image: quay.io/kuryr/demo
        ports:
        - containerPort: 8080

service.yaml
-------------
apiVersion: v1
kind: Service
metadata:
  name: demo
  labels:
    app: demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080

Create pods
------------
$ oc create -f deploy.yaml

Create a service
-----------------
$ oc create -f service.yaml

Get pods addresses
------------------
$ oc get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE                            NOMINATED NODE   READINESS GATES
demo-7f5775d4fd-49vgw   1/1     Running   0          39m   10.196.3.141   ostest-9m8zj-worker-100-42rrx   <none>           <none>
demo-7f5775d4fd-hkz95   1/1     Running   0          39m   10.196.2.6     ostest-9m8zj-worker-100-wxpkh   <none>           <none>

endpoints.yaml
--------------
apiVersion: v1
kind: Endpoints
metadata:
  name: demo
subsets:
- addresses:
  - ip: 10.196.3.141
  - ip: 10.196.2.6
  ports:
  - port: 8080

Create endpoints
----------------
$ oc create -f endpoints.yaml

Open port 8080 in the worker security group
-------------------------------------------
$ . shiftstackrc; openstack security group rule create --dst-port 8080 --ingress --protocol tcp ostest-9m8zj-worker

Get Service IP
--------------
$ oc get svc
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo   ClusterIP   172.30.56.222   <none>        80/TCP    92m

Connect to the service from a pod
---------------------------------
$ oc exec -it demo-7f5775d4fd-49vgw -- curl 172.30.56.222
ostest-9m8zj-worker-100-42rrx: HELLO! I AM ALIVE!!!
$ oc exec -it demo-7f5775d4fd-49vgw -- curl 172.30.56.222
ostest-9m8zj-worker-100-wxpkh: HELLO! I AM ALIVE!!!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438