Description of problem:

When creating a health monitor for an IPv6 load balancer with six members, only two members should remain active, but all six stay ONLINE. This results in intermittent failures to reach the application running on the members, because traffic keeps being balanced to members that should have been marked offline. Here are more details:

(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------------------------+----------------------------------+----------------------------------------+---------------------+------------------+----------+
| id                                   | name                                                             | project_id                       | vip_address                            | provisioning_status | operating_status | provider |
+--------------------------------------+------------------------------------------------------------------+----------------------------------+----------------------------------------+---------------------+------------------+----------+
| dceb09cc-e0d9-4d8e-89a6-e542717f9424 | kube_service_kubernetes_e2e-test-openstack-r6pc4_lb-etplocal-svc | fc03f8fefada405aa3b92a5cf9051387 | fd2e:6f44:5dd8:c956:f816:3eff:feac:66a | ACTIVE              | ONLINE           | ovn      |
+--------------------------------------+------------------------------------------------------------------+----------------------------------+----------------------------------------+---------------------+------------------+----------+

(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer pool list
+--------------------------------------+-------------------------------------------------------------------------+----------------------------------+---------------------+----------+----------------+----------------+
| id                                   | name                                                                    | project_id                       | provisioning_status | protocol | lb_algorithm   | admin_state_up |
+--------------------------------------+-------------------------------------------------------------------------+----------------------------------+---------------------+----------+----------------+----------------+
| 0537fa39-a81c-4e3f-82e1-008519ca84d6 | pool_0_kube_service_kubernetes_e2e-test-openstack-r6pc4_lb-etplocal-svc | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | TCP      | SOURCE_IP_PORT | True           |
+--------------------------------------+-------------------------------------------------------------------------+----------------------------------+---------------------+----------+----------------+----------------+

(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer member list pool_0_kube_service_kubernetes_e2e-test-openstack-r6pc4_lb-etplocal-svc
+--------------------------------------+-----------------------------+----------------------------------+---------------------+-----------------------------------------+---------------+------------------+--------+
| id                                   | name                        | project_id                       | provisioning_status | address                                 | protocol_port | operating_status | weight |
+--------------------------------------+-----------------------------+----------------------------------+---------------------+-----------------------------------------+---------------+------------------+--------+
| 46f2b898-c4eb-483d-8d03-db4072ce7d4f | ostest-hqqcn-master-0       | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac | 30501         | ONLINE           | 1      |
| 7d073ca2-0938-4932-91cc-d455635dd13c | ostest-hqqcn-worker-0-6srwp | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a | 30501         | ONLINE           | 1      |
| 80e11fb3-dfd1-4329-be09-e3224d3a36d9 | ostest-hqqcn-master-1       | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2 | 30501         | ONLINE           | 1      |
| afed0710-ac65-4807-af31-41344faad470 | ostest-hqqcn-worker-0-ttq7x | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218 | 30501         | ONLINE           | 1      |
| c5c6429f-f595-47fa-a94a-2dd11eb38d27 | ostest-hqqcn-worker-0-rsf77 | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e | 30501         | ONLINE           | 1      |
| fd27b086-039c-4b16-8f0b-8bdc973214ba | ostest-hqqcn-master-2       | fc03f8fefada405aa3b92a5cf9051387 | ACTIVE              | fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0 | 30501         | ONLINE           | 1      |
+--------------------------------------+-----------------------------+----------------------------------+---------------------+-----------------------------------------+---------------+------------------+--------+

(shiftstack) [stack@undercloud-0 ~]$ openstack loadbalancer healthmonitor list
+--------------------------------------+----------------------------------------------------------------------------+----------------------------------+------+----------------+
| id                                   | name                                                                       | project_id                       | type | admin_state_up |
+--------------------------------------+----------------------------------------------------------------------------+----------------------------------+------+----------------+
| 7fe39588-998a-45b7-86b0-85952b36c253 | monitor_0_kube_service_kubernetes_e2e-test-openstack-r6pc4_lb-etplocal-svc | fc03f8fefada405aa3b92a5cf9051387 | TCP  | True           |
+--------------------------------------+----------------------------------------------------------------------------+----------------------------------+------+----------------+

(shiftstack) [stack@undercloud-0 ~]$ oc get po -A -o wide |grep e2e
e2e-test-openstack-r6pc4   lb-etplocal-dep-785b4575d8-fqmdz   1/1   Running   0   111s   fd01:0:0:4::1b   ostest-hqqcn-worker-0-ttq7x   <none>   <none>
e2e-test-openstack-r6pc4   lb-etplocal-dep-785b4575d8-mdrs6   1/1   Running   0   111s   fd01:0:0:6::d    ostest-hqqcn-worker-0-6srwp   <none>   <none>

Only two application pods exist, on ostest-hqqcn-worker-0-ttq7x and ostest-hqqcn-worker-0-6srwp, so with externalTrafficPolicy: Local only the members for those two nodes should stay ONLINE.
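The members' operating status can also be pulled programmatically. A minimal sketch with openstacksdk, assuming the same "shiftstack" clouds.yaml entry that the CLI commands above use:

import openstack

# Connect using the "shiftstack" cloud entry (assumption: same
# credentials as the CLI session above).
conn = openstack.connect(cloud="shiftstack")
pool = conn.load_balancer.find_pool(
    "pool_0_kube_service_kubernetes_e2e-test-openstack-r6pc4_lb-etplocal-svc")
for member in conn.load_balancer.members(pool):
    # With ETP:Local and a working monitor, only the members on
    # ostest-hqqcn-worker-0-ttq7x and -6srwp should print ONLINE.
    print(member.name, member.address, member.operating_status)

On this deployment the loop prints ONLINE for all six members, matching the CLI output above.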
sh-5.1# ovn-nbctl list load_balancer
_uuid               : 11058080-785b-4fe5-a1b2-b047bbf93b87
external_ids        : {enabled=True, listener_2c62ffa0-5397-4f95-926e-6e990a1df03e="8082:pool_0537fa39-a81c-4e3f-82e1-008519ca84d6", lr_ref=neutron-94f17de0-91bc-4b3d-b808-e2cbdf963c66, ls_refs="{\"neutron-eba8acfd-b0e4-4874-b106-fa8542a82c4e\": 7}", "neutron:member_status"="{\"46f2b898-c4eb-483d-8d03-db4072ce7d4f\": \"ONLINE\", \"80e11fb3-dfd1-4329-be09-e3224d3a36d9\": \"ONLINE\", \"fd27b086-039c-4b16-8f0b-8bdc973214ba\": \"ONLINE\", \"7d073ca2-0938-4932-91cc-d455635dd13c\": \"ONLINE\", \"c5c6429f-f595-47fa-a94a-2dd11eb38d27\": \"ONLINE\", \"afed0710-ac65-4807-af31-41344faad470\": \"ONLINE\"}", "neutron:vip"="fd2e:6f44:5dd8:c956:f816:3eff:feac:66a", "neutron:vip_port_id"="5784c74e-83da-4a60-81db-faa23448a53c", "octavia:healthmonitors"="[\"7fe39588-998a-45b7-86b0-85952b36c253\"]", pool_0537fa39-a81c-4e3f-82e1-008519ca84d6="member_46f2b898-c4eb-483d-8d03-db4072ce7d4f_fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac:30501_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_80e11fb3-dfd1-4329-be09-e3224d3a36d9_fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2:30501_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_fd27b086-039c-4b16-8f0b-8bdc973214ba_fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0:30501_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_7d073ca2-0938-4932-91cc-d455635dd13c_fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a:30501_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_c5c6429f-f595-47fa-a94a-2dd11eb38d27_fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e:30501_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_afed0710-ac65-4807-af31-41344faad470_fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218:30501_38898007-e0de-4cdf-b83e-ec8c5113bfd6"}
health_check        : [3ad121e5-08a8-4ec4-b7a2-b5dbadcc689d]
ip_port_mappings    : {"fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a"="c5f06200-d036-432a-b1f2-8266075cfb0e:fd2e:6f44:5dd8:c956:f816:3eff:fe4d:3ab9", "fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e"="0f8c9ee5-e322-4101-a74a-e9dd8b4db132:fd2e:6f44:5dd8:c956:f816:3eff:fe4d:3ab9", "fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac"="2c60050e-1732-49e6-b194-3981d015fa5e:fd2e:6f44:5dd8:c956:f816:3eff:fe4d:3ab9", "fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2"="5609e438-b02e-48b0-a188-1bc53be90835:fd2e:6f44:5dd8:c956:f816:3eff:fe4d:3ab9", "fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0"="f148f0c3-8d0d-4d00-94b3-bbb3b68cc8d8:fd2e:6f44:5dd8:c956:f816:3eff:fe4d:3ab9", "fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218"="6fa3c8cc-c1c7-45a7-a445-b7c50324a469:fd2e:6f44:5dd8:c956:f816:3eff:fe4d:3ab9"}
name                : "dceb09cc-e0d9-4d8e-89a6-e542717f9424"
options             : {}
protocol            : tcp
selection_fields    : [ip_dst, ip_src, tp_dst, tp_src]
vips                : {"[fd2e:6f44:5dd8:c956:f816:3eff:feac:66a]:8082"="[fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac]:30501,[fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2]:30501,[fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0]:30501,[fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a]:30501,[fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e]:30501,[fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218]:30501"}
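Note that the "neutron:member_status" external_id is a JSON-encoded string. A small helper sketch to decode OVN's view of member health for comparison with the Octavia member list, assuming external_ids has already been loaded as a Python dict (for example from ovn-nbctl --format=json output):

import json

def ovn_member_status(external_ids):
    """Decode the neutron:member_status external_id into {member_uuid: status}."""
    return json.loads(external_ids["neutron:member_status"])

Decoding the value shown above yields ONLINE for all six member UUIDs, so OVN and Octavia agree on the (incorrect) status.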
Some errors in the logs:

2024-02-26 23:34:13.040 12 DEBUG ovn_octavia_provider.maintenance [-] Maintenance task: checking device_owner for OVN LB HM ports. change_device_owner_lb_hm_ports /usr/lib/python3.9/site-packages/ovn_octavia_provider/maintenance.py:76
2024-02-26 23:34:13.040 12 ERROR futurist.periodics [-] Failed to call periodic 'ovn_octavia_provider.maintenance.DBInconsistenciesPeriodics.change_device_owner_lb_hm_ports' (it runs every 600.00 seconds): AttributeError: 'Client' object has no attribute 'ports'
2024-02-26 23:34:13.040 12 ERROR futurist.periodics Traceback (most recent call last):
2024-02-26 23:34:13.040 12 ERROR futurist.periodics   File "/usr/lib/python3.9/site-packages/futurist/periodics.py", line 293, in run
2024-02-26 23:34:13.040 12 ERROR futurist.periodics     work()
2024-02-26 23:34:13.040 12 ERROR futurist.periodics   File "/usr/lib/python3.9/site-packages/futurist/periodics.py", line 67, in __call__
2024-02-26 23:34:13.040 12 ERROR futurist.periodics     return self.callback(*self.args, **self.kwargs)
2024-02-26 23:34:13.040 12 ERROR futurist.periodics   File "/usr/lib/python3.9/site-packages/futurist/periodics.py", line 181, in decorator
2024-02-26 23:34:13.040 12 ERROR futurist.periodics     return f(*args, **kwargs)
2024-02-26 23:34:13.040 12 ERROR futurist.periodics   File "/usr/lib/python3.9/site-packages/ovn_octavia_provider/maintenance.py", line 79, in change_device_owner_lb_hm_ports
2024-02-26 23:34:13.040 12 ERROR futurist.periodics     ovn_lb_hm_ports = neutron_client.ports(
2024-02-26 23:34:13.040 12 ERROR futurist.periodics AttributeError: 'Client' object has no attribute 'ports'

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. On an IPv6 OpenShift-on-OpenStack cluster using the OVN Octavia provider, create a LoadBalancer Service with externalTrafficPolicy: Local backed by two pods; a pool is created with one member per node (six in total).
2. Create a TCP health monitor on the pool.
3. Check the operating_status of the pool members.

Actual results:
All six members stay ONLINE, and connections to the application intermittently fail.

Expected results:
Only the two members whose nodes host the application pods remain ONLINE; the other four are marked offline by the health monitor.

Additional info:
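The AttributeError above looks like a client-API mismatch: ports() is how the openstacksdk network proxy lists ports, whereas the python-neutronclient Client that the maintenance task actually received exposes list_ports() instead. A minimal sketch of a lookup that tolerates either client flavor; the device_owner filter value here is illustrative, not necessarily the constant the provider really uses:

def list_lb_hm_ports(neutron_client, device_owner="ovn-lb-hm:distributed"):
    """List OVN LB health-monitor ports with either Neutron client flavor."""
    if hasattr(neutron_client, "ports"):
        # openstacksdk network proxy, e.g. openstack.connect().network
        return list(neutron_client.ports(device_owner=device_owner))
    # python-neutronclient Client, the type that raised the AttributeError
    return neutron_client.list_ports(device_owner=device_owner)["ports"]

The failing periodic is the maintenance task that adjusts the device_owner of OVN LB HM ports; per the log it aborts on every run (every 600 seconds).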
Failed on RHOS-17.1-RHEL-9-20240329.n.1. The test case "[sig-installer][Suite:openshift/openstack][lb][Serial] The Openstack platform should apply lb-method on TCP OVN LoadBalancer when a TCP svc with monitors and ETP:Local is created on Openshift" is still failing: all LB members are ONLINE even though the health monitor is in place. OCP used: 4.15.0-0.nightly-2024-04-07-120427.
Fix verified on RHOS-17.1-RHEL-9-20240501.n.1. Running the test cases with OCP 4.16.0-0.nightly-2024-05-01-111315, both pass:

[stack@undercloud-0 lb_ovn]$ grep ^passed openstack-test.log | grep monitor
passed: (48.4s) 2024-05-03T12:51:00 "[sig-installer][Suite:openshift/openstack][lb][Serial] The Openstack platform should apply lb-method on UDP OVN LoadBalancer when a UDP svc with monitors and ETP:Local is created on Openshift"
passed: (32.6s) 2024-05-03T12:51:38 "[sig-installer][Suite:openshift/openstack][lb][Serial] The Openstack platform should apply lb-method on TCP OVN LoadBalancer when a TCP svc with monitors and ETP:Local is created on Openshift"

where:

$ oc get cm -n openshift-config cloud-provider-config -o json | jq -r .data.config
[Global]
secret-name = openstack-credentials
secret-namespace = kube-system
region = regionOne
ca-file = /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem

[LoadBalancer]
floating-network-id = 46c40e51-8942-4b7a-ae52-3ca9aa955881
lb-provider = ovn
lb-method = SOURCE_IP_PORT
create-monitor = False
monitor-delay = 10s
monitor-timeout = 10s
monitor-max-retries = 1
max-shared-lb = 2
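For completeness, the [LoadBalancer] settings above can be sanity-checked with a few lines of Python, assuming the jq output has been saved to a local cloud.conf file:

import configparser

# Parse the cloud-provider config dumped by the jq command above.
cfg = configparser.ConfigParser()
cfg.read("cloud.conf")
lb = cfg["LoadBalancer"]
# The test cases exercise the OVN provider with the SOURCE_IP_PORT
# balancing method, so confirm both are set.
assert lb["lb-provider"] == "ovn"
assert lb["lb-method"] == "SOURCE_IP_PORT"
print({key: value for key, value in lb.items() if key.startswith("monitor")})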
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 17.1.3 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2024:2741