Description of problem:

Bulk member update returns 500 on a PUT request:

2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils [req-8ad8a6ce-59cc-4531-94fa-d7918406d19f - 25b1efe043b54816ab6bd8e2b6b0d9c8 - default default] Provider 'ovn' raised an unknown error: member_batch_update() missing 1 required positional argument: 'members': TypeError: member_batch_update() missing 1 required positional argument: 'members'
2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils Traceback (most recent call last):
2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils   File "/usr/lib/python3.6/site-packages/octavia/api/drivers/utils.py", line 55, in call_provider
2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils     return driver_method(*args, **kwargs)
2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils TypeError: member_batch_update() missing 1 required positional argument: 'members'
2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils
2022-06-22 14:05:54.636 14 ERROR wsme.api [req-8ad8a6ce-59cc-4531-94fa-d7918406d19f - 25b1efe043b54816ab6bd8e2b6b0d9c8 - default default] Server-side error: "Provider 'ovn' reports error: member_batch_update() missing 1 required positional argument: 'members'". Detail:
Traceback (most recent call last):

  File "/usr/lib/python3.6/site-packages/octavia/api/drivers/utils.py", line 55, in call_provider
    return driver_method(*args, **kwargs)

TypeError: member_batch_update() missing 1 required positional argument: 'members'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

  File "/usr/lib/python3.6/site-packages/wsmeext/pecan.py", line 85, in callfunction
    result = f(self, *args, **kwargs)

  File "/usr/lib/python3.6/site-packages/octavia/api/v2/controllers/member.py", line 431, in put
    driver.name, driver.member_batch_update, provider_members)

  File "/usr/lib/python3.6/site-packages/octavia/api/drivers/utils.py", line 85, in call_provider
    raise exceptions.ProviderDriverError(prov=provider, user_msg=e)

octavia.common.exceptions.ProviderDriverError: Provider 'ovn' reports error: member_batch_update() missing 1 required positional argument: 'members'
: octavia.common.exceptions.ProviderDriverError: Provider 'ovn' reports error: member_batch_update() missing 1 required positional argument: 'members'

https://opendev.org/openstack/networking-ovn/src/branch/stable/train/networking_ovn/octavia/ovn_driver.py#L2448 - the OVN driver's member_batch_update implementation.
https://opendev.org/openstack/octavia/src/branch/stable/train/octavia/api/drivers/amphora_driver/v2/driver.py#L217 - the same method in the Amphora driver; note that its definition (signature) differs.

Version-Release number of selected component (if applicable):
Observed on 16.2_20220418.1, but it most likely affects other releases as well, since the bug is in upstream stable/train.

How reproducible:
Always

Steps to Reproduce:
1. Perform a batch member update on a load balancer that uses the OVN provider (a standalone sketch of the failure is included under Additional info below).

Actual results:
HTTP 500

Expected results:
The batch member update succeeds.

Additional info:
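The following is a standalone Python sketch of the failure mode, not the real Octavia or networking-ovn code; the class names and the member dict are made up for illustration. It mimics how the Train API controller passes a single positional argument to the provider driver while the backported OVN driver expects two, which produces the same TypeError as in the traceback above.

class TrainMemberController:
    """Mimics octavia/api/v2/controllers/member.py on stable/train."""
    def put(self, driver, provider_members):
        # Train passes only the member list as a positional argument.
        return driver.member_batch_update(provider_members)

class BackportedOvnDriver:
    """Mimics the OVN driver after the Ussuri-style signature change."""
    def member_batch_update(self, pool_id, members):
        return pool_id, members

TrainMemberController().put(BackportedOvnDriver(), [{"address": "10.196.2.206"}])
# TypeError: member_batch_update() missing 1 required positional argument: 'members'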
The prototype of the member_batch_update function in the provider API in Train (16.x) is:

https://opendev.org/openstack/octavia/src/branch/stable/train/octavia/api/drivers/amphora_driver/v2/driver.py#L217

def member_batch_update(self, members)

A change (https://review.opendev.org/c/openstack/octavia/+/688548/) was later added in Ussuri; the function is now:

https://opendev.org/openstack/octavia/src/branch/stable/ussuri/octavia/api/drivers/amphora_driver/v2/driver.py#L248

def member_batch_update(self, pool_id, members)

So this change: https://review.opendev.org/c/openstack/networking-ovn/+/746134 should never have been backported to Train.
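To make the incompatibility concrete, here is a small illustrative sketch of both calling conventions and of a driver method that tolerates either of them. This is only an illustration (ExampleDriver is a made-up name), not the actual networking-ovn code nor the fix that was merged.

# stable/train (16.x): the API controller calls
#     driver.member_batch_update(members)
# stable/ussuri and later: the API controller calls
#     driver.member_batch_update(pool_id, members)

class ExampleDriver:
    def member_batch_update(self, *args):
        if len(args) == 1:
            # Train-style call: only the member list is passed; the pool can
            # be derived from the members themselves (each carries a pool_id).
            members = args[0]
            pool_id = members[0].pool_id if members else None
        else:
            # Ussuri-style call: pool_id is passed explicitly.
            pool_id, members = args
        return pool_id, members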
Verified on RHOS-16.2-RHEL-8-20221104.n.0 using 4.12.0-0.nightly-2022-11-07-181244.

Creating an OCP load balancer with the OVN provider enabled and then updating the number of workers, so that a change in the pool members is triggered by the CCM, no longer causes an exception in the octavia_api container on the controllers. The pool remains operative and the functionality is correctly provided.

1. Enable the OVN provider on OCP:

$ oc patch cm/cloud-provider-config -n openshift-config --patch-file cloud-provider-config.patch.yaml

and wait until the change is applied.

2. Create a namespace with some pods and a loadbalancer service:

$ oc apply -f default-manifests.yaml

3. Check that the loadbalancer is created in openstack and is functioning:

$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------------------------+----------------------------------+--------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+--------------------------------------+------------------------------------------------------------------+----------------------------------+--------------+---------------------+----------+
| cbba06c6-bee1-465c-8f79-87a42dda7d7d | kube_service_kubernetes_udp-lb-default-ovn-ns_udp-lb-default-svc | a67099008f734333b971c5ef5d00f1b3 | 10.196.3.119 | ACTIVE | ovn |
+--------------------------------------+------------------------------------------------------------------+----------------------------------+--------------+---------------------+----------+

$ openstack loadbalancer pool list --loadbalancer kube_service_kubernetes_udp-lb-default-ovn-ns_udp-lb-default-svc
+--------------------------------------+-------------------------------------------------------------------------+----------------------------------+---------------------+----------+----------------+----------------+
| id | name | project_id | provisioning_status | protocol | lb_algorithm | admin_state_up |
+--------------------------------------+-------------------------------------------------------------------------+----------------------------------+---------------------+----------+----------------+----------------+
| 1157626a-bd78-4c1a-95b7-9c52911207b7 | pool_0_kube_service_kubernetes_udp-lb-default-ovn-ns_udp-lb-default-svc | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | UDP | SOURCE_IP_PORT | True |
+--------------------------------------+-------------------------------------------------------------------------+----------------------------------+---------------------+----------+----------------+----------------+

$ openstack loadbalancer member list pool_0_kube_service_kubernetes_udp-lb-default-ovn-ns_udp-lb-default-svc
+--------------------------------------+-----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight |
+--------------------------------------+-----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| 14bddf1d-4ee2-4e24-92b3-eb77ae953e69 | ostest-6pb7h-worker-0-t8jxx | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.2.128 | 30861 | ONLINE | 1 |
| 7753634f-53db-4aad-b991-194e2e8e7bb6 | ostest-6pb7h-worker-0-qgbpv | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.3.150 | 30861 | ONLINE | 1 |
| b93afab0-2704-4f2c-9e1b-325e159e9b3d | ostest-6pb7h-worker-0-pc7zg | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.1.179 | 30861 | ONLINE | 1 |
| c9ae004c-ba6c-43b2-8eac-3d3b08811b29 | ostest-6pb7h-worker-0-6zf54 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.0.168 | 30861 | ONLINE | 1 |
| df29239c-baa3-49c4-8062-1a8dfa70ef14 | ostest-6pb7h-worker-0-zcl7q | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.0.248 | 30861 | ONLINE | 1 |
| fa914724-68ba-4304-ba66-d57fc6dcc1b7 | ostest-6pb7h-master-2 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.0.215 | 30861 | ONLINE | 1 |
| ff1f0fb2-ef65-4495-af27-a3d6d2c83f16 | ostest-6pb7h-master-0 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.1.7 | 30861 | ONLINE | 1 |
| 3a0810ce-507b-49f7-94de-f61f218354a5 | ostest-6pb7h-master-1 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.2.186 | 30861 | NO_MONITOR | 1 |
+--------------------------------------+-----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+

$ cat <(echo hostname) <(sleep 1) | nc -w 1 -u 10.46.43.160 8082
udp-lb-default-dep-7b8784986b-njlx6
$ cat <(echo hostname) <(sleep 1) | nc -w 1 -u 10.46.43.160 8082
udp-lb-default-dep-7b8784986b-vfqgf
$ oc get endpoints/udp-lb-default-svc -n udp-lb-default-ovn-ns -o json | jq .subsets[].addresses[].targetRef.name
"udp-lb-default-dep-7b8784986b-vfqgf"
"udp-lb-default-dep-7b8784986b-njlx6"

4. With the given members, spawn a new worker just to trigger the batch member update on octavia:

$ oc scale machineset/ostest-6pb7h-worker-0 -n openshift-machine-api --replicas=6
machineset.machine.openshift.io/ostest-6pb7h-worker-0 scaled

[root@controller-2 ~]# podman exec -it octavia_api tail -f /var/log/octavia/octavia.log
[...]
2022-11-09 09:29:48.786 17 DEBUG octavia.db.repositories [req-5ded39e5-4ff1-4392-a540-17a63f25ed6e - a67099008f734333b971c5ef5d00f1b3 - default default] Checking quota for project: a67099008f734333b971c5ef5d00f1b3 object: <class 'octavia.common.data_models.Member'> check_quota_met /usr/lib/python3.6/site-packages/octavia/db/repositories.py:372
2022-11-09 09:29:48.837 17 INFO octavia.api.v2.controllers.member [req-5ded39e5-4ff1-4392-a540-17a63f25ed6e - a67099008f734333b971c5ef5d00f1b3 - default default] Sending Pool 1157626a-bd78-4c1a-95b7-9c52911207b7 batch member update to provider ovn
2022-11-09 09:29:48.845 17 DEBUG networking_ovn.octavia.ovn_driver [-] Handling request member_create with info {'id': '38f7ef93-8b1e-487e-a7ea-30a2b0aa89d6', 'address': '10.196.2.206', 'protocol_port': 30861, 'pool_id': '1157626a-bd78-4c1a-95b7-9c52911207b7', 'subnet_id': '6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d', 'admin_state_up': True} request_handler /usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py:500
2022-11-09 09:29:48.848 17 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(table=Load_Balancer, record=db24dfbb-947f-4ccb-8f68-139339259154, col_values=(('external_ids', {'pool_1157626a-bd78-4c1a-95b7-9c52911207b7': 'member_fa914724-68ba-4304-ba66-d57fc6dcc1b7_10.196.0.215:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_c9ae004c-ba6c-43b2-8eac-3d3b08811b29_10.196.0.168:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_df29239c-baa3-49c4-8062-1a8dfa70ef14_10.196.0.248:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_14bddf1d-4ee2-4e24-92b3-eb77ae953e69_10.196.2.128:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_b93afab0-2704-4f2c-9e1b-325e159e9b3d_10.196.1.179:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_7753634f-53db-4aad-b991-194e2e8e7bb6_10.196.3.150:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_ff1f0fb2-ef65-4495-af27-a3d6d2c83f16_10.196.1.7:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_3a0810ce-507b-49f7-94de-f61f218354a5_10.196.2.186:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d,member_38f7ef93-8b1e-487e-a7ea-30a2b0aa89d6_10.196.2.206:30861_6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d'}),)) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:88
2022-11-09 09:29:48.851 17 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbClearCommand(table=Load_Balancer, record=db24dfbb-947f-4ccb-8f68-139339259154, column=vips) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:88
2022-11-09 09:29:48.851 17 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(table=Load_Balancer, record=db24dfbb-947f-4ccb-8f68-139339259154, col_values=(('vips', {'10.196.3.119:8082': '10.196.0.215:30861,10.196.0.168:30861,10.196.0.248:30861,10.196.2.128:30861,10.196.1.179:30861,10.196.3.150:30861,10.196.1.7:30861,10.196.2.186:30861,10.196.2.206:30861', '10.46.43.160:8082': '10.196.0.215:30861,10.196.0.168:30861,10.196.0.248:30861,10.196.2.128:30861,10.196.1.179:30861,10.196.3.150:30861,10.196.1.7:30861,10.196.2.186:30861,10.196.2.206:30861'}),)) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:88

$ openstack loadbalancer member list pool_0_kube_service_kubernetes_udp-lb-default-ovn-ns_udp-lb-default-svc
+--------------------------------------+-----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| id | name | project_id | provisioning_status | address | protocol_port | operating_status | weight |
+--------------------------------------+-----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| 14bddf1d-4ee2-4e24-92b3-eb77ae953e69 | ostest-6pb7h-worker-0-t8jxx | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.2.128 | 30861 | ONLINE | 1 |
| 7753634f-53db-4aad-b991-194e2e8e7bb6 | ostest-6pb7h-worker-0-qgbpv | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.3.150 | 30861 | ONLINE | 1 |
| b93afab0-2704-4f2c-9e1b-325e159e9b3d | ostest-6pb7h-worker-0-pc7zg | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.1.179 | 30861 | ONLINE | 1 |
| c9ae004c-ba6c-43b2-8eac-3d3b08811b29 | ostest-6pb7h-worker-0-6zf54 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.0.168 | 30861 | ONLINE | 1 |
| df29239c-baa3-49c4-8062-1a8dfa70ef14 | ostest-6pb7h-worker-0-zcl7q | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.0.248 | 30861 | ONLINE | 1 |
| fa914724-68ba-4304-ba66-d57fc6dcc1b7 | ostest-6pb7h-master-2 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.0.215 | 30861 | ONLINE | 1 |
| ff1f0fb2-ef65-4495-af27-a3d6d2c83f16 | ostest-6pb7h-master-0 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.1.7 | 30861 | ONLINE | 1 |
| 3a0810ce-507b-49f7-94de-f61f218354a5 | ostest-6pb7h-master-1 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.2.186 | 30861 | ONLINE | 1 |
| 38f7ef93-8b1e-487e-a7ea-30a2b0aa89d6 | ostest-6pb7h-worker-0-cnjs6 | a67099008f734333b971c5ef5d00f1b3 | ACTIVE | 10.196.2.206 | 30861 | NO_MONITOR | 1 |
+--------------------------------------+-----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+

$ cat <(echo hostname) <(sleep 1) | nc -w 1 -u 10.46.43.160 8082
udp-lb-default-dep-7b8784986b-vfqgf
$ cat <(echo hostname) <(sleep 1) | nc -w 1 -u 10.46.43.160 8082
udp-lb-default-dep-7b8784986b-njlx6
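For completeness, the same code path can also be exercised without an OCP cluster by calling Octavia's batch member update endpoint (PUT /v2.0/lbaas/pools/{pool_id}/members) directly. The endpoint URL, token, and IDs below are placeholders for the environment; this is only a sketch of such a request, not part of the verification above.

import json
import urllib.request

OCTAVIA_URL = "https://overcloud.example.com:13876"  # placeholder endpoint
TOKEN = "..."                                        # placeholder Keystone token
POOL_ID = "1157626a-bd78-4c1a-95b7-9c52911207b7"
SUBNET_ID = "6bfa3b94-9e02-41ef-8f6f-bfb00b88f63d"

# The full desired member set; Octavia reconciles the pool to match it.
body = {"members": [
    {"address": "10.196.2.206", "protocol_port": 30861, "subnet_id": SUBNET_ID},
]}

req = urllib.request.Request(
    url="%s/v2.0/lbaas/pools/%s/members" % (OCTAVIA_URL, POOL_ID),
    data=json.dumps(body).encode(),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    method="PUT",
)
# On an unfixed deployment this request fails with HTTP 500 (urlopen raises
# HTTPError); with the fix it returns 202 Accepted.
print(urllib.request.urlopen(req).status)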
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8794