Bug 1965308

Summary: haproxy port on wrong subnet when network has multiple subnets
Product: Red Hat OpenStack
Reporter: anil venkata <vkommadi>
Component: openstack-octavia
Assignee: Gregory Thiemonge <gthiemon>
Status: CLOSED ERRATA
QA Contact: Bruna Bonguardo <bbonguar>
Severity: high
Priority: high
Version: 16.1 (Train)
CC: bperkins, gthiemon, ihrachys, jraju, lpeer, majopela, michjohn, oschwart, scohen
Target Milestone: beta
Keywords: Triaged
Target Release: 17.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-octavia-8.0.2-1.20221208181214.b0379d6.el9ost
Doc Type: Bug Fix
Doc Text:
Before this update, the Load-balancing service (octavia) could unplug a required subnet when you used different subnets from the same network as members' subnets. The members attached to this subnet were unreachable. With this update, the Load-balancing service does not unplug required subnets, and the load balancer can reach subnet members.
Last Closed: 2023-08-16 01:10:52 UTC
Type: Bug
Attachments: rally test case (attachment 1789005)

Description anil venkata 2021-05-27 12:28:20 UTC
I have two load balancers, LB1 and LB2. LB1's pool member is on subnet1 of network1, and LB2's pool member is on subnet2 of the same network1.
However, LB2's haproxy port gets its IP address on subnet1 instead of subnet2, so HTTP requests to LB2's VIP fail.

In this case, LB1 is:
2021-05-27 10:42:06.243 524357 INFO octavia-fully-populated-loadbalancer [-] Loadbalancer {'listeners': [{'l7policies': [], 'id': '0c43ff0a-0f49-45db-b1ec-7941b8241271', 'name': 's_rally_bd5a7d75_TBZ5VRWJ', 'description': '', 'provisioning_status': 'PENDING_CREATE', 'operating_status': 'OFFLINE', 'admin_state_up': True, 'protocol': 'HTTP', 'protocol_port': 80, 'connection_limit': -1, 'default_tls_container_ref': None, 'sni_container_refs': [], 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'default_pool_id': 'a19021a0-1fbd-4bbb-a157-e1544f96ef98', 'insert_headers': {}, 'created_at': '2021-05-27T10:41:09', 'updated_at': '2021-05-27T10:41:09', 'timeout_client_data': 50000, 'timeout_member_connect': 5000, 'timeout_member_data': 50000, 'timeout_tcp_inspect': 0, 'tags': [], 'client_ca_tls_container_ref': None, 'client_authentication': 'NONE', 'client_crl_container_ref': None, 'allowed_cidrs': None, 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'}], 'pools': [{'members': [{'id': 'dd00157a-f14c-4c5e-b4fd-a61ee7d187cd', 'name': '', 'operating_status': 'NO_MONITOR', 'provisioning_status': 'PENDING_CREATE', 'admin_state_up': True, 'address': '10.2.1.215', 'protocol_port': 80, 'weight': 1, 'backup': False, 'subnet_id': '11232f98-bb12-401c-8702-04e40058c32c', 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'created_at': '2021-05-27T10:41:09', 'updated_at': None, 'monitor_address': None, 'monitor_port': None, 'tags': [], 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'}], 'healthmonitor': None, 'id': 'a19021a0-1fbd-4bbb-a157-e1544f96ef98', 'name': 's_rally_bd5a7d75_n6yLAHrg', 'description': '', 'provisioning_status': 'PENDING_CREATE', 'operating_status': 'OFFLINE', 'admin_state_up': True, 'protocol': 'HTTP', 'lb_algorithm': 'ROUND_ROBIN', 'session_persistence': None, 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'listeners': [{'id': '0c43ff0a-0f49-45db-b1ec-7941b8241271'}], 'created_at': '2021-05-27T10:41:09', 'updated_at': None, 'tags': [], 'tls_container_ref': None, 'ca_tls_container_ref': None, 'crl_container_ref': None, 'tls_enabled': False, 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'}], 'id': '47f27c55-c943-42b8-aff9-ba66e038546e', 'name': 's_rally_bd5a7d75_B3i5ge52', 'description': '', 'provisioning_status': 'PENDING_CREATE', 'operating_status': 'OFFLINE', 'admin_state_up': True, 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'created_at': '2021-05-27T10:41:08', 'updated_at': None, 'vip_address': '10.0.2.163', 'vip_port_id': '36bbbd36-ab76-4a98-a8fa-aad705c34b63', 'vip_subnet_id': '4632fb1f-f322-47ed-bf2b-0d23bea49361', 'vip_network_id': '97728c6d-8dd9-4f2b-afc9-b3ea9dad2baa', 'provider': 'amphora', 'flavor_id': None, 'vip_qos_policy_id': None, 'tags': [], 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'} is active

LB2 is:
2021-05-27 10:42:26.376 524357 INFO octavia-fully-populated-loadbalancer [-] Loadbalancer {'listeners': [{'l7policies': [], 'id': 'ae2c5a8f-d5db-4ea8-8945-41b374eef850', 'name': 's_rally_bd5a7d75_qVdiWXcc', 'description': '', 'provisioning_status': 'PENDING_CREATE', 'operating_status': 'OFFLINE', 'admin_state_up': True, 'protocol': 'HTTP', 'protocol_port': 80, 'connection_limit': -1, 'default_tls_container_ref': None, 'sni_container_refs': [], 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'default_pool_id': 'ce6bb738-dd7d-4325-a368-9d0f5c6ba2a7', 'insert_headers': {}, 'created_at': '2021-05-27T10:41:10', 'updated_at': '2021-05-27T10:41:10', 'timeout_client_data': 50000, 'timeout_member_connect': 5000, 'timeout_member_data': 50000, 'timeout_tcp_inspect': 0, 'tags': [], 'client_ca_tls_container_ref': None, 'client_authentication': 'NONE', 'client_crl_container_ref': None, 'allowed_cidrs': None, 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'}], 'pools': [{'members': [{'id': '6577dd54-c133-4f74-9bfc-c617bae7693f', 'name': '', 'operating_status': 'NO_MONITOR', 'provisioning_status': 'PENDING_CREATE', 'admin_state_up': True, 'address': '10.2.2.102', 'protocol_port': 80, 'weight': 1, 'backup': False, 'subnet_id': 'e73be832-251c-4ba8-83ba-cd5827bccfb2', 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'created_at': '2021-05-27T10:41:10', 'updated_at': None, 'monitor_address': None, 'monitor_port': None, 'tags': [], 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'}], 'healthmonitor': None, 'id': 'ce6bb738-dd7d-4325-a368-9d0f5c6ba2a7', 'name': 's_rally_bd5a7d75_oqVzunbX', 'description': '', 'provisioning_status': 'PENDING_CREATE', 'operating_status': 'OFFLINE', 'admin_state_up': True, 'protocol': 'HTTP', 'lb_algorithm': 'ROUND_ROBIN', 'session_persistence': None, 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'listeners': [{'id': 'ae2c5a8f-d5db-4ea8-8945-41b374eef850'}], 'created_at': '2021-05-27T10:41:10', 'updated_at': None, 'tags': [], 'tls_container_ref': None, 'ca_tls_container_ref': None, 'crl_container_ref': None, 'tls_enabled': False, 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'}], 'id': 'efc66a90-8c3e-4c4f-a255-a9a75df9e264', 'name': 's_rally_bd5a7d75_uOZBiHxg', 'description': '', 'provisioning_status': 'PENDING_CREATE', 'operating_status': 'OFFLINE', 'admin_state_up': True, 'project_id': 'a35bb21f6111427487d1b2351ffa78bf', 'created_at': '2021-05-27T10:41:09', 'updated_at': None, 'vip_address': '10.0.0.159', 'vip_port_id': '0169927d-9f9a-408f-8df0-60bc322be8ee', 'vip_subnet_id': '4632fb1f-f322-47ed-bf2b-0d23bea49361', 'vip_network_id': '97728c6d-8dd9-4f2b-afc9-b3ea9dad2baa', 'provider': 'amphora', 'flavor_id': None, 'vip_qos_policy_id': None, 'tags': [], 'tenant_id': 'a35bb21f6111427487d1b2351ffa78bf'} is active

But the server list for both LBs is:
| 0646f21c-2d8a-4d6d-a3a6-d421a028046d | amphora-69fb5082-3dfc-48f0-8b98-4e9c80085f32 | ACTIVE | None       | Running     | lb-mgmt-net=172.24.2.160; private=10.0.1.186; s_rally_bd5a7d75_aNEBc3HV=10.2.1.10  | octavia-amphora-16.1-20210430.3.x86_64 | 4bf5a53e-9a20-4591-811e-2489a4f80a86 |             |           | nova              | compute-0.redhat.local |            |
| c11fc214-6b33-4ba0-a460-c7c54afb1d63 | amphora-abaeb36c-45af-45b5-bdf5-1c5fe0096d77 | ACTIVE | None       | Running     | lb-mgmt-net=172.24.2.249; private=10.0.0.4; s_rally_bd5a7d75_aNEBc3HV=10.2.1.178   | octavia-amphora-16.1-20210430.3.x86_64 | 4bf5a53e-9a20-4591-811e-2489a4f80a86 |             |           | nova              | compute-0.redhat.local |

As you can see above, 10.2.1.10 and 10.2.1.178 belong to the same subnet, even though LB2's pool member is on the 10.2.2.0/24 subnet (e73be832-251c-4ba8-83ba-cd5827bccfb2) of network b4afb30d-0ae0-4073-8797-2e13b21eae90.

In the neutron log we can see that nova requests the port creation using only the network-id, and neutron always chooses the first subnet for the port's IP address.
[root@controller-0 ~]# grep -inr "'network_id': 'b4afb30d-0ae0-4073-8797-2e13b21eae90', 'admin_state_up': True, 'tenant_id':" /var/log/containers/neutron/
/var/log/containers/neutron/server.log:28572:2021-05-27 10:41:55.683 29 DEBUG neutron.api.v2.base [req-40c5bd1a-5596-4444-9d44-f389903d0ebd 42e1d3f04c554636a6341f4945e32e88 976b0212bc8743b486d678d28912463d - default default] Request body: {'port': {'device_id': 'c11fc214-6b33-4ba0-a460-c7c54afb1d63', 'network_id': 'b4afb30d-0ae0-4073-8797-2e13b21eae90', 'admin_state_up': True, 'tenant_id': '976b0212bc8743b486d678d28912463d'}} prepare_request_body /usr/lib/python3.6/site-packages/neutron/api/v2/base.py:719
/var/log/containers/neutron/server.log:28677:2021-05-27 10:41:57.418 32 DEBUG neutron.api.v2.base [req-2cc5775e-1c93-4b05-b896-e7270ebe86c8 42e1d3f04c554636a6341f4945e32e88 976b0212bc8743b486d678d28912463d - default default] Request body: {'port': {'device_id': '0646f21c-2d8a-4d6d-a3a6-d421a028046d', 'network_id': 'b4afb30d-0ae0-4073-8797-2e13b21eae90', 'admin_state_up': True, 'tenant_id': '976b0212bc8743b486d678d28912463d'}} prepare_request_body /usr/lib/python3.6/site-packages/neutron/api/v2/base.py:719

Nova log entries for the corresponding ports:
2021-05-27 10:41:58.093 7 DEBUG nova.network.neutronv2.api [req-f7ebd060-d5f9-465a-8f9e-ae089a46e9eb 42e1d3f04c554636a6341f4945e32e88 976b0212bc8743b486d678d28912463d - default default] [instance: 0646f21c-2d8a-4d6d-a3a6-d421a028046d] Successfully created port: 0913d1d3-8e53-4f47-9631-7a18453cc0b3 _create_port_minimal /usr/lib/python3.6/site-packages/nova/network/neutronv2/api.py:477
2021-05-27 10:41:56.295 7 DEBUG nova.network.neutronv2.api [req-5fefaa59-8dbf-4314-a416-8f44cab63d4b 42e1d3f04c554636a6341f4945e32e88 976b0212bc8743b486d678d28912463d - default default] [instance: c11fc214-6b33-4ba0-a460-c7c54afb1d63] Successfully created port: 3aafbc0d-9621-41d9-b888-8a0394e8d142 _create_port_minimal /usr/lib/python3.6/site-packages/nova/network/neutronv2/api.py:477
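
The difference is easy to demonstrate against the Neutron API: a port created with only a network_id lets neutron pick the subnet, while fixed_ips pins the allocation to a specific subnet. Below is a minimal openstacksdk sketch of the two requests (illustrative only, not the nova/octavia code path; the clouds.yaml entry name 'overcloud' is an assumption):

import openstack

# Assumption: 'overcloud' is a configured entry in clouds.yaml.
conn = openstack.connect(cloud="overcloud")

network = conn.network.find_network("b4afb30d-0ae0-4073-8797-2e13b21eae90")
subnet2 = conn.network.get_subnet("e73be832-251c-4ba8-83ba-cd5827bccfb2")

# What nova does here: only the network is given, so neutron chooses
# the subnet itself (the first one, 11232f98-..., in this report).
port_any = conn.network.create_port(network_id=network.id)

# Pinned variant: fixed_ips forces the allocation onto subnet2.
port_pinned = conn.network.create_port(
    network_id=network.id,
    fixed_ips=[{"subnet_id": subnet2.id}],
)

print(port_any.fixed_ips)
print(port_pinned.fixed_ips)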

(overcloud) [stack@undercloud ~]$ openstack network show b4afb30d-0ae0-4073-8797-2e13b21eae90
+---------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                     | Value                                                                                                                                            |
+---------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up            | UP                                                                                                                                               |
| availability_zone_hints   |                                                                                                                                                  |
| availability_zones        |                                                                                                                                                  |
| created_at                | 2021-05-27T10:40:50Z                                                                                                                             |
| description               |                                                                                                                                                  |
| dns_domain                |                                                                                                                                                  |
| id                        | b4afb30d-0ae0-4073-8797-2e13b21eae90                                                                                                             |
| ipv4_address_scope        | None                                                                                                                                             |
| ipv6_address_scope        | None                                                                                                                                             |
| is_default                | None                                                                                                                                             |
| is_vlan_transparent       | None                                                                                                                                             |
| location                  | cloud='', project.domain_id=, project.domain_name=, project.id='a35bb21f6111427487d1b2351ffa78bf', project.name=, region_name='regionOne', zone= |
| mtu                       | 1442                                                                                                                                             |
| name                      | s_rally_bd5a7d75_aNEBc3HV                                                                                                                        |
| port_security_enabled     | True                                                                                                                                             |
| project_id                | a35bb21f6111427487d1b2351ffa78bf                                                                                                                 |
| provider:network_type     | geneve                                                                                                                                           |
| provider:physical_network | None                                                                                                                                             |
| provider:segmentation_id  | 11                                                                                                                                               |
| qos_policy_id             | None                                                                                                                                             |
| revision_number           | 3                                                                                                                                                |
| router:external           | Internal                                                                                                                                         |
| segments                  | None                                                                                                                                             |
| shared                    | False                                                                                                                                            |
| status                    | ACTIVE                                                                                                                                           |
| subnets                   | 11232f98-bb12-401c-8702-04e40058c32c, e73be832-251c-4ba8-83ba-cd5827bccfb2                                                                       |
| tags                      |                                                                                                                                                  |
| updated_at                | 2021-05-27T10:40:52Z                                                                                                                             |
+---------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+

(overcloud) [stack@undercloud ~]$ openstack port show 0913d1d3-8e53-4f47-9631-7a18453cc0b3
+-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                            |
+-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                               |
| allowed_address_pairs   |                                                                                                                                                  |
| binding_host_id         | compute-0.redhat.local                                                                                                                           |
| binding_profile         |                                                                                                                                                  |
| binding_vif_details     | port_filter='True'                                                                                                                               |
| binding_vif_type        | ovs                                                                                                                                              |
| binding_vnic_type       | normal                                                                                                                                           |
| created_at              | 2021-05-27T10:41:57Z                                                                                                                             |
| data_plane_status       | None                                                                                                                                             |
| description             |                                                                                                                                                  |
| device_id               | 0646f21c-2d8a-4d6d-a3a6-d421a028046d                                                                                                             |
| device_owner            | compute:nova                                                                                                                                     |
| dns_assignment          | fqdn='host-10-2-1-10.openstacklocal.', hostname='host-10-2-1-10', ip_address='10.2.1.10'                                                         |
| dns_domain              | None                                                                                                                                             |
| dns_name                |                                                                                                                                                  |
| extra_dhcp_opts         |                                                                                                                                                  |
| fixed_ips               | ip_address='10.2.1.10', subnet_id='11232f98-bb12-401c-8702-04e40058c32c'                                                                         |
| id                      | 0913d1d3-8e53-4f47-9631-7a18453cc0b3                                                                                                             |
| location                | cloud='', project.domain_id=, project.domain_name=, project.id='976b0212bc8743b486d678d28912463d', project.name=, region_name='regionOne', zone= |
| mac_address             | fa:16:3e:a6:ec:e0                                                                                                                                |
| name                    |                                                                                                                                                  |
| network_id              | b4afb30d-0ae0-4073-8797-2e13b21eae90                                                                                                             |
| port_security_enabled   | True                                                                                                                                             |
| project_id              | 976b0212bc8743b486d678d28912463d                                                                                                                 |
| propagate_uplink_status | None                                                                                                                                             |
| qos_policy_id           | None                                                                                                                                             |
| resource_request        | None                                                                                                                                             |
| revision_number         | 4                                                                                                                                                |
| security_group_ids      | 948a1b2e-c6ca-4ea1-8ab8-916e4a107e82                                                                                                             |
| status                  | ACTIVE                                                                                                                                           |
| tags                    |                                                                                                                                                  |
| trunk_details           | None                                                                                                                                             |
| updated_at              | 2021-05-27T10:42:00Z                                                                                                                             |
+-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+

(overcloud) [stack@undercloud ~]$ openstack port show 3aafbc0d-9621-41d9-b888-8a0394e8d142
+-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                            |
+-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                               |
| allowed_address_pairs   |                                                                                                                                                  |
| binding_host_id         | compute-0.redhat.local                                                                                                                           |
| binding_profile         |                                                                                                                                                  |
| binding_vif_details     | port_filter='True'                                                                                                                               |
| binding_vif_type        | ovs                                                                                                                                              |
| binding_vnic_type       | normal                                                                                                                                           |
| created_at              | 2021-05-27T10:41:55Z                                                                                                                             |
| data_plane_status       | None                                                                                                                                             |
| description             |                                                                                                                                                  |
| device_id               | c11fc214-6b33-4ba0-a460-c7c54afb1d63                                                                                                             |
| device_owner            | compute:nova                                                                                                                                     |
| dns_assignment          | fqdn='host-10-2-1-178.openstacklocal.', hostname='host-10-2-1-178', ip_address='10.2.1.178'                                                      |
| dns_domain              | None                                                                                                                                             |
| dns_name                |                                                                                                                                                  |
| extra_dhcp_opts         |                                                                                                                                                  |
| fixed_ips               | ip_address='10.2.1.178', subnet_id='11232f98-bb12-401c-8702-04e40058c32c'                                                                        |
| id                      | 3aafbc0d-9621-41d9-b888-8a0394e8d142                                                                                                             |
| location                | cloud='', project.domain_id=, project.domain_name=, project.id='976b0212bc8743b486d678d28912463d', project.name=, region_name='regionOne', zone= |
| mac_address             | fa:16:3e:06:2b:45                                                                                                                                |
| name                    |                                                                                                                                                  |
| network_id              | b4afb30d-0ae0-4073-8797-2e13b21eae90                                                                                                             |
| port_security_enabled   | True                                                                                                                                             |
| project_id              | 976b0212bc8743b486d678d28912463d                                                                                                                 |
| propagate_uplink_status | None                                                                                                                                             |
| qos_policy_id           | None                                                                                                                                             |
| resource_request        | None                                                                                                                                             |
| revision_number         | 4                                                                                                                                                |
| security_group_ids      | 948a1b2e-c6ca-4ea1-8ab8-916e4a107e82                                                                                                             |
| status                  | ACTIVE                                                                                                                                           |
| tags                    |                                                                                                                                                  |
| trunk_details           | None                                                                                                                                             |
| updated_at              | 2021-05-27T10:41:58Z                                                                                                                             |
+-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+

Since Octavia accepts an address and a subnet_id as user input for a pool member, it should create the haproxy port on that same subnet rather than on another subnet of the network.
pool_args = {
    "name": pool_name,
    "protocol": protocol,
    "lb_algorithm": "ROUND_ROBIN",
    "members": [
        {
            "address": mem_addr,
            "subnet_id": subnet_id,
            "protocol_port": 80
        }
    ]
}
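
A sketch of what is being asked for, using openstacksdk: create the amphora's member-network port pinned to the member's subnet and attach it to the VM, instead of passing only the network to nova. This is illustrative only, not the Octavia network driver or the shipped fix; plug_member_subnet, amphora_compute_id and member are hypothetical names:

import openstack


def plug_member_subnet(conn, amphora_compute_id, member):
    """Plug the amphora into the member's subnet, not just its network.

    Hypothetical helper: 'member' is any object carrying the subnet_id
    that the user supplied at member-create time.
    """
    subnet = conn.network.get_subnet(member.subnet_id)
    port = conn.network.create_port(
        network_id=subnet.network_id,
        # The essential part: pin the IP allocation to the member's
        # subnet instead of letting neutron choose one on the network.
        fixed_ips=[{"subnet_id": subnet.id}],
    )
    # Attach the pre-created port to the amphora VM.
    conn.compute.create_server_interface(amphora_compute_id, port_id=port.id)
    return port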

Comment 2 anil venkata 2021-06-04 08:00:48 UTC
Octavia can create a port on the subnet, as below, and then request that nova use this port for the VM:
            port_create_args["fixed_ips"] = [{'subnet_id': subnet["subnet"]["id"]}]
            port_create_args["network_id"] = network["network"]["id"]
            port = self._create_port(network, port_create_args)
            kwargs["nics"].append({'port-id': port['port']['id']})
self._boot_server(image, flavor, key_name=self.context["user"]["keypair"]["name"], **kwargs)
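
In other words, pre-creating the port with fixed_ips and handing it to nova through nics sidesteps nova's minimal port-create path, which passes only the network_id (as seen in the neutron request bodies above).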

We use this in our testing to get an IP address on the specific subnet inside the VM.

Comment 3 Michael Johnson 2021-06-04 15:16:19 UTC
Anil,

Octavia actually manages ports much better than you described in comment 2. It allows users to specify members on any network they have access to, dynamically, at any time during the lifecycle of a load balancer.

So, maybe I don't understand your comment.

Comment 4 Michael Johnson 2021-06-04 15:19:11 UTC
Can you provide the command lines you used when creating each member on the two load balancers?
Also, please provide the "openstack loadbalancer member show" output for each member.

Comment 5 anil venkata 2021-06-04 15:36:54 UTC
Created attachment 1789005 [details]
rally test case

Comment 6 Michael Johnson 2021-06-04 16:02:38 UTC
That rally code snippet does not provide the required information.

Can you provide the sosreport for the environment?

We need to see the networks, subnets, nova instances, and member configurations.

Comment 7 anil venkata 2021-06-08 08:54:18 UTC
Complete CLI commands with output at https://github.com/venkataanil/files/blob/master/bz1965308.txt

Logs are at http://perf1.perf.lab.eng.bos.redhat.com/pub/asyedham/BZ-1965308/

As you can see in the CLI commands above, we created two load balancers, lb-bz and lb-bz1. lb-bz's pool bzpool1 has member 91.0.0.4 on subnet ae4d80e0-89c5-4d18-8474-97e79cd1ca26.
Similarly, lb-bz1's pool bz1pool2 has member 92.0.0.5 on subnet 45239900-0f4c-48ab-8a4d-d4c517729e80.
But bzpool1's amphora haproxy has an IP address on the 92.0.0.0/24 subnet instead of 91.0.0.0/24.

Comment 8 Gregory Thiemonge 2021-06-25 11:05:44 UTC
I reproduced a similar behavior:

$ openstack network create --internal member
$ openstack subnet create --subnet-range 10.1.0.0/24 --allocation-pool start=10.1.0.10,end=10.1.0.100 --network member member-subnet1
$ openstack subnet create --subnet-range 10.2.0.0/24 --allocation-pool start=10.2.0.10,end=10.2.0.100 --network member member-subnet2
$ openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
$ openstack loadbalancer create --name lb2 --vip-subnet-id private-subnet
$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
$ openstack loadbalancer member create --address 10.1.0.9 --subnet-id member-subnet1 --protocol-port 8080 --name member1 pool1
$ openstack loadbalancer listener create --name listener2 --protocol HTTP --protocol-port 80 lb2
$ openstack loadbalancer pool create --name pool2 --lb-algorithm ROUND_ROBIN --listener listener2 --protocol HTTP
$ openstack loadbalancer member create --address 10.2.0.9 --subnet-id member-subnet2 --protocol-port 8080 --name member2 pool2
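
To check which subnet each member-network port actually landed on from the API side (without logging into the amphorae), it is enough to dump the ports on the member network and map their fixed_ips back to the subnet CIDRs. A small openstacksdk sketch, assuming the resource names from the reproducer above and a clouds.yaml entry named 'overcloud':

import openstack

conn = openstack.connect(cloud="overcloud")

net = conn.network.find_network("member")
cidrs = {s.id: s.cidr for s in conn.network.subnets(network_id=net.id)}

# The amphora member ports are plugged by nova, so they show up with
# device_owner compute:nova (as in the 'openstack port show' dumps above).
for port in conn.network.ports(network_id=net.id):
    for ip in port.fixed_ips:
        print(port.id, port.device_owner,
              ip["ip_address"], cidrs.get(ip["subnet_id"]))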


in lb1:

in /var/lib/octavia/749375d4-45b5-4217-8111-ec5b4a972a5d/haproxy.cfg

server 73d599c7-ba1a-4a52-94ac-c329abaa35e3 10.1.0.9:8080 weight 1

bash-4.4# ip -n amphora-haproxy a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:d9:38:9e brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.56/26 brd 10.0.0.63 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.0.0.22/26 brd 10.0.0.63 scope global secondary eth1:0
       valid_lft forever preferred_lft forever
    inet6 fd49:970f:4bc6:0:f816:3eff:fed9:389e/64 scope global dynamic mngtmpaddr 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:17:39:ee brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.16/24 brd 10.1.0.255 scope global eth2
       valid_lft forever preferred_lft forever


in lb2:

/var/lib/octavia/c69eaa8e-4355-4124-abb3-80f42c6001dc/haproxy.cfg:

server d4d64404-5cdf-44fd-8f17-44d5e484a16a 10.2.0.9:8080 weight 1

bash-4.4# ip -n amphora-haproxy a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:ac:e4:1e brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.6/26 brd 10.0.0.63 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.0.0.49/26 brd 10.0.0.63 scope global secondary eth1:0
       valid_lft forever preferred_lft forever
    inet6 fd49:970f:4bc6:0:f816:3eff:feac:e41e/64 scope global dynamic mngtmpaddr 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:04:78:94 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.100/24 brd 10.1.0.255 scope global eth2
       valid_lft forever preferred_lft forever

bash-4.4# ip -n amphora-haproxy r
default via 10.0.0.1 dev eth1 
10.0.0.0/26 dev eth1 proto kernel scope link src 10.0.0.6 
10.1.0.0/24 dev eth2 proto kernel scope link src 10.1.0.100 


In lb2, eth2 is plugged into the wrong subnet (10.1.0.0/24 instead of the member's 10.2.0.0/24), and there is no route to the member.

Comment 21 Omer Schwartz 2023-02-06 11:57:14 UTC
On a host with puddle RHOS-17.1-RHEL-9-20230131.n.2

I created the Octavia resources using the same steps mentioned in comment #8 https://bugzilla.redhat.com/show_bug.cgi?id=1965308#c8

$ openstack network create --internal member
$ openstack subnet create --subnet-range 10.1.0.0/24 --allocation-pool start=10.1.0.10,end=10.1.0.100 --network member member-subnet1
$ openstack subnet create --subnet-range 10.2.0.0/24 --allocation-pool start=10.2.0.10,end=10.2.0.100 --network member member-subnet2
$ openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
$ openstack loadbalancer create --name lb2 --vip-subnet-id private-subnet
$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
$ openstack loadbalancer member create --address 10.1.0.9 --subnet-id member-subnet1 --protocol-port 8080 --name member1 pool1
$ openstack loadbalancer listener create --name listener2 --protocol HTTP --protocol-port 80 lb2
$ openstack loadbalancer pool create --name pool2 --lb-algorithm ROUND_ROBIN --listener listener2 --protocol HTTP
$ openstack loadbalancer member create --address 10.2.0.9 --subnet-id member-subnet2 --protocol-port 8080 --name member2 pool2



LB1:

[root@amphora-131469d3-d779-4bd7-8c9b-1ab1ad6ab85b ~]# cat /var/lib/octavia/f46f8b8a-d75c-487f-9bab-5b036a4600ca/haproxy.cfg | grep weight
    server d58f9f63-d6c4-4d69-8e1f-22ec84365b58 10.1.0.9:8080 weight 1

I.e., the member address is 10.1.0.9.


[root@amphora-131469d3-d779-4bd7-8c9b-1ab1ad6ab85b ~]# ip -n amphora-haproxy a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:07:b2:f7 brd ff:ff:ff:ff:ff:ff
    altname enp7s0
    inet 10.0.64.33/26 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.0.64.8/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fd47:e41c:f56e:0:f816:3eff:fe07:b2f7/64 scope global dynamic mngtmpaddr 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:c9:71:31 brd ff:ff:ff:ff:ff:ff
    altname enp8s0
    inet 10.1.0.94/24 scope global eth2
       valid_lft forever preferred_lft forever

# eth2 is plugged into the correct subnet on LB1's amphora


[root@amphora-131469d3-d779-4bd7-8c9b-1ab1ad6ab85b ~]# ip -n amphora-haproxy r
default via 10.0.64.1 dev eth1 proto static onlink 
10.0.64.0/26 dev eth1 proto kernel scope link src 10.0.64.33 
10.1.0.0/24 dev eth2 proto kernel scope link src 10.1.0.94

# eth2 has a route to the member on LB1's amphora




LB2:

[root@amphora-6b4e40d7-f0a8-413e-b455-ee6789319956 ~]# cat /var/lib/octavia/2a65870f-2df2-4686-81d7-c1f91ad8d689/haproxy.cfg | grep weight
    server 0aaf1ce9-64cb-4b6d-93b5-5ba8219cc9c2 10.2.0.9:8080 weight 1

I.e., the member address is 10.2.0.9.

[root@amphora-6b4e40d7-f0a8-413e-b455-ee6789319956 ~]# ip -n amphora-haproxy a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:46:7c:ca brd ff:ff:ff:ff:ff:ff
    altname enp7s0
    inet 10.0.64.57/26 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.0.64.37/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fd47:e41c:f56e:0:f816:3eff:fe46:7cca/64 scope global dynamic mngtmpaddr 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:8e:10:23 brd ff:ff:ff:ff:ff:ff
    altname enp8s0
    inet 10.2.0.49/24 scope global eth2
       valid_lft forever preferred_lft forever

# eth2 is plugged into the correct subnet on LB2's amphora

[root@amphora-6b4e40d7-f0a8-413e-b455-ee6789319956 ~]# ip -n amphora-haproxy r
default via 10.0.64.1 dev eth1 proto static onlink 
10.0.64.0/26 dev eth1 proto kernel scope link src 10.0.64.57 
10.2.0.0/24 dev eth2 proto kernel scope link src 10.2.0.49

# eth2 has a route to the member on LB2's amphora


Looks good to me; I am moving the BZ status to VERIFIED.

Comment 31 errata-xmlrpc 2023-08-16 01:10:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.1 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:4577