Description of problem:

I have deployed OSP13 with ODL and Octavia following our documentation. While an Octavia-based load balancer works fine over the VIP network, it cannot be reached over its floating IP. I have checked the flows on the compute node (there is only one) where both the amphora and the regular instances are hosted, all with floating IPs assigned, and I can't find the relevant flows configured for the amphora.

In the example below I've created two webnodes which are members of the LB's pool. Each of them also has a floating IP assigned.

(overcloud) [stack@director ~]$ openstack server list --all-projects --long -c ID -c Name -c Networks -c Host
+--------------------------------------+----------------------------------------------+-----------------------------------------------------+---------------------------------------------+
| ID                                   | Name                                         | Networks                                            | Host                                        |
+--------------------------------------+----------------------------------------------+-----------------------------------------------------+---------------------------------------------+
| 1545a156-7c3f-4854-b225-9faa2af08a8d | amphora-46dad6f5-5c3f-4ffd-a66e-7b3425876b20 | lb-mgmt-net=172.24.0.10; net-internal=192.168.5.107 | overcloud-compute-0.openstack.lab.rhpoc.net |
| 52e7a437-8f1c-4f7b-8df4-5adb6f12d685 | webnode-1                                    | net-internal=192.168.5.108, 192.168.122.121         | overcloud-compute-0.openstack.lab.rhpoc.net |
| a8ed55ff-61a8-4c54-9c8f-643c7ea05706 | webnode-2                                    | net-internal=192.168.5.104, 192.168.122.115         | overcloud-compute-0.openstack.lab.rhpoc.net |
+--------------------------------------+----------------------------------------------+-----------------------------------------------------+---------------------------------------------+

The load balancer reports a different VIP below (192.168.5.111) than the address assigned to the amphora instance above (192.168.5.107), but I assume that is expected:

(overcloud) [stack@director ~]$ openstack loadbalancer list
+--------------------------------------+-----------------+----------------------------------+---------------+---------------------+----------+
| id                                   | name            | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+-----------------+----------------------------------+---------------+---------------------+----------+
| ab4a7bd8-7d7d-43f1-8479-ed1e2a64a4dc | Load Balancer 1 | 0ba3e42690574b099d5791856332b9a3 | 192.168.5.111 | ACTIVE              | octavia  |
+--------------------------------------+-----------------+----------------------------------+---------------+---------------------+----------+

(overcloud) [stack@director ~]$ openstack loadbalancer show "Load Balancer 1"
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2018-07-25T09:28:11                  |
| description         |                                      |
| flavor              |                                      |
| id                  | ab4a7bd8-7d7d-43f1-8479-ed1e2a64a4dc |
| listeners           | d0e7dd79-f17d-4a0d-9e64-845d69b23c61 |
| name                | Load Balancer 1                      |
| operating_status    | ONLINE                               |
| pools               | ac4a51c7-be3d-40cf-a728-e65e95a78ac5 |
| project_id          | 0ba3e42690574b099d5791856332b9a3     |
| provider            | octavia                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2018-07-25T09:29:23                  |
| vip_address         | 192.168.5.111                        |
| vip_network_id      | 0ce702be-f2fc-418b-924e-4dace657ccd7 |
| vip_port_id         | 4d1eddac-c93a-4103-928e-37ebf29424dc |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 5cdddd34-84c4-4187-8f39-9f79d2f94abb |
+---------------------+--------------------------------------+

The floating IP (192.168.122.122) is mapped to the VIP's fixed IP address (192.168.5.111), which also looks correct:

(overcloud) [stack@director ~]$ openstack floating ip list --long
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+--------------------------------------+--------+-------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          | Router                               | Status | Description |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+--------------------------------------+--------+-------------+
| 67b29c94-21ee-4dd2-ad27-474f93cf24f7 | 192.168.122.115     | 192.168.5.104    | 5b2a0589-1f42-411e-81a7-277dc9a6e080 | 9da81c7f-0224-4913-99be-d7109b7cac14 | 0ba3e42690574b099d5791856332b9a3 | b4ab5fea-5b51-4c09-8a1f-1eb5a6f7f10b | ACTIVE |             |
| d5cadb47-8afe-42c7-a9e9-0178c4c41319 | 192.168.122.122     | 192.168.5.111    | 4d1eddac-c93a-4103-928e-37ebf29424dc | 9da81c7f-0224-4913-99be-d7109b7cac14 | 0ba3e42690574b099d5791856332b9a3 | b4ab5fea-5b51-4c09-8a1f-1eb5a6f7f10b | ACTIVE |             |
| ea015677-2270-4c6e-a5c1-8da92467e154 | 192.168.122.121     | 192.168.5.108    | 34256e32-a881-4e4e-ab40-7610f9578890 | 9da81c7f-0224-4913-99be-d7109b7cac14 | 0ba3e42690574b099d5791856332b9a3 | b4ab5fea-5b51-4c09-8a1f-1eb5a6f7f10b | ACTIVE |             |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+--------------------------------------+--------+-------------+

However, if I list the ports of the amphora, the VIP is not listed:

(overcloud) [stack@director ~]$ openstack port list --device-id 1545a156-7c3f-4854-b225-9faa2af08a8d --long
+--------------------------------------+------------------------------------------------------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+--------------+------+
| ID                                   | Name                                                 | MAC Address       | Fixed IP Addresses                                                           | Status | Security Groups                      | Device Owner | Tags |
+--------------------------------------+------------------------------------------------------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+--------------+------+
| 3ace0989-860c-4eaa-b9fb-be7b1631be3b |                                                      | fa:16:3e:6d:d8:db | ip_address='172.24.0.10', subnet_id='7d766c84-056a-4486-b138-18be3cba8054'   | ACTIVE | 8384bab1-eb6c-46e7-836d-b599b2e7a34e | compute:nova |      |
| 6b7eca2b-8c98-489a-971e-00494c178e3f | octavia-lb-vrrp-46dad6f5-5c3f-4ffd-a66e-7b3425876b20 | fa:16:3e:23:c0:48 | ip_address='192.168.5.107', subnet_id='5cdddd34-84c4-4187-8f39-9f79d2f94abb' | ACTIVE | e02831b3-e041-471c-b236-7afbb200748c | compute:nova |      |
+--------------------------------------+------------------------------------------------------+-------------------+------------------------------------------------------------------------------+--------+--------------------------------------+--------------+------+

Instead, the VIP is configured on an unbound port:

(overcloud) [stack@director ~]$ openstack port show 4d1eddac-c93a-4103-928e-37ebf29424dc
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | DOWN                                                                         |
| allowed_address_pairs |                                                                              |
| binding_host_id       |                                                                              |
| binding_profile       |                                                                              |
| binding_vif_details   |                                                                              |
| binding_vif_type      | unbound                                                                      |
| binding_vnic_type     | normal                                                                       |
| created_at            | 2018-07-25T09:28:11Z                                                         |
| data_plane_status     | None                                                                         |
| description           |                                                                              |
| device_id             | lb-ab4a7bd8-7d7d-43f1-8479-ed1e2a64a4dc                                      |
| device_owner          | Octavia                                                                      |
| dns_assignment        | None                                                                         |
| dns_name              | None                                                                         |
| extra_dhcp_opts       |                                                                              |
| fixed_ips             | ip_address='192.168.5.111', subnet_id='5cdddd34-84c4-4187-8f39-9f79d2f94abb' |
| id                    | 4d1eddac-c93a-4103-928e-37ebf29424dc                                         |
| ip_address            | None                                                                         |
| mac_address           | fa:16:3e:9d:7d:00                                                            |
| name                  | octavia-lb-ab4a7bd8-7d7d-43f1-8479-ed1e2a64a4dc                              |
| network_id            | 0ce702be-f2fc-418b-924e-4dace657ccd7                                         |
| option_name           | None                                                                         |
| option_value          | None                                                                         |
| port_security_enabled | True                                                                         |
| project_id            | 0ba3e42690574b099d5791856332b9a3                                             |
| qos_policy_id         | None                                                                         |
| revision_number       | 8                                                                            |
| security_group_ids    | e02831b3-e041-471c-b236-7afbb200748c                                         |
| status                | DOWN                                                                         |
| subnet_id             | None                                                                         |
| tags                  |                                                                              |
| trunk_details         | None                                                                         |
| updated_at            | 2018-07-25T09:28:34Z                                                         |
+-----------------------+------------------------------------------------------------------------------+

And here is what really bothers me: if we check the flows configured on the compute node for the floating IPs of the regular instances (192.168.122.121 and 192.168.122.115), they are there:

[root@overcloud-compute-0 heat-admin]# ovs-appctl bridge/dump-flows br-int | grep 192.168.122.121
table_id=21, duration=2238s, n_packets=6676, n_bytes=87564191, priority=42,ip,metadata=0x30d42/0xfffffe,nw_dst=192.168.122.121,actions=set_field:fa:16:3e:19:00:95->eth_dst,goto_table:25
table_id=25, duration=2237s, n_packets=6676, n_bytes=87564191, priority=10,ip,dl_dst=fa:16:3e:19:00:95,nw_dst=192.168.122.121,actions=set_field:192.168.5.108->ip_dst,write_metadata:0x30d46/0xfffffe,goto_table:27
table_id=26, duration=2237s, n_packets=4069, n_bytes=307927, priority=10,ip,metadata=0x30d46/0xfffffe,nw_src=192.168.5.108,actions=set_field:192.168.122.121->ip_src,write_metadata:0x30d42/0xfffffe,goto_table:28
table_id=28, duration=2237s, n_packets=4069, n_bytes=307927, priority=10,ip,metadata=0x30d42/0xfffffe,nw_src=192.168.122.121,actions=set_field:fa:16:3e:19:00:95->eth_src,resubmit(,21)
table_id=81, duration=2237s, n_packets=39, n_bytes=1794, priority=100,arp,metadata=0x3346138a000000/0xfffffffff000000,arp_tpa=192.168.122.121,arp_op=1,actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:19:00:95->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e190095->NXM_NX_ARP_SHA[],load:0xc0a87a79->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0x334600->NXM_NX_REG6[],write_metadata:0/0x1,goto_table:220

[root@overcloud-compute-0 heat-admin]# ovs-appctl bridge/dump-flows br-int | grep 192.168.122.115
table_id=21, duration=2218s, n_packets=398, n_bytes=2314317, priority=42,ip,metadata=0x30d42/0xfffffe,nw_dst=192.168.122.115,actions=set_field:fa:16:3e:89:a2:6e->eth_dst,goto_table:25
table_id=25, duration=2218s, n_packets=398, n_bytes=2314317, priority=10,ip,dl_dst=fa:16:3e:89:a2:6e,nw_dst=192.168.122.115,actions=set_field:192.168.5.104->ip_dst,write_metadata:0x30d46/0xfffffe,goto_table:27
table_id=26, duration=2218s, n_packets=380, n_bytes=34043, priority=10,ip,metadata=0x30d46/0xfffffe,nw_src=192.168.5.104,actions=set_field:192.168.122.115->ip_src,write_metadata:0x30d42/0xfffffe,goto_table:28
table_id=28, duration=2218s, n_packets=380, n_bytes=34043, priority=10,ip,metadata=0x30d42/0xfffffe,nw_src=192.168.122.115,actions=set_field:fa:16:3e:89:a2:6e->eth_src,resubmit(,21)
table_id=81, duration=2218s, n_packets=36, n_bytes=1656, priority=100,arp,metadata=0x3346138a000000/0xfffffffff000000,arp_tpa=192.168.122.115,arp_op=1,actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:89:a2:6e->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e89a26e->NXM_NX_ARP_SHA[],load:0xc0a87a73->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0x334600->NXM_NX_REG6[],write_metadata:0/0x1,goto_table:220

But there is no trace of the load balancer's floating IP:

[root@overcloud-compute-0 heat-admin]# ovs-appctl bridge/dump-flows br-int | grep 192.168.122.122
[root@overcloud-compute-0 heat-admin]#

The same happens with an unbound port created manually:

(overcloud) [stack@director ~]$ openstack port create --network net-internal my-test-port
+-----------------------+------------------------------------------------------------------------------+
| Field                 | Value                                                                        |
+-----------------------+------------------------------------------------------------------------------+
| admin_state_up        | UP                                                                           |
| allowed_address_pairs |                                                                              |
| binding_host_id       |                                                                              |
| binding_profile       |                                                                              |
| binding_vif_details   |                                                                              |
| binding_vif_type      | unbound                                                                      |
| binding_vnic_type     | normal                                                                       |
| created_at            | 2018-07-25T10:14:58Z                                                         |
| data_plane_status     | None                                                                         |
| description           |                                                                              |
| device_id             |                                                                              |
| device_owner          |                                                                              |
| dns_assignment        | None                                                                         |
| dns_name              | None                                                                         |
| extra_dhcp_opts       |                                                                              |
| fixed_ips             | ip_address='192.168.5.106', subnet_id='5cdddd34-84c4-4187-8f39-9f79d2f94abb' |
| id                    | 36d50366-e654-433b-a70a-ff061f5fd80f                                         |
| ip_address            | None                                                                         |
| mac_address           | fa:16:3e:67:34:e4                                                            |
| name                  | my-test-port                                                                 |
| network_id            | 0ce702be-f2fc-418b-924e-4dace657ccd7                                         |
| option_name           | None                                                                         |
| option_value          | None                                                                         |
| port_security_enabled | True                                                                         |
| project_id            | 0ba3e42690574b099d5791856332b9a3                                             |
| qos_policy_id         | None                                                                         |
| revision_number       | 6                                                                            |
| security_group_ids    | f1c89a3b-14ad-4bd8-9692-fa9cd4e2e4fd                                         |
| status                | DOWN                                                                         |
| subnet_id             | None                                                                         |
| tags                  |                                                                              |
| trunk_details         | None                                                                         |
| updated_at            | 2018-07-25T10:14:59Z                                                         |
+-----------------------+------------------------------------------------------------------------------+

(overcloud) [stack@director ~]$ openstack floating ip create net-external
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-07-25T10:15:49Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.122.118                      |
| floating_network_id | 9da81c7f-0224-4913-99be-d7109b7cac14 |
| id                  | fbaf172c-c11b-4c05-a40d-b679e41f6512 |
| name                | 192.168.122.118                      |
| port_id             | None                                 |
| project_id          | 0ba3e42690574b099d5791856332b9a3     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| updated_at          | 2018-07-25T10:15:49Z                 |
+---------------------+--------------------------------------+

(overcloud) [stack@director ~]$ openstack floating ip set --port my-test-port 192.168.122.118

[root@overcloud-compute-0 heat-admin]# ovs-appctl bridge/dump-flows br-int | grep 192.168.122.118
[root@overcloud-compute-0 heat-admin]#

As soon as I attach the port to the instance:

(overcloud) [stack@director ~]$ openstack server add port webnode-1 my-test-port

the expected flows show up on the compute node:

[root@overcloud-compute-0 heat-admin]# ovs-appctl bridge/dump-flows br-int | grep 192.168.122.118
table_id=21, duration=2s, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d42/0xfffffe,nw_dst=192.168.122.118,actions=set_field:fa:16:3e:d8:4e:7f->eth_dst,goto_table:25
table_id=25, duration=1s, n_packets=0, n_bytes=0, priority=10,ip,dl_dst=fa:16:3e:d8:4e:7f,nw_dst=192.168.122.118,actions=set_field:192.168.5.106->ip_dst,write_metadata:0x30d46/0xfffffe,goto_table:27
table_id=26, duration=1s, n_packets=0, n_bytes=0, priority=10,ip,metadata=0x30d46/0xfffffe,nw_src=192.168.5.106,actions=set_field:192.168.122.118->ip_src,write_metadata:0x30d42/0xfffffe,goto_table:28
table_id=28, duration=1s, n_packets=0, n_bytes=0, priority=10,ip,metadata=0x30d42/0xfffffe,nw_src=192.168.122.118,actions=set_field:fa:16:3e:d8:4e:7f->eth_src,resubmit(,21)
table_id=81, duration=1s, n_packets=0, n_bytes=0, priority=100,arp,metadata=0x3346138a000000/0xfffffffff000000,arp_tpa=192.168.122.118,arp_op=1,actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:d8:4e:7f->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163ed84e7f->NXM_NX_ARP_SHA[],load:0xc0a87a76->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0x334600->NXM_NX_REG6[],write_metadata:0/0x1,goto_table:220

I tried to add the VIP port to the amphora instance manually, but it didn't work:

(overcloud) [stack@director ~]$ openstack server add port 1545a156-7c3f-4854-b225-9faa2af08a8d 4d1eddac-c93a-4103-928e-37ebf29424dc
Port 4d1eddac-c93a-4103-928e-37ebf29424dc not usable for instance 1545a156-7c3f-4854-b225-9faa2af08a8d. (HTTP 400) (Request-ID: req-b02fd5c1-66ab-4f9c-b95d-8576c2d3305f)

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
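For reference, the flow dumps above implement a simple per-FIP 1:1 NAT: table 25 rewrites the floating IP to the fixed IP on the way in (DNAT), and table 26 rewrites the fixed IP back to the floating IP on the way out (SNAT). The following Python sketch models that mapping using the addresses from this report; it is an illustration of what the flows do, not ODL code. The missing entry for the VIP is exactly what makes 192.168.122.122 unreachable.

```python
# Model of the per-FIP NAT implemented by tables 25/26 in the dumps above.
# ODL only installs these entries for FIPs whose neutron port is bound to a
# VM on the switch; the LB VIP port is unbound, so its entry never appears.

FIP_TO_FIXED = {
    "192.168.122.121": "192.168.5.108",  # webnode-1 (flows present)
    "192.168.122.115": "192.168.5.104",  # webnode-2 (flows present)
    # "192.168.122.122": "192.168.5.111",  # LB VIP: never programmed
}

def dnat(dst_ip):
    """Table 25: rewrite floating IP -> fixed IP; None means no flow matched."""
    return FIP_TO_FIXED.get(dst_ip)

def snat(src_ip):
    """Table 26: rewrite fixed IP -> floating IP; None means no flow matched."""
    fixed_to_fip = {fixed: fip for fip, fixed in FIP_TO_FIXED.items()}
    return fixed_to_fip.get(src_ip)
```

With this model, `dnat("192.168.122.122")` returns None, matching the empty grep for the load balancer's floating IP above.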
Analysis:

An Octavia VIP is a neutron port that is not bound to any VM and is therefore not added to br-int. The VM containing the active haproxy sends gratuitous ARPs for the VIP's IP; ODL intercepts those and programs flows that forward traffic for the VIP to that VM's port. Note that this is my understanding of how this all works; I have not validated it myself, but the reporter of this bug confirms that it works this way.

The ODL code responsible for configuring the FIP association flows on OVS currently relies on a southbound OpenFlow port that corresponds to the neutron FIP port. The only real reason this is required is so that ODL can decide which switch should get the flows (see FloatingIPListener#createNATFlowEntries). In the case of the VIP port, there is no corresponding southbound port, so the flows never get configured.

A possible solution, the details of which still need to be worked out, is as follows. In our case, ODL can learn which switch to program the flows on from the gratuitous-ARP packet-in event, which will come from the right switch (we already listen for those). So, basically, we respond to the gratuitous ARP by correlating it with the neutron port, checking that the port is an Octavia VIP (via the device_owner field), and programming the flows. Two pieces of housekeeping need to go along with this: (a) the gratuitous ARPs are sent continuously, so we do not want to constantly reprogram the flows, and (b) when the VM sending the gratuitous ARPs changes (failover), the old flows need to be erased.
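The proposed bookkeeping could look roughly like the following Python sketch (the real netvirt fix is Java; all class, method, and parameter names here are invented for illustration, only the "Octavia" device_owner value comes from the port dump above). On each gratuitous-ARP packet-in, look up the neutron port by IP, check that it is an Octavia VIP, and only (re)program the FIP flows when the advertising switch or MAC actually changes, tearing down the stale flows first on failover:

```python
# Hypothetical sketch of GARP-driven VIP flow programming; names are invented.

class VipFlowManager:
    def __init__(self, neutron, flow_writer):
        self.neutron = neutron          # lookup service: ip -> neutron port dict
        self.flow_writer = flow_writer  # programs/removes OVS flows on a switch
        self.programmed = {}            # vip_ip -> (switch_id, mac) currently active

    def on_gratuitous_arp(self, vip_ip, mac, switch_id):
        port = self.neutron.get_port_by_ip(vip_ip)
        if port is None or port.get("device_owner") != "Octavia":
            return  # not an Octavia VIP port; ignore the GARP
        current = self.programmed.get(vip_ip)
        if current == (switch_id, mac):
            return  # (a) GARPs repeat continuously; don't reprogram identical flows
        if current is not None:
            # (b) failover: the active amphora changed; erase the stale flows first
            self.flow_writer.remove_fip_flows(vip_ip, *current)
        self.flow_writer.program_fip_flows(vip_ip, switch_id, mac)
        self.programmed[vip_ip] = (switch_id, mac)
```

The `programmed` map is what makes both housekeeping items cheap: a repeated GARP is a dictionary hit and an early return, while a failover shows up as a mismatch that triggers cleanup before reprogramming.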
https://git.opendaylight.org/gerrit/#/c/75248/

This patch is some infrastructure preparation for the fix in netvirt.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0093