Bug 1825483 - health check for load balance doesn't work if ip is not set for logical switch port
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn2.11
Version: FDP 20.A
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Numan Siddique
QA Contact: ying xu
URL:
Whiteboard:
Depends On: 1801058
Blocks:
 
Reported: 2020-04-18 13:10 UTC by Numan Siddique
Modified: 2020-05-26 14:08 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1801058
Environment:
Last Closed: 2020-05-26 14:07:41 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:2318 (last updated 2020-05-26 14:08:02 UTC)

Description Numan Siddique 2020-04-18 13:10:21 UTC
+++ This bug was initially created as a clone of Bug #1801058 +++

Description of problem:
Health check for a load balancer doesn't work if no IP is set for the logical switch port.

Version-Release number of selected component (if applicable):
ovn2.12.0-27

How reproducible:
Always

Steps to Reproduce:
#!/bin/bash                            
                                                                
systemctl start openvswitch                             
systemctl start ovn-northd                                          
                                                       
ovn-nbctl set-connection ptcp:6641      
ovn-sbctl set-connection ptcp:6642                             
                                                        
ovs-vsctl set open . external-ids:system-id=hv0 external-ids:ovn-remote=tcp:20.0.30.25:6642 external-ids:ovn-encap-type=geneve external-ids:ovn-encap-ip=20.0.30.25
                                                       
systemctl restart ovn-controller                                            
                                                                
                           
ovn-nbctl lr-add lr1                               
ovn-nbctl lrp-add lr1 lr1ls1 00:01:03:0d:ff:01 192.168.1.254/24 2000::a/64
ovn-nbctl lrp-add lr1 lr1ls2 00:01:03:0d:ff:02 192.168.2.254/24 2001::a/64
                                                                
ovn-nbctl set logical_router lr1 options:chassis=hv0    
                                                                    
ovn-nbctl ls-add ls2                                   
ovn-nbctl lsp-add ls2 ls2lr1
ovn-nbctl lsp-set-type ls2lr1 router                            
ovn-nbctl lsp-set-options ls2lr1 router-port=lr1ls2
ovn-nbctl lsp-set-addresses ls2lr1 "00:01:03:0d:ff:02 192.168.2.254 2001::a"
ovn-nbctl lsp-add ls2 ls2p1                           
ovn-nbctl lsp-set-addresses ls2p1 00:01:02:03:02:01 
ovs-vsctl add-port br-int vm5 -- set interface vm5 type=internal                                                                                                                                           
ip netns add server0                                                                                                                                                                                       
ip link set vm5 netns server0                                                               
ip netns exec server0 ip link set vm5 up                                                    
ip netns exec server0 ip link set lo up
ip netns exec server0 ip link set vm5 address 00:01:02:03:02:01
ip netns exec server0 ip addr add 192.168.2.1/24 dev vm5
ip netns exec server0 ip route add default via 192.168.2.254 dev vm5
ovs-vsctl set interface vm5 external_ids:iface-id=ls2p1

ovn-nbctl ls-add ls1
ovn-nbctl lsp-add ls1 ls1lr1
ovn-nbctl lsp-set-type ls1lr1 router
ovn-nbctl lsp-set-options ls1lr1 router-port=lr1ls1
ovn-nbctl lsp-set-addresses ls1lr1 "00:01:03:0d:ff:01 192.168.1.254 2000::a"

ovn-nbctl lsp-add ls1 ls1p1
ovn-nbctl lsp-set-addresses ls1p1 00:01:02:03:01:01

ovn-nbctl lsp-add ls1 ls1p2
ovn-nbctl lsp-set-addresses ls1p2 00:01:02:03:01:02

ovn-nbctl lsp-add ls1 ls1p3
ovn-nbctl lsp-set-addresses ls1p3 00:01:02:03:01:03

ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
ip netns add client0
ip link set vm1 netns client0
ip netns exec client0 ip link set vm1 up
ip netns exec client0 ip link set lo up
ip netns exec client0 ip link set vm1 address 00:01:02:03:01:01
ip netns exec client0 ip addr add 192.168.1.1/24 dev vm1
ip netns exec client0 ip route add default via 192.168.1.254 dev vm1
ovs-vsctl set interface vm1 external_ids:iface-id=ls1p1

ovs-vsctl add-port br-int vm2 -- set interface vm2 type=internal
ip netns add client1
ip link set vm2 netns client1
ip netns exec client1 ip link set lo up
ip netns exec client1 ip link set vm2 up
ip netns exec client1 ip link set vm2 address 00:01:02:03:01:02
ip netns exec client1 ip addr add 192.168.1.2/24 dev vm2
ip netns exec client1 ip route add default via 192.168.1.254 dev vm2
ovs-vsctl set interface vm2 external_ids:iface-id=ls1p2

ovs-vsctl add-port br-int vm3 -- set interface vm3 type=internal
ip netns add client2
ip link set vm3 netns client2
ip netns exec client2 ip link set lo up
ip netns exec client2 ip link set vm3 up
ip netns exec client2 ip link set vm3 address 00:01:02:03:01:03
ip netns exec client2 ip addr add 192.168.1.3/24 dev vm3
ip netns exec client2 ip route add default via 192.168.1.254 dev vm3
ovs-vsctl set interface vm3 external_ids:iface-id=ls1p3

ovn-nbctl lb-add lb0 30.0.0.1:80 192.168.1.1:80,192.168.1.2:80
#ovn-nbctl lr-lb-add lr1 lb0

uuid=`ovn-nbctl lb-list | grep lb0 | awk '{print $1}'`
ovn-nbctl set logical_switch ls1 load_balancer=$uuid
uuid3=`ovn-nbctl --id=@hc1 create Load_Balancer_Health_Check vip="30.0.0.1\:80" -- add Load_Balancer $uuid health_check @hc1`                                                                              
ovn-nbctl set Load_Balancer_Health_Check $uuid3 options:interval=5 options:timeout=20 options:success_count=3 options:failure_count=3                                                                      
ovn-nbctl --wait=sb set load_balancer $uuid ip_port_mappings:192.168.1.1=ls1p1:192.168.1.254
ovn-nbctl --wait=sb set load_balancer $uuid ip_port_mappings:192.168.1.2=ls1p2:192.168.1.254
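Note that `ls1p1` and `ls1p2` above are given only a MAC in `lsp-set-addresses`; per the summary, it is this missing IP that triggers the failure. As a point of comparison (a sketch, not part of the original reproducer — that it avoids the failure on affected builds is inferred from the bug summary), including the backend IP in each port's addresses gives the health check an IP to match:

```shell
# Comparison/workaround sketch: set the logical switch ports' backend IPs
# alongside their MACs (instead of the MAC-only calls above).
ovn-nbctl lsp-set-addresses ls1p1 "00:01:02:03:01:01 192.168.1.1"
ovn-nbctl lsp-set-addresses ls1p2 "00:01:02:03:01:02 192.168.1.2"
```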

Actual results:
status is [] in output of ovn-sbctl list service_monitor

Expected results:
status is offline

Additional info:


[root@dell-per740-12 ~]# ovn-sbctl list service_monitor
_uuid               : 202dfad1-66ab-49a5-a9bd-ab0e3461166d
external_ids        : {}
ip                  : "192.168.1.2"
logical_port        : ls1p2
options             : {failure_count="3", interval="5", success_count="3", timeout="20"}
port                : 80
protocol            : tcp
src_ip              : "192.168.1.254"
src_mac             : "ee:1c:a1:30:51:5b"
status              : []

_uuid               : 12edf7ae-5727-4ea1-80da-8c875466e941
external_ids        : {}
ip                  : "192.168.1.1"
logical_port        : ls1p1
options             : {failure_count="3", interval="5", success_count="3", timeout="20"}
port                : 80
protocol            : tcp
src_ip              : "192.168.1.254"
src_mac             : "ee:1c:a1:30:51:5b"
status              : []

<=== status is []
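The empty `status` column can also be detected mechanically from the `ovn-sbctl list service_monitor` dump above. A minimal sketch (the `count_unset_status` helper is ours, not part of any OVN tooling):

```shell
#!/bin/sh
# count_unset_status: read "ovn-sbctl list service_monitor" output on
# stdin and print the number of rows whose status column is still the
# empty set "[]", i.e. monitors that ovn-controller has never reported on.
count_unset_status() {
  grep -c '^status.*: \[\]$' || true
}
```

On the dump above this prints `2`; on a fixed build it should drop to `0` once ovn-controller reports `offline`/`online` for every monitor.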

[root@dell-per740-12 ~]# rpm -qa | grep -E "openvswitch|ovn"
ovn2.12-central-2.12.0-27.el7fdp.x86_64
ovn2.12-2.12.0-27.el7fdp.x86_64
openvswitch2.12-2.12.0-21.el7fdp.x86_64
ovn2.12-host-2.12.0-27.el7fdp.x86_64
openvswitch-selinux-extra-policy-1.0-14.el7fdp.noarch

--- Additional comment from Numan Siddique on 2020-04-17 06:54:30 UTC ---

Submitted the patch for review - https://patchwork.ozlabs.org/project/openvswitch/patch/20200417065022.968218-1-numans@ovn.org/

Comment 4 ying xu 2020-05-06 03:04:36 UTC
This bug can be reproduced on version:
# rpm -qa|grep ovn
ovn2.11-central-2.11.1-37.el7fdp.x86_64
ovn2.11-2.11.1-37.el7fdp.x86_64
ovn2.11-host-2.11.1-37.el7fdp.x86_64

:: [ 22:30:39 ] :: [  BEGIN   ] :: Running 'ovn-sbctl list service_monitor'
_uuid               : 23c4fe1a-e4c9-44dd-a936-766338bde77b
external_ids        : {}
ip                  : "192.168.0.1"
logical_port        : "ls1p1"
options             : {failure_count="3", interval="5", success_count="3", timeout="20"}
port                : 12345
protocol            : udp
src_ip              : "192.168.0.254"
src_mac             : "42:21:fb:e7:ea:15"
status              : []


verified on version:
# rpm -qa|grep ovn
ovn2.11-central-2.11.1-44.el7fdp.x86_64
ovn2.11-2.11.1-44.el7fdp.x86_64
ovn2.11-host-2.11.1-44.el7fdp.x86_64

:: [ 22:37:11 ] :: [  BEGIN   ] :: Running 'ovn-sbctl list service_monitor'
_uuid               : 212d036f-4201-4715-98d2-aaa4ec08af31
external_ids        : {}
ip                  : "192.168.0.1"
logical_port        : "ls1p1"
options             : {failure_count="3", interval="5", success_count="3", timeout="20"}
port                : 12345
protocol            : udp
src_ip              : "192.168.0.254"
src_mac             : "2e:93:e4:d5:4d:0d"
status              : offline
:: [ 22:38:26 ] :: [  BEGIN   ] :: Running 'ovn-sbctl list service_monitor'
_uuid               : 212d036f-4201-4715-98d2-aaa4ec08af31
external_ids        : {}
ip                  : "192.168.0.1"
logical_port        : "ls1p1"
options             : {failure_count="3", interval="5", success_count="3", timeout="20"}
port                : 12345
protocol            : udp
src_ip              : "192.168.0.254"
src_mac             : "2e:93:e4:d5:4d:0d"
status              : online
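The `[]` → `offline` → `online` progression above takes up to the configured interval/success_count to settle, so scripted verification should poll rather than check once. A small polling sketch (the helper name, its arguments, and the dump-command indirection are ours, not part of OVN):

```shell
#!/bin/sh
# wait_for_status: run a dump command (e.g. "ovn-sbctl list
# service_monitor") until no row's status column is "[]", retrying a
# bounded number of times. Returns 0 on success, 1 on timeout.
#   $1 = dump command; $2 = max attempts (default 12); $3 = sleep seconds (default 5)
wait_for_status() {
  cmd=$1; tries=${2:-12}; pause=${3:-5}
  while [ "$tries" -gt 0 ]; do
    if ! $cmd | grep -q '^status.*: \[\]$'; then
      return 0   # every monitor now reports a real status
    fi
    tries=$((tries - 1))
    sleep "$pause"
  done
  return 1
}
```

Usage would be along the lines of `wait_for_status 'ovn-sbctl list service_monitor' 12 5`.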

Comment 6 errata-xmlrpc 2020-05-26 14:07:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2318

