Bug 1930942 - [OVN] QoS FIP rules not removed from NBDB
Summary: [OVN] QoS FIP rules not removed from NBDB
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z6
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Rodolfo Alonso
QA Contact: Eduardo Olivares
URL:
Whiteboard:
Depends On:
Blocks: 1859274 1978158
 
Reported: 2021-02-19 20:21 UTC by Eduardo Olivares
Modified: 2022-10-03 14:44 UTC (History)
CC: 7 users

Fixed In Version: python-networking-ovn-7.3.1-1.20201114024057.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1978158 (view as bug list)
Environment:
Last Closed: 2021-05-26 13:51:32 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1916470 0 None None None 2021-02-22 10:50:32 UTC
OpenStack gerrit 776916 0 None NEW [OVN][QoS] Remove OVN QoS rule when FIP is dissasociated 2021-02-22 10:50:32 UTC
Red Hat Issue Tracker OSP-475 0 None None None 2022-10-03 14:44:34 UTC
Red Hat Product Errata RHBA-2021:2097 0 None None None 2021-05-26 13:52:12 UTC

Description Eduardo Olivares 2021-02-19 20:21:33 UTC
Description of problem:
Run tempest test neutron_tempest_plugin.scenario.test_floatingip.FloatingIPQosTest.test_qos
That test creates one qos policy with an ingress and an egress bw limit rule, and assigns it to a FIP that is attached to a running server's port.


When the test ends, verify that the QoS policy has been deleted with 'openstack network qos policy list'. If it was not deleted, delete it with 'openstack network qos policy delete <policy-id>'.
Check that the floating IP was deleted too.

The OVN QoS table should be empty, but it is not: it still contains rules related to the policy and FIP that were just removed:
# export NBDB=$(sudo ovs-vsctl get open . external_ids:ovn-remote | sed -e 's/\"//g' | sed -e 's/6642/6641/g')
# alias ovn-nbctl="sudo podman exec ovn_controller ovn-nbctl --db=$NBDB"
# ovn-nbctl list qos
_uuid               : 751d1460-8783-4561-a217-56153d67438c
action              : {}
bandwidth           : {burst=1000, rate=1000}
direction           : to-lport
external_ids        : {"neutron:fip_id"="cacb6cc6-7c2f-40e0-a99d-c11862974fd0"}
match               : "outport == \"b320bc62-c0a7-436c-81e3-1bbf2115b987\" && ip4.dst == 10.0.0.235 && is_chassis_resident(\"cr-lrp-b320bc62-c0a7-436c-81e3-1bbf2115b987\")"
priority            : 2002

_uuid               : caa9d983-faab-44da-bb5d-673122206197
action              : {}
bandwidth           : {burst=1000, rate=1000}
direction           : from-lport
external_ids        : {"neutron:fip_id"="cacb6cc6-7c2f-40e0-a99d-c11862974fd0"}
match               : "inport == \"b320bc62-c0a7-436c-81e3-1bbf2115b987\" && ip4.src == 10.0.0.235 && is_chassis_resident(\"cr-lrp-b320bc62-c0a7-436c-81e3-1bbf2115b987\")"
priority            : 2002
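The leftover rows can also be spotted programmatically. Below is a minimal sketch (not part of the original report; the function name is ours) that parses the raw text of `ovn-nbctl list qos` and groups the QoS row UUIDs by the `neutron:fip_id` in their `external_ids`. Comparing the resulting keys against `openstack floating ip list -f value -c ID` reveals any rows left behind for already-deleted FIPs:

```python
import re
from collections import defaultdict

def qos_rows_by_fip(nbctl_output):
    """Group OVN NB QoS row UUIDs by the neutron:fip_id tag.

    nbctl_output is the raw text printed by 'ovn-nbctl list qos'.
    Returns a dict mapping fip_id -> list of QoS row _uuid values.
    """
    rows = defaultdict(list)
    uuid = None
    for line in nbctl_output.splitlines():
        # Each record starts with its _uuid field.
        m = re.match(r'_uuid\s*:\s*([0-9a-f-]+)', line.strip())
        if m:
            uuid = m.group(1)
            continue
        # The FIP that owns the rule is stored in external_ids.
        m = re.search(r'"neutron:fip_id"="([0-9a-f-]+)"', line)
        if m and uuid:
            rows[m.group(1)].append(uuid)
    return dict(rows)
```

Any fip_id key that no longer appears in the Neutron floating IP list identifies stale QoS rows of the kind shown above.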
Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20210216.n.1

How reproducible:
100%

Comment 1 Eduardo Olivares 2021-02-22 09:19:02 UTC
Manual reproduction procedure (without tempest)
IMPORTANT: Three different cleanup methods are shown below. Methods A and B work correctly (the NBDB cleanup works fine); method C does not. Apparently the issue occurs when the server is deleted first.

1) Create server, attach FIP to server and attach qos policy to FIP
$ openstack server create --image rhel8-pass --flavor rhel_flavor_1ram_1vpu_10disk --network heat_tempestconf_network --security-group sec_group vm1
$ openstack floating ip create nova
$ openstack server add floating ip vm1 10.0.0.229
$ openstack network qos policy list 
$ openstack network qos policy create bw-lim-pol
$ openstack network qos rule create bw-lim-pol --type bandwidth-limit --max-burst-kbits 100 --max-kbps 100 --egress 
$ openstack floating ip set --qos-policy bw-lim-pol 10.0.0.229

2) Check QoS rule has been added to OVN NBDB
[root@controller-0 ~]# ovn-nbctl list qos 0a0f9dcd-9fde-499a-8e97-681b978630e5
_uuid               : 0a0f9dcd-9fde-499a-8e97-681b978630e5
action              : {}
bandwidth           : {burst=100, rate=100}
direction           : from-lport
external_ids        : {"neutron:fip_id"="9db8355b-a1b4-4d96-a532-15b589ed8aa3"}
match               : "inport == \"2b47155d-dbfe-4d13-b499-0cf9f7a146f2\" && ip4.src == 10.0.0.229 && is_chassis_resident(\"cr-lrp-2b47155d-dbfe-4d13-b499-0cf9f7a146f2\")"
priority            : 2002



3.A) Delete QoS rule, FIP and QoS Policy
$ openstack network qos rule delete bw-lim-pol 20c9beed-c139-4935-81e1-0223f6492645
Failed to delete Network QoS rule ID "20c9beed-c139-4935-81e1-0223f6492645": HttpException: 500: Server Error for url: http://10.0.0.135:9696/v2.0/qos/policies/87a77dd6-430b-4a7d-8ba8-63f42d1e7409/bandwidth_limit_rules/20c9beed-c139-4935-81e1-0223f6492645, Request Failed: internal server error while processing your request.

The rule is deleted despite the error shown above (verify with: openstack network qos rule list bw-lim-pol).

$ openstack floating ip delete 10.0.0.229
$ openstack network qos policy delete bw-lim-pol

4.A) Check QoS rule has been successfully removed from OVN NBDB
[root@controller-0 ~]# ovn-nbctl list qos 0a0f9dcd-9fde-499a-8e97-681b978630e5
ovn-nbctl: no row "0a0f9dcd-9fde-499a-8e97-681b978630e5" in table QoS
Error: non zero exit code: 1: OCI runtime error


3.B) Delete FIP
$ openstack floating ip delete 10.0.0.229

4.B) Check QoS rule has been successfully removed from OVN NBDB



3.C) Delete server, FIP, QoS rule and QoS policy
$ openstack server delete vm1
$ openstack floating ip delete 10.0.0.222
$ openstack network qos rule list bw-lim-pol 
$ openstack network qos rule delete bw-lim-pol 10197de0-27e7-4d0b-a8f3-5cd01077f7e6
$ openstack network qos policy delete bw-lim-pol

4.C) Check QoS rule has NOT been removed from OVN NBDB
[root@controller-0 ~]# ovn-nbctl list qos a44ccbed-9428-456b-a849-844228473795
_uuid               : a44ccbed-9428-456b-a849-844228473795
action              : {}
bandwidth           : {burst=100, rate=100}
direction           : from-lport
external_ids        : {"neutron:fip_id"="7eac3ae7-bd07-4e8e-b383-94a8500e6c64"}
match               : "inport == \"2b47155d-dbfe-4d13-b499-0cf9f7a146f2\" && ip4.src == 10.0.0.222 && is_chassis_resident(\"cr-lrp-2b47155d-dbfe-4d13-b499-0cf9f7a146f2\")"
priority            : 2002

Comment 2 Rodolfo Alonso 2021-02-22 10:50:33 UTC
Hello:

Thanks for the report and the reproducer. I think I've caught the error: the OVN client is not handling the floating IP disassociation. When a FIP is disassociated, the QoS rule set in the OVN DB should be deleted (as reported in the BZ).

I've pushed [1] upstream and opened [2].

Regards.

[1] https://review.opendev.org/c/openstack/neutron/+/776916
[2] https://bugs.launchpad.net/neutron/+bug/1916470
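For illustration, the logic of the fix can be sketched as follows. This is a simplified model, not the actual networking-ovn code: QoS rows are represented as plain dicts, and the function (whose name is ours) returns the `_uuid` of every row that should be removed when a FIP is disassociated, matching on the `neutron:fip_id` key in `external_ids` exactly as seen in the rows above. In the real driver these would be OVSDB row objects and the deletion would happen inside an NB database transaction.

```python
def qos_rows_to_delete(qos_rows, fip_id):
    """Return the _uuid of every OVN NB QoS row tagged with fip_id.

    qos_rows: iterable of dicts mimicking OVN NB QoS rows, each with
    '_uuid' and 'external_ids' keys (hypothetical stand-ins for the
    real OVSDB rows).
    """
    return [row['_uuid'] for row in qos_rows
            if row.get('external_ids', {}).get('neutron:fip_id') == fip_id]
```

The key point of the fix is that this lookup must run on FIP disassociation, not only on FIP or policy deletion, so that no rows survive when the server (and hence the association) is removed first.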

Comment 16 errata-xmlrpc 2021-05-26 13:51:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.6 bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2097

