This feature is meant to be used only in OSP 16.x, as long as OVS does not support the offload of "meter" actions.

Referenced BZs:
- https://bugzilla.redhat.com/show_bug.cgi?id=2002406: "meter action is not offloaded"
- https://bugzilla.redhat.com/show_bug.cgi?id=1969999: "Add support for direct ports with QoS in OVS"

The goal of this RFE is to implement QoS traffic shaping in OVS with HW offload ("direct" ports) using "ip" commands.

QoS rules to be supported with this feature:
- Maximum bandwidth limit, egress direction.
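For context, the "ip" mechanism this RFE relies on is the per-VF rate cap exposed by iproute2. A minimal sketch, assuming the limit is applied on the VF backing the direct port (the PF name and VF index are illustrative, not taken from a specific deployment; max_tx_rate takes Mbps, so a Neutron rule expressed in kbps rounds down to whole Mbps):

  # Cap the transmit rate of VF 8 on PF enp4s0f1 to 999 Mbps
  # (e.g. a 999999 kbps bandwidth-limit rule rounds down to 999 Mbps).
  ip link set enp4s0f1 vf 8 max_tx_rate 999

  # Inspect the per-VF settings of the PF to confirm the cap.
  ip -details link show enp4s0f1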
*** Bug 2014305 has been marked as a duplicate of this bug. ***
@ralonsoh is there a 16.1 bug to track this work? Verizon will require this fix for 16.1.8.
Hello Karrar:

I've created [1]. Once we have the 16.2 code tested and verified, I'll backport this RFE to 16.1.

Regards.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2024692
According to our records, this should be resolved by openstack-neutron-15.3.5-2.20220113150031.el8ost. This build is available now.
Successfully verified.

OpenStack version:

(overcloud) [stack@undercloud-0 ~]$ printf "$(<core_puddle_version)\n"
RHOS-16.2-RHEL-8-20220610.n.1

Port with no QoS policy set:

(overcloud) [stack@undercloud-0 ~]$ openstack port show tempest-port-smoke-1464224446 -c admin_state_up -c binding_host_id -c binding_profile -c binding_vif_details -c binding_vif_type -c binding_vnic_type -c qos_policy_id
+---------------------+--------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                              |
+---------------------+--------------------------------------------------------------------------------------------------------------------+
| admin_state_up      | UP                                                                                                                 |
| binding_host_id     | computehwoffload-1.redhat.local                                                                                    |
| binding_profile     | capabilities='['switchdev']', pci_slot='0000:04:02.4', pci_vendor_info='15b3:1018', physical_network='mx-network' |
| binding_vif_details | bridge_name='br-int', connectivity='l2', datapath_type='system', ovs_hybrid_plug='False', port_filter='True'      |
| binding_vif_type    | ovs                                                                                                                |
| binding_vnic_type   | direct                                                                                                             |
| qos_policy_id       | None                                                                                                               |
+---------------------+--------------------------------------------------------------------------------------------------------------------+

Create a network QoS policy to limit egress bandwidth to 999999 kbps:

(overcloud) [stack@undercloud-0 ~]$ openstack network qos policy create bw-limiter-999999
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field       | Value                                                                                                                                                             |
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| description |                                                                                                                                                                   |
| id          | 0348baa0-6170-4a4d-84de-47a79eac26b6                                                                                                                              |
| is_default  | False                                                                                                                                                             |
| location    | cloud='', project.domain_id=, project.domain_name='Default', project.id='abbc5d1189234e19a039e6787ce36db6', project.name='admin', region_name='regionOne', zone= |
| name        | bw-limiter-999999                                                                                                                                                 |
| project_id  | abbc5d1189234e19a039e6787ce36db6                                                                                                                                  |
| rules       | []                                                                                                                                                                |
| shared      | False                                                                                                                                                             |
| tags        | []                                                                                                                                                                |
+-------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ openstack network qos rule create --max-kbps 999999 --egress bw-limiter-999999 --type bandwidth-limit
+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field          | Value                                                                                                                                                             |
+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| direction      | egress                                                                                                                                                            |
| id             | c0d6f47d-18db-41aa-8fd0-9d608a6feb7f                                                                                                                              |
| location       | cloud='', project.domain_id=, project.domain_name='Default', project.id='abbc5d1189234e19a039e6787ce36db6', project.name='admin', region_name='regionOne', zone= |
| max_burst_kbps | 0                                                                                                                                                                 |
| max_kbps       | 999999                                                                                                                                                            |
| name           | None                                                                                                                                                              |
| project_id     |                                                                                                                                                                   |
+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Set the QoS policy on the port:

(overcloud) [stack@undercloud-0 ~]$ openstack port set --qos-policy bw-limiter-999999 tempest-port-smoke-1464224446

(venv) (overcloud) [stack@undercloud-0 ~]$ openstack port show tempest-port-smoke-1464224446 -c admin_state_up -c binding_host_id -c binding_profile -c binding_vif_details -c binding_vif_type -c binding_vnic_type -c qos_policy_id
+---------------------+--------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                              |
+---------------------+--------------------------------------------------------------------------------------------------------------------+
| admin_state_up      | UP                                                                                                                 |
| binding_host_id     | computehwoffload-1.redhat.local                                                                                    |
| binding_profile     | capabilities='['switchdev']', pci_slot='0000:04:02.4', pci_vendor_info='15b3:1018', physical_network='mx-network' |
| binding_vif_details | bridge_name='br-int', connectivity='l2', datapath_type='system', ovs_hybrid_plug='False', port_filter='True'      |
| binding_vif_type    | ovs                                                                                                                |
| binding_vnic_type   | direct                                                                                                             |
| qos_policy_id       | 0348baa0-6170-4a4d-84de-47a79eac26b6                                                                               |
+---------------------+--------------------------------------------------------------------------------------------------------------------+

Now, on the compute node, the maximum transmit bandwidth (max_tx_rate) for the corresponding VF (enp4s0f1 vf 8) is set to 999 Mbps:

[root@computehwoffload-1 ~]# ip -details link show enp4s0f1
50: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master mx-bond state UP mode DEFAULT group default qlen 1000
    link/ether 98:03:9b:9d:73:74 brd ff:ff:ff:ff:ff:ff permaddr 98:03:9b:9d:73:75 promiscuity 0 minmtu 68 maxmtu 9978
    bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 98:03:9b:9d:73:75 queue_id 0 addrgenmode eui64 numtxqueues 320 numrxqueues 40 gso_max_size 65536 gso_max_segs 65535 portname p1 switchid 74739d00039b0398
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 4     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 5     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 6     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 7     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 8     link/ether fa:16:3e:2a:d2:7f brd ff:ff:ff:ff:ff:ff, tx rate 999 (Mbps), max_tx_rate 999Mbps, spoof checking off, link-state disable, trust off, query_rss off
    vf 9     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
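A possible follow-up check, not executed in this run: detaching the QoS policy from the port should remove the cap from the VF. The expectation that max_tx_rate returns to its unlimited default (0) is an assumption, not a verified result:

  # Sketch only, not part of the verification above.
  # Detach the QoS policy from the port:
  openstack port unset --qos-policy tempest-port-smoke-1464224446

  # On the compute node, re-inspect the VF; the assumption is that the
  # "max_tx_rate" cap on vf 8 is cleared (back to the 0 / unlimited default):
  ip -details link show enp4s0f1 | grep 'vf 8'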
*** Bug 2024432 has been marked as a duplicate of this bug. ***