Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2004286

Summary: [RFE] DPDK memory pool tracking and configuration using MTU sizes and entries
Product: Red Hat OpenStack
Reporter: Ben Roose <broose>
Component: openvswitch
Assignee: Haresh Khandelwal <hakhande>
Status: CLOSED ERRATA
QA Contact: Eran Kuris <ekuris>
Severity: low
Priority: medium
Version: 17.0 (Wallaby)
CC: apevec, bcafarel, broose, chrisw, dmarchan, echaudro, fbaudin, fboboc, fleitner, gurpsing, hakhande, i.maximets, jfargen, ktraynor, mariel, mburns, mnietoji, njohnston, ovs-team, pgrist, ralonsoh, scohen
Target Milestone: z4
Keywords: FutureFeature, Triaged
Target Release: 17.1
Flags: gurpsing: needinfo-, gurpsing: needinfo+
Hardware: All
OS: Linux
Last Closed: 2024-11-21 09:38:06 UTC
Type: Bug
Bug Depends On: 2079891

Description Ben Roose 2021-09-14 22:18:27 UTC
Description of problem:

OvsDpdkSocketMemory gives the user the ability to allocate memory for DPDK memory pools, which involves a complex manual calculation based on the MTU sizes used in the user's environment. However, OpenStack allows OVS ports to be configured with various MTU sizes, yielding many memory pools created by internal OVS algorithms, yet gives the user little control over how many entries are allocated per pool. Depending on the environment and parameter configuration, a large amount of memory may be allocated for DPDK memory pools that is never consumed. According to our customer, they have to overcommit memory that is then underutilized by VMs, and they feel the memory pool calculations should not depend on MTU sizes. The customer also stated that the example in Red Hat's formal documentation for OvsDpdkSocketMemory, which assumes 64 queues with 4K entries for each port, is likely not realistic [1].
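For context, the calculation the customer objects to can be sketched as follows. This is a simplified model of the OvsDpdkSocketMemory sizing guidance in the referenced guide [1] (buffers rounded up to a 1024-byte boundary, roughly 800 bytes of per-buffer overhead, the documented assumption of 64 queues of 4096 entries per MTU, and 512 MB of headroom per NUMA node), not the code OVS actually runs:

```python
def roundup(value, multiple):
    """Round value up to the next multiple."""
    return -(-value // multiple) * multiple

def memory_required_bytes(mtu, queues=64, entries=4096, overhead=800):
    """Per-MTU pool size: queues * entries buffers, each holding one
    MTU-sized packet rounded up to 1024 bytes, plus fixed overhead."""
    return (roundup(mtu, 1024) + overhead) * queues * entries

def socket_memory_mb(mtus, headroom_mb=512):
    """Per-NUMA-node OvsDpdkSocketMemory value in MB: sum the per-MTU
    pools, add headroom, and round up to the next 1024 MB."""
    total_mb = sum(memory_required_bytes(m) for m in mtus) // (1024 * 1024)
    return roundup(total_mb + headroom_mb, 1024)

# Two MTUs on one node (e.g. 1500 and 9000) already drive the value to 4 GB,
# which illustrates why the customer considers the assumptions unrealistic.
print(socket_memory_mb([1500, 9000]))
```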

The customer has asked for the following features:

1. Can we add a method into openvswitch to track how much of the DPDK memory pool was used (max usage) during testing?

2. Can the DPDK memory pool configuration be simplified by manually setting MTU sizes and entries for each pool? For example, in this customer's case, they would like to configure one memory pool with 9100 MTU size for 2 queues with 4K entries (plus an extra 10% for overhead) on NUMA node 1, and a second memory pool for VMs (vhost user) 9050 MTU size with up to 2 queues with 4K entries (following NIC configuration) on both NUMA nodes.

Reference:
[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/network_functions_virtualization_planning_and_configuration_guide/assembly_ovsdpdk_parameters

Comment 17 Eelco Chaudron 2022-07-18 07:58:30 UTC
Hi Florin,

The following two commits were added to OVS 3.0:

https://github.com/openvswitch/ovs/commit/0dd409c2a2ba
https://github.com/openvswitch/ovs/commit/3757e9f8e9c3

Once OVS 3.0 is released and added to the FDP, these enhancements will become available.
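For reference, usage would look roughly like the following. The `netdev-dpdk/get-mempool-info` appctl command exists in current OVS; the `shared-mempool-config` knob and its value syntax are assumptions based on the commit references above, so check the released OVS 3.0 documentation before relying on them:

```shell
# Request 1 (tracking): dump mempool information for a DPDK port,
# including mbuf counts, to observe pool utilization during testing.
ovs-appctl netdev-dpdk/get-mempool-info dpdk0

# Request 2 (configuration): assumed syntax, a comma-separated list of
# <mtu>[:<numa-id>] entries selecting which shared mempools are created.
ovs-vsctl set Open_vSwitch . \
    other_config:shared-mempool-config=9100:1,9050:0,9050:1
```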

Comment 46 errata-xmlrpc 2024-11-21 09:38:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHOSP 17.1.4 bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:9974