Bug 2004286 - [RFE] DPDK memory pool tracking and configuration using MTU sizes and entries
Summary: [RFE] DPDK memory pool tracking and configuration using MTU sizes and entries
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openvswitch
Version: 17.0 (Wallaby)
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: z4
Target Release: 17.1
Assignee: Haresh Khandelwal
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On: 2079891
Blocks:
 
Reported: 2021-09-14 22:18 UTC by Ben Roose
Modified: 2024-11-21 09:38 UTC
CC List: 22 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-11-21 09:38:06 UTC
Target Upstream Version:
Embargoed:
gurpsing: needinfo-
gurpsing: needinfo+


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 880290 0 None ABANDONED Configure dpdk shared memory pool. 2024-11-13 13:10:07 UTC
OpenStack gerrit 880330 0 None ABANDONED Configure dpdk shared memory pool. 2024-11-13 13:10:07 UTC
Red Hat Issue Tracker NFV-2310 0 None None None 2021-10-25 14:04:49 UTC
Red Hat Issue Tracker OSP-9616 0 None None None 2021-11-15 12:49:49 UTC
Red Hat Product Errata RHBA-2024:9974 0 None None None 2024-11-21 09:38:09 UTC

Description Ben Roose 2021-09-14 22:18:27 UTC
Description of problem:

OvsDpdkSocketMemory lets the user allocate a fixed amount of memory for DPDK memory pools, which involves a complex manual calculation based on the MTU sizes used in the environment. However, OpenStack allows OVS ports to be configured with various MTU sizes, yielding many memory pools created by internal OVS algorithms, while giving the user little control over how many entries are allocated per pool. Depending on the environment and parameter configuration, a large amount of memory may be allocated for DPDK memory pools that is never consumed. Our customer reports having to overcommit memory that is then underutilized by VMs, and they feel the memory pool calculations should not depend on MTU sizes. The customer also stated that the example in Red Hat's formal documentation for OvsDpdkSocketMemory, which assumes 64 queues with 4K entries for each port, is likely not realistic [1].
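
For illustration only, here is a rough sketch of the MTU-driven estimate the referenced guide describes. The constants (an ~800-byte per-packet overhead, 64 queues x 4096 descriptors per port, a 512 MB fixed overhead, and rounding up to the next 1024 MB) are assumptions based on the 16.1 guide and may differ between releases; this is not an authoritative implementation of the documented formula.

def roundup(value, multiple):
    # Round value up to the nearest whole multiple.
    return -(-value // multiple) * multiple

def socket_memory_estimate_mb(mtus, queues=64, descriptors=4096,
                              pkt_overhead=800, fixed_overhead_mb=512):
    # One mempool per distinct MTU on the NUMA node; every pool is sized for
    # the worst case of 64 queues x 4096 descriptors regardless of actual use.
    total_bytes = 0
    for mtu in mtus:
        buf_size = roundup(mtu, 1024) + pkt_overhead
        total_bytes += buf_size * queues * descriptors
    total_mb = total_bytes // (1024 * 1024) + fixed_overhead_mb
    return roundup(total_mb, 1024)

# Example: one NUMA node serving ports with 1500 and 9000 byte MTUs.
print(socket_memory_estimate_mb([1500, 9000]))   # -> 4096 (MB), much of it idle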

The customer has asked for the following features:

1. Can a method be added to openvswitch to track how much of each DPDK memory pool was used (peak usage) during testing?

2. Can the DPDK memory pool configuration be simplified by manually setting the MTU size and number of entries for each pool? For example, this customer would like to configure one memory pool with a 9100-byte MTU for 2 queues with 4K entries (plus an extra 10% for overhead) on NUMA node 1, and a second memory pool for VMs (vhost-user) with a 9050-byte MTU for up to 2 queues with 4K entries (matching the NIC configuration) on both NUMA nodes.
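
For comparison, a purely illustrative sketch (using the same assumed 800-byte per-packet overhead as the sketch above) of what the two requested pools would amount to if they were sized directly from the stated queue and entry counts:

def pool_size_mb(mtu, queues, entries, headroom_pct=10, pkt_overhead=800):
    # Size one mempool from explicit queue/entry counts instead of the
    # 64-queue worst case; pkt_overhead is the same assumption as above.
    total_bytes = (mtu + pkt_overhead) * queues * entries
    return round(total_bytes * (1 + headroom_pct / 100) / (1024 * 1024))

phys = pool_size_mb(9100, queues=2, entries=4096)   # physical port, NUMA node 1
vhost = pool_size_mb(9050, queues=2, entries=4096)  # vhost-user pool, per NUMA node
print(phys, vhost)   # -> roughly 85 MB each

The gap between figures of this order and the MTU-driven estimate above is the overallocation the customer is describing.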

Reference:
[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/network_functions_virtualization_planning_and_configuration_guide/assembly_ovsdpdk_parameters

Comment 17 Eelco Chaudron 2022-07-18 07:58:30 UTC
Hi Florin,

The following two commits were added to OVS 3.0:

https://github.com/openvswitch/ovs/commit/0dd409c2a2ba
https://github.com/openvswitch/ovs/commit/3757e9f8e9c3

Once OVS 3.0 is released and added to the FDP, these enhancements will become available.
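
As a hedged illustration of the tracking side of this RFE (request 1 in the description), mempool state can be polled during a test run with the ovs-appctl netdev-dpdk/get-mempool-info debug command. The snippet below is only a sketch: it assumes that command is present in the deployed OVS build and simply collects the raw dumps (DPDK's rte_mempool_dump() format) for later inspection. The port name "dpdk0" is hypothetical.

import subprocess
import time

def dump_mempool_info(port=None):
    # Return the raw DPDK mempool dump for one port, or for all DPDK ports
    # when no port is given.
    cmd = ["ovs-appctl", "netdev-dpdk/get-mempool-info"]
    if port:
        cmd.append(port)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Collect snapshots every 30 seconds during a traffic test; the peak
    # in-use buffer counts can be read out of the saved dumps afterwards.
    for _ in range(10):
        print(dump_mempool_info("dpdk0"))
        time.sleep(30)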

Comment 46 errata-xmlrpc 2024-11-21 09:38:06 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RHOSP 17.1.4 bug fix and enhancement advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:9974

