Description of problem:

OvsDpdkSocketMemory gives the user the ability to allocate a portion of available memory for DPDK memory pools, which involves a complex manual calculation based on the MTU sizes used in the user's environment. However, OpenStack allows OVS ports to be configured with various MTU sizes, yielding many memory pools created by internal OVS algorithms, yet gives the user little control over how many entries are allocated per memory pool. Depending on the environment and parameter configuration, a large amount of memory may be allocated for DPDK memory pools that will never be consumed. Our customer reports having to overcommit memory that is then underutilized by VMs, and believes the memory pool calculations should not depend on MTU sizes. The customer also stated that the example in Red Hat's formal documentation for OvsDpdkSocketMemory, which assumes 64 queues with 4K entries for each port, is likely not realistic [1] (see the calculation sketch below).

The customer has asked for the following features:

1. Can a method be added to Open vSwitch to track how much of a DPDK memory pool was used (maximum usage) during testing?

2. Can the DPDK memory pool configuration be simplified by manually setting MTU sizes and entry counts for each pool? For example, in this customer's case, they would like to configure one memory pool with a 9100-byte MTU for 2 queues with 4K entries (plus an extra 10% for overhead) on NUMA node 1, and a second memory pool for VMs (vhost-user) with a 9050-byte MTU for up to 2 queues with 4K entries (following the NIC configuration) on both NUMA nodes.

Reference:
[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/network_functions_virtualization_planning_and_configuration_guide/assembly_ovsdpdk_parameters
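For illustration, a minimal sketch of the documented calculation as I read it from [1]; the 800-byte overhead, 4096 entries per queue, 64 queues per port, and 512 MB of DPDK overhead are the guide's assumptions, not measured values:

    import math

    def ovs_dpdk_socket_memory_mb(mtus):
        """Per-NUMA-node OvsDpdkSocketMemory estimate, following [1]."""
        total_bytes = 0
        for mtu in mtus:
            # Round the MTU up to the nearest multiple of 1024 bytes.
            roundup_per_mtu = math.ceil(mtu / 1024) * 1024
            # The guide assumes 4096 mbufs per queue and 64 queues per port.
            total_bytes += (roundup_per_mtu + 800) * 4096 * 64
        total_mb = total_bytes / 2**20 + 512       # plus 512 MB DPDK overhead
        return math.ceil(total_mb / 1024) * 1024   # round up to a 1024 MB multiple

    print(ovs_dpdk_socket_memory_mb([9100]))       # -> 3072

Under these assumptions a single 9100-byte MTU costs roughly 3072 MB per NUMA node, whereas the customer's stated need (2 queues of 4096 entries, plus 10%) works out to roughly 86 MB, which illustrates the over-allocation described above.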
Hi Florin,

The following two commits were added to OVS 3.0:

https://github.com/openvswitch/ovs/commit/0dd409c2a2ba
https://github.com/openvswitch/ovs/commit/3757e9f8e9c3

Once OVS 3.0 is released and added to the FDP, these enhancements will become available.
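Assuming the first commit is the shared mempool configuration feature and the second the DPDK malloc statistics appctl (matching customer requests 2 and 1, respectively), usage on OVS 3.0 would look roughly like the lines below. The shared-mempool-config values and the port name dpdk0 are hypothetical, chosen to match the customer's request, and the exact command and option names should be confirmed against the OVS 3.0 documentation:

    # Pre-create shared mempools for specific MTU sizes (and, optionally,
    # NUMA nodes) instead of relying purely on per-port MTU-driven pools:
    ovs-vsctl set Open_vSwitch . other_config:shared-mempool-config=9100:1,9050:0,9050:1

    # Inspect a port's mempool (entries in use vs. available):
    ovs-appctl netdev-dpdk/get-mempool-info dpdk0

    # Dump DPDK malloc heap statistics:
    ovs-appctl dpdk/get-malloc-stats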
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RHOSP 17.1.4 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:9974