OVS doesn't work in DPDK multi-process mode, so there is no need to litter the filesystem with hugepage-backed files, or to risk random crashes caused by stale memory chunks in re-opened files. It may therefore be better to start OVS-DPDK with the --in-memory EAL flag.
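For reference, one way to pass extra EAL flags such as --in-memory to OVS is via the other_config:dpdk-extra key. This is a sketch only; whether --in-memory is set by default or must be passed explicitly depends on the OVS/DPDK version in use:

```shell
# Sketch: hand extra EAL arguments (here --in-memory) to OVS-DPDK.
# Requires an OVS build with DPDK support; restart ovs-vswitchd afterwards.
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="--in-memory"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
```

The effective EAL command line can then be confirmed in ovs-vswitchd.log, as in the "EAL ARGS:" lines quoted later in this thread.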
*** This bug has been marked as a duplicate of bug 1949850 ***
This BZ is not really a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1949850; the goals are different. Not limiting the memory is unrelated to using in-memory hugepages.
Hi Eelco,

I assume this is the default now, as I don't see the /dev/hugepages directory any more. Am I correct?

Thanks!
Jean
(In reply to Jean-Tsung Hsiao from comment #7)
> Hi Elco,
> I assume this is the default now as I won't see /dev/hugepages directory any
> more.
> AM I correct ?

/dev/hugepages/ is not managed by OVS; this is kernel-related. Also, nothing about this BZ has been worked on.
(In reply to Eelco Chaudron from comment #8)
> /dev/hugepages/ is not managed by OVS, this is kernel-related. Also,
> nothing about this BZ has been worked on.

I meant to say that I did not work on this. You should ask Rosemarie, as this is part of upstream 2.17.
Hi Rosemarie,

I have tested this RFE by running an OVS-DPDK script --- see below --- and got the following log line in ovs-vswitchd.log:

dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x000f --in-memory

Also, cat /proc/meminfo showed:

HugePages_Total: 64
HugePages_Free: 62

So, I believe the RFE is working. But I'll run performance regression testing to make sure there is no significant regression.

Please advise what else needs to be done.

Thanks!
Jean

A P2P OVS-DPDK script:
{{{
ovs-vsctl set Open_vSwitch . other_config={}
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x000f
###ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xf000
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk-10 \
    -- set interface dpdk-10 type=dpdk ofport_request=10 options:dpdk-devargs=0000:84:00.0 \
    options:n_rxq=1
ovs-vsctl add-port ovsbr0 dpdk-11 \
    -- set interface dpdk-11 type=dpdk ofport_request=11 options:dpdk-devargs=0000:84:00.1 \
    options:n_rxq=1

ovs-ofctl del-flows ovsbr0
ovs-ofctl add-flow ovsbr0 in_port=10,actions=output:11
ovs-ofctl add-flow ovsbr0 in_port=11,actions=output:10
ovs-ofctl dump-flows ovsbr0
}}}
On another testbed:

SUT: wsfd-advnetlab10.anl.lab.eng.bos.redhat.com/E810 <-> Trex traffic generator: wsfd-advnetlab11.anl.lab.eng.bos.redhat.com/CX-5

Ran the P2P test over OVS-DPDK using the script attached below and got 22.5 Mpps throughput.

Note: only one 1 GB hugepage got allocated:

HugePages_Total: 32
HugePages_Free: 31

EAL ARGS: ovs-vswitchd -c 0x00010000001000 --in-memory

After uncommenting the following statement:

###ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"

and running the same test again, I got 22.5 Mpps throughput as before.

Note: eight 1 GB hugepages got allocated in this case:

HugePages_Total: 32
HugePages_Free: 24

EAL ARGS: ovs-vswitchd -c 0x00010000001000 --socket-mem 4096,4096 --in-memory

{{{
#!/bin/bash
# --- P2P over OVS-dpdk
ovs-vsctl set Open_vSwitch . other_config={}
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x00010000001000
###ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x50000005000000
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# config ovs-dpdk bridge with dpdk0, dpdk1, vhost0 and vhost1
ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk-10 \
    -- set interface dpdk-10 type=dpdk ofport_request=10 options:dpdk-devargs=0000:3b:00.0 \
    options:n_rxq=1
ovs-vsctl add-port ovsbr0 dpdk-11 \
    -- set interface dpdk-11 type=dpdk ofport_request=11 options:dpdk-devargs=0000:3b:00.1 \
    options:n_rxq=1

ovs-ofctl del-flows ovsbr0
ovs-ofctl add-flow ovsbr0 in_port=10,actions=output:11
ovs-ofctl add-flow ovsbr0 in_port=11,actions=output:10
ovs-ofctl dump-flows ovsbr0
}}}
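The hugepage counts above can be cross-checked by subtracting HugePages_Free from HugePages_Total. A minimal sketch against the sample values from the first run (on a live host, read /proc/meminfo directly, e.g. with grep Huge /proc/meminfo):

```shell
# Sample lines as reported above; substitute real /proc/meminfo content on a host.
meminfo='HugePages_Total:      32
HugePages_Free:       31'
echo "$meminfo" | awk '/HugePages_Total/ {t=$2} /HugePages_Free/ {f=$2} END {print "HugePages in use:", t-f}'
# Prints: HugePages in use: 1
```

This matches the observation that only one 1 GB hugepage was in use without dpdk-socket-mem set.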
(In reply to Jean-Tsung Hsiao from comment #10)
> Hi Rosemarie,
> I have tested this RFE by running a ovs-dpdk script --- see attached below.
> And, got the following log line from ovs-vswitchd.log:
>
> dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x000f --in-memory.
>
> Also, cat /proc/meminfo showed
>
> HugePages_Total: 64
> HugePages_Free: 62
>
> SO, I believe the RFE is working. But, I'll run a performance regression
> testing to make sure no significant regression.
>
> Please advise what else need to be done.
>
> Thanks!
> Jean
>
> A P2P OVS-dpdk script:
> {
> ovs-vsctl set Open_vSwitch . other_config={}
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x000f
> ###ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-socket-mem="4096,4096"
> ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xf000
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
>
> ovs-vsctl --if-exists del-br ovsbr0
> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> ovs-vsctl add-port ovsbr0 dpdk-10 \
> -- set interface dpdk-10 type=dpdk ofport_request=10
> options:dpdk-devargs=0000:84:00.0 \
> options:n_rxq=1
> ovs-vsctl add-port ovsbr0 dpdk-11 \
> -- set interface dpdk-11 type=dpdk ofport_request=11
> options:dpdk-devargs=0000:84:00.1 \
> options:n_rxq=1
>
> ovs-ofctl del-flows ovsbr0
> ovs-ofctl add-flow ovsbr0 in_port=10,actions=output:11
> ovs-ofctl add-flow ovsbr0 in_port=11,actions=output:10
> ovs-ofctl dump-flows ovsbr0

Hi! Thanks for the tests; they are sufficient. And in response to your earlier comment, as Eelco said, OVS will not create files in /dev/hugepages.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (openvswitch2.17 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:5445