Description of problem:

testpmd connects its ports either in pairs, based on the port number, or as a chain. For instance, with ports 0, 1, 2 and 3:

default (--port-topology=paired): 0 <-> 1 and 2 <-> 3
chained (--port-topology=chained): 0 -> 1 -> 2 -> 3 and 3 -> 2 -> 1 -> 0

The goal of this RFE is to add a complementary option to --port-topology=paired in order to specify how ports are paired, for instance:

--port-pairs=0,2-1,3

The use case behind this is to use testpmd as a vswitch, see slides 11 and 12 of https://dpdksummit.com/Archive/pdf/2016USA/Day02-Session14-FranckBaudin-DPDKUSASummit2016.pdf

The vhost-user host ports created by testpmd come first (for instance ports 0 and 1 when there are two of them), followed by the physical ports. The goal is to implement a PVP topology (see the slides above), with another testpmd running in the VM.

Version-Release number of selected component (if applicable):
master of DPDK, with a target of 16.11 inclusion.
My notes to deploy a PV topology (one interface in the VM, connected to a single physical port): this works well as there are only two ports. In the VM, I configured testpmd with --port-topology=chained so all input packets are sent back on port 0 (the only port).

Versions used: vanilla DPDK 16.07, RHEL 7.2 up to date (9th of September), on both host and guest.

On the host:

sed -i -e '/CPUAffinity/d' /etc/systemd/system.conf
echo "CPUAffinity=0-17,36-53" >> /etc/systemd/system.conf
sed -i -e '/GRUB_CMDLINE_LINUX/d' /etc/default/grub
echo GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap nomodeset console=tty0 console=ttyS0 nohz=on nohz_full=18-35,54-71 rcu_nocbs=18-35,54-71 intel_pstate=disable nosoftlockup\" >> /etc/default/grub
grub2-mkconfig > /boot/grub2/grub.cfg

=> reboot once

umount /dev/hugepages
mount -t hugetlbfs nodev /dev/hugepages -o pagesize=1G
echo 10 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
systemctl stop irqbalance
MASK=ffff   # CPUs 0-15, all on socket 0
for I in `ls -d /proc/irq/[0-9]*` ; do echo $MASK > ${I}/smp_affinity ; done
modprobe uio
insmod /root/dev/dpdk/build/kmod/igb_uio.ko
/root/dev/dpdk/tools/dpdk-devbind.py --bind=igb_uio 81:00.0
./build/app/testpmd -l 18,19 -n 4 --socket-mem=1024,1024 --vdev eth_vhost0,iface=/tmp/vhost0.sock,queues=1 -- -i --socket-num=1 --nb-cores=2

Then start kvm:

/usr/libexec/qemu-kvm -enable-kvm -hda /var/lib/libvirt/images/testpmd.qcow2 -no-reboot -nographic -echr 16 -smp 4 -m 2048 -cpu host -chardev socket,id=chr0,path=/tmp/vhost0.sock -netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 -device virtio-net-pci,netdev=net0 -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc

Then back on the host testpmd:

set portmask 3
show config fwd
set promisc all on
start

And on the guest, start the testpmd:

systemctl stop irqbalance
MASK=1
for I in `ls -d /proc/irq/[0-9]*` ; do echo $MASK > ${I}/smp_affinity ; done
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
modprobe uio
cd dev/dpdk
insmod build/kmod/igb_uio.ko
./tools/dpdk-devbind.py --bind=igb_uio 00:03.0
./build/app/testpmd -l 2,3 -n 4 -- -i --port-topology=chained

Note: the guest has also been pre-configured like the host, with all processes pinned on CPU 0 and the same boot parameters:

sed -i -e '/GRUB_CMDLINE_LINUX/d' /etc/default/grub
echo GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap nomodeset console=tty0 console=ttyS0 nohz=on nohz_full=1-4 rcu_nocbs=1-4 intel_pstate=disable nosoftlockup\" >> /etc/default/grub
grub2-mkconfig > /boot/grub2/grub.cfg
sed -i -e '/CPUAffinity/d' /etc/systemd/system.conf
echo "CPUAffinity=0" >> /etc/systemd/system.conf