Created attachment 1218425 [details]
instance log

Description of problem:
Configured the OVS-DPDK function following the document
https://access.redhat.com/documentation/en/red-hat-openstack-platform/9/single/configure-dpdk-for-openstack-networking.
After the setup completed, pinging between different instances on the same compute node fails with:

From 192.168.88.45 Destination Host Unreachable
From 192.168.88.45 Destination Host Unreachable
From 192.168.88.45 Destination Host Unreachable

In the instance log, the following error is found:

2016-11-06T13:58:52.758886Z qemu-kvm: unable to start vhost net: 1: falling back on userspace virtio

Version-Release number of selected component (if applicable):
openstack-ceilometer-common-6.1.3-2.el7ost.noarch
openstack-ceilometer-compute-6.1.3-2.el7ost.noarch
openstack-ceilometer-polling-6.1.3-2.el7ost.noarch
openstack-neutron-8.1.2-4.el7ost.noarch
openstack-neutron-common-8.1.2-4.el7ost.noarch
openstack-neutron-openvswitch-8.1.2-4.el7ost.noarch
openstack-nova-common-13.1.1-5.el7ost.noarch
openstack-nova-compute-13.1.1-5.el7ost.noarch
openstack-selinux-0.7.3-3.el7ost.noarch
openstack-utils-2015.2-1.el7ost.noarch
Red Hat Enterprise Linux Server release 7.2 (Maipo)

How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
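For anyone triaging a similar "unable to start vhost net: falling back on userspace virtio" error, a quick sanity check (a sketch, assuming a standard dpdkvhostuser setup; the socket path below is the usual default and may differ in your deployment) is to confirm that the vhost-user ports actually exist on the OVS side and that QEMU can reach their sockets:

  # Confirm the guest interfaces were created as vhost-user ports on OVS
  ovs-vsctl show | grep -A2 vhu

  # List port names and types; guest ports should report type dpdkvhostuser
  ovs-vsctl --columns=name,type list Interface

  # Verify the vhost-user sockets exist where QEMU expects them
  ls -l /var/run/openvswitch/vhu*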
As per the latest update, the issue can be resolved in the test environment after upgrading to the new ovs-dpdk version from the new repo channel. Hu Jun will test this resolution next week and update the status.
I validated yesterday that the issue is resolved by upgrading ovs-dpdk to ovs-2.5 on site. However, I think there is a bug in openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64, and we should at least identify which bug it is; please trace it.
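For reference, a quick way to confirm which OVS build is actually in use before and after the upgrade (a sketch using standard commands; package names may differ per channel):

  # Installed package version
  rpm -qa | grep openvswitch

  # Version the running daemon reports
  ovs-vsctl --version
  ovs-vswitchd --version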
I had the exact same issue, and an upgrade to ovs-dpdk 2.5 seems to have fixed it. However, the L2 forwarding (testpmd) latency test is not showing good results: it gives me around 2500µs when the traffic is routed through the VM, whereas if I directly connect the flows between the physical interfaces, I get somewhere around 6µs.
(In reply to Mohammed Salih from comment #10)
> I had the exact same issue, and an upgrade to ovs-dpdk 2.5 seems to have
> fixed it. However, the L2 forwarding (testpmd) latency test is not showing
> good results: it gives me around 2500µs when the traffic is routed through
> the VM, whereas if I directly connect the flows between the physical
> interfaces, I get somewhere around 6µs.

By the way, I am using the RT kernel and KVM on the host, with CPUs pinned on 10-14 (node 1) and fourteen 1G hugepages. The VM uses 4 vCPUs and 4 GB of RAM, with uio_pci_generic bound to the interfaces. Here are the testpmd options I used:

chrt -f 95 -l 1,2,3 --socket-mem 1024 -w 0000:00:03.0 -w 0000:00:04.0 -- -i --disable-hw-vlan --nb-cores=2 --auto-start
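For anyone trying to reproduce this measurement: the line above appears to have the testpmd binary name dropped between the chrt options and the EAL options. A minimal sketch of the full invocation, assuming testpmd is on the PATH inside the guest and the same PCI addresses apply (adjust cores, memory, and devices to your topology):

  # Run testpmd at SCHED_FIFO priority 95; -l selects lcores,
  # -w whitelists the guest's virtio PCI devices
  chrt -f 95 testpmd -l 1,2,3 --socket-mem 1024 \
      -w 0000:00:03.0 -w 0000:00:04.0 -- \
      -i --disable-hw-vlan --nb-cores=2 --auto-start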
Update: we got the test cases working well within the target of 30µs (<13µs for 512B payload and <6µs for 64B payload). Thanks a lot, Zenghui, for the suggestion to use openvswitch-2.5.0-14.git20160727.el7fdb.x86_64.rpm from rhel-7-fast-datapath-htb-rpms, and for sharing the doc written by Andrew Theurer <atheurer>. Both helped to nail it down. Cheers!
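In case it saves someone a lookup: on a subscribed RHEL 7 host, the channel mentioned above can typically be enabled with subscription-manager before updating the package (a sketch; availability of this repo depends on your entitlements):

  # Enable the fast datapath HTB channel and pull in the newer OVS build
  subscription-manager repos --enable=rhel-7-fast-datapath-htb-rpms
  yum update openvswitch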
Could you update the Status?
Please check out sos from upstream or get the RPM from RHEL 7.3, and provide the sosreport from the affected systems to us after you have reproduced the issue.
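For reference, a minimal sketch of collecting the report on the affected compute node (assuming the sos package from the RHEL 7.3 channel and the default plugin selection):

  # Install sos and generate a report non-interactively; attach the
  # resulting tarball (written under /var/tmp) to this bug
  yum install -y sos
  sosreport --batch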