Red Hat Bugzilla – Bug 1491909
IP network cannot recover after several vhost-user reconnects
Last modified: 2018-05-18 03:47:10 EDT
Created attachment 1326263 [details]
script to boot OVS

Description of problem:
Boot OVS in vhost-user client mode, then boot the VM in vhost-user server mode. In the guest, set an IP for the vhost-user network, then ping the guest from another host. The IP network cannot recover after an OVS restart (which emulates a vhost-user reconnect). This should not be a qemu issue, as the same qemu version works well with openvswitch-2.7.2-7.git20170719.el7fdp.x86_64.

Version-Release number of selected component (if applicable):
openvswitch-2.8.0-1.el7fdb.x86_64
3.10.0-693.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7.x86_64
libvirt-3.7.0-2.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot OVS as a vhost-user client on host1; for the full script, please refer to the attachment (an illustrative sketch of the port setup follows this comment).
# sh boot_ovs_client.sh

2. Boot the VM as a vhost-user server; for the full XML file, please refer to the next comment.
<interface type='vhostuser'>
  <mac address='38:88:da:5f:dd:01'/>
  <source type='unix' path='/tmp/vhostuser0.sock' mode='server'/>
  <model type='virtio'/>
  <driver name='vhost'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

3. Set an IP in the VM.
# ifconfig eth1 up
# ifconfig eth1 192.168.1.2/24

4. Start pinging from another host (host2); ping works.
# ifconfig p2p1 192.168.1.1/24
# ping 192.168.1.2

5. Restart OVS to emulate a vhost-user reconnect.
# sh boot_ovs_client.sh

6. Continue pinging the guest from host2; the network cannot recover.
# ping 192.168.1.2
From 192.168.1.1 icmp_seq=935 Destination Host Unreachable

Actual results:
IP network cannot recover.

Expected results:
IP network should recover.

Additional info:
1. This is a regression bug; openvswitch-2.7.2-7.git20170719.el7fdp.x86_64 works well.
2. Possibly this issue is related to dpdk [1].
[1] Bug 1491898 - In PVP testing, dpdk's testpmd will "Segmentation fault" after booting VM
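For context only, the attached boot_ovs_client.sh is not reproduced here. Below is a minimal sketch of what a client-mode vhost-user port setup for the socket above typically looks like on OVS 2.8 with DPDK; the bridge name ovsbr0 and port name vhost-user0 are assumptions, not taken from the attachment, and hugepage/PMD configuration is omitted.

# enable the DPDK datapath (assumes hugepages and PMD cores are already configured)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# userspace (netdev) bridge; "ovsbr0" is an assumed name
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev

# vhost-user client port that connects to the socket served by QEMU
ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock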
Created attachment 1326264 [details] VM XML file.
3. Additional info
# After restarting OVS, "net eth1: Unexpected TXQ (0) queue failure: -5" shows up repeatedly in dmesg.
# dmesg
...
[ 92.339221] virtio_net virtio1: output.0:id 0 is not a head!
[ 92.339652] net eth1: Unexpected TXQ (0) queue failure: -5
[ 93.048195] net eth1: Unexpected TXQ (0) queue failure: -5
[ 93.339195] net eth1: Unexpected TXQ (0) queue failure: -5
[ 94.341178] net eth1: Unexpected TXQ (0) queue failure: -5
[ 95.343173] net eth1: Unexpected TXQ (0) queue failure: -5
[ 97.049156] net eth1: Unexpected TXQ (0) queue failure: -5
[ 98.051158] net eth1: Unexpected TXQ (0) queue failure: -5
[ 99.062141] net eth1: Unexpected TXQ (0) queue failure: -5
...
I tried to replicate the issue, but I do not see it on my netdev servers. Ping continues (with a few packets missing) during the run of your script.

These are my versions:

$ rpm -q openvswitch kernel qemu-kvm-rhev libvirt
openvswitch-2.8.0-1.el7fdb.x86_64
kernel-3.10.0-693.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
libvirt-3.2.0-14.el7_4.3.x86_64

I do see you have a newer version of libvirt; not sure where you got it, but it should not be a problem.

I also tried virtual machine to virtual machine, and that also works fine. As the VM host OS I use CentOS. What do you use?

I can make my machines available for you to see if you can get it replicated, or if you have a failing setup I can use that to troubleshoot.
(In reply to Eelco Chaudron from comment #5)
> I tried to replicate the issue, but I do not see it on my netdev servers.
> Ping continues (with a few missing) during the run of your script.
> These are my versions:
>
> $ rpm -q openvswitch kernel qemu-kvm-rhev libvirt
> openvswitch-2.8.0-1.el7fdb.x86_64
> kernel-3.10.0-693.el7.x86_64
> qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
> libvirt-3.2.0-14.el7_4.3.x86_64
>
> I do see you have a newer version of libvirt, not sure where you got it, but
> it should not be a problem.
>
> I also tried virtual to virtual machine, and it also works fine. As a VM
> host OS I use Centos. What do you use?
>
> I can make my machines available for you to see if you can get it
> replicated, or if you have a failing setup I can use that to troubleshoot?

Hi Eelco,

Sorry for the late reply; I was not in the office for the last 2 weeks and just got back to work today.

I can still reproduce this issue with openvswitch-2.8.0-3.el7fdb.x86_64.
Note: this issue can be triggered after several (about 10) OVS restarts.

I have kept my testing environment; please log in. I'll add the detailed host info in the next comment.

Best Regards,
Pei
After some discussion with Maxime, he was able to replicate this with testpmd and qemu as well. He will take a look at this BZ, so I will re-assign it to him and change the component to DPDK for now.
Series merged upstream & posted downstream. New brew build: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=14683959
Fix included in qemu-kvm-rhev-2.10.0-12.el7
Verification:

Versions:
3.10.0-814.el7.x86_64
qemu-kvm-rhev-2.10.0-12.el7.x86_64
libvirt-3.9.0-5.el7.x86_64
openvswitch-2.8.0-4.el7fdb.x86_64
dpdk-17.11-1.el7fdb.x86_64

Steps: Same as in the Description. Reconnected OVS 100 times with PASS results: the guest network always recovers after each reconnect and there are no errors in the guest. So this bug has been fixed.
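For reference, the 100-reconnect check can be scripted roughly as below. This is a minimal sketch; the 30-second settle time is an assumption and not part of the recorded verification steps.

# on host1: restart OVS repeatedly, leaving time for the guest link to recover
for i in $(seq 1 100); do
    echo "reconnect $i"
    sh boot_ovs_client.sh
    sleep 30
done

# on host2: keep pinging the guest and watch for gaps in the replies
ping 192.168.1.2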
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:1104