Bug 1491909 - IP network cannot recover after several vhost-user reconnects
Summary: IP network cannot recover after several vhost-user reconnects
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Maxime Coquelin
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks: 1579716
 
Reported: 2017-09-15 02:39 UTC by Pei Zhang
Modified: 2018-05-18 07:47 UTC
CC List: 13 users

Fixed In Version: qemu-kvm-rhev-2.10.0-12.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1579716 (view as bug list)
Environment:
Last Closed: 2018-04-11 00:33:01 UTC
Target Upstream Version:
Embargoed:


Attachments
script to boot OVS (1.49 KB, application/x-shellscript), attached 2017-09-15 02:39 UTC by Pei Zhang
VM XML file. (3.08 KB, text/html), attached 2017-09-15 02:41 UTC by Pei Zhang


Links
Red Hat Product Errata RHSA-2018:1104 (normal, SHIPPED_LIVE): Important: qemu-kvm-rhev security, bug fix, and enhancement update. Last updated 2018-04-10 22:54:38 UTC.

Description Pei Zhang 2017-09-15 02:39:17 UTC
Created attachment 1326263 [details]
script to boot OVS

Description of problem:
Boot OVS in vhost-user client mode, then boot the VM in vhost-user server mode. In the guest, set an IP for the vhost-user network, then ping the guest from another host. The IP network cannot recover after an OVS restart (which emulates a vhost-user reconnect).

This should not be a qemu issue, as the same qemu version works well with openvswitch-2.7.2-7.git20170719.el7fdp.x86_64.
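
For context, the OVS side of such a setup is typically created along these lines (a minimal sketch assuming a DPDK-enabled bridge named br0; the actual boot_ovs_client.sh attachment may differ):

# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 \
      type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock

With type=dpdkvhostuserclient, OVS is the vhost-user client and reconnects to the socket that QEMU (running in server mode, see the VM XML in the steps below) keeps open across OVS restarts.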

Version-Release number of selected component (if applicable):
openvswitch-2.8.0-1.el7fdb.x86_64
kernel-3.10.0-693.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7.x86_64
libvirt-3.7.0-2.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot OVS as a vhost-user client on host1; for the full script, refer to the attachment.
# sh boot_ovs_client.sh

2. Boot the VM as a vhost-user server; for the full XML file, refer to the next comment (a sketch of the roughly equivalent QEMU command line follows after these steps).

    <interface type='vhostuser'>
      <mac address='38:88:da:5f:dd:01'/>
      <source type='unix' path='/tmp/vhostuser0.sock' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

3. Set an IP in the VM
# ifconfig eth1 up
# ifconfig eth1 192.168.1.2/24

4. Start pinging from another host (host2); ping works
# ifconfig p2p1 192.168.1.1/24
# ping 192.168.1.2

5. Restart OVS to emulate vhost-user reconnect
# sh boot_ovs_client.sh

6. Continue pinging the guest from host2; the network cannot recover.
# ping 192.168.1.2
From 192.168.1.1 icmp_seq=935 Destination Host Unreachable
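
For reference, the vhostuser interface in step 2 translates to roughly the following QEMU arguments (a minimal sketch with assumed id names; libvirt generates the exact command line, which may differ by version):

-chardev socket,id=charnet1,path=/tmp/vhostuser0.sock,server,nowait \
-netdev vhost-user,chardev=charnet1,id=hostnet1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=38:88:da:5f:dd:01,bus=pci.0,addr=0x3

Because QEMU acts as the vhost-user server here, it keeps the socket open while OVS (the client) reconnects to it after each restart in step 5.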

Actual results:
The IP network cannot recover.

Expected results:
The IP network should recover.

Additional info:
1. This is a regression: openvswitch-2.7.2-7.git20170719.el7fdp.x86_64 works well.

2. This issue is possibly related to dpdk [1].
[1] Bug 1491898 - In PVP testing, dpdk's testpmd will "Segmentation fault" after booting VM

Comment 3 Pei Zhang 2017-09-15 02:41:09 UTC
Created attachment 1326264 [details]
VM XML file.

Comment 4 Pei Zhang 2017-09-15 02:46:57 UTC
3. Additional info
# After restarting OVS, "net eth1: Unexpected TXQ (0) queue failure: -5" shows up repeatedly in dmesg:
# dmesg
... 
[   92.339221] virtio_net virtio1: output.0:id 0 is not a head!
[   92.339652] net eth1: Unexpected TXQ (0) queue failure: -5
[   93.048195] net eth1: Unexpected TXQ (0) queue failure: -5
[   93.339195] net eth1: Unexpected TXQ (0) queue failure: -5
[   94.341178] net eth1: Unexpected TXQ (0) queue failure: -5
[   95.343173] net eth1: Unexpected TXQ (0) queue failure: -5
[   97.049156] net eth1: Unexpected TXQ (0) queue failure: -5
[   98.051158] net eth1: Unexpected TXQ (0) queue failure: -5
[   99.062141] net eth1: Unexpected TXQ (0) queue failure: -5
...

Comment 5 Eelco Chaudron 2017-10-24 13:04:13 UTC
I tried to replicate the issue, but I do not see it on my netdev servers.
Ping continues (with a few missing) during the run of your script.
These are my versions:

$ rpm -q openvswitch kernel qemu-kvm-rhev libvirt
openvswitch-2.8.0-1.el7fdb.x86_64
kernel-3.10.0-693.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
libvirt-3.2.0-14.el7_4.3.x86_64

I do see you have a newer version of libvirt, not sure where you got it, but it should not be a problem.

I also tried virtual machine to virtual machine, and it also works fine. As the VM host OS I use CentOS. What do you use?

I can make my machines available so you can see if you can replicate it, or if you have a failing setup, I can use that to troubleshoot.

Comment 6 Pei Zhang 2017-11-06 10:30:42 UTC
(In reply to Eelco Chaudron from comment #5)
> I tried to replicate the issue, but I do not see it on my netdev servers.
> Ping continues (with a few missing) during the run of your script.
> These are my versions:
> 
> $ rpm -q openvswitch kernel qemu-kvm-rhev libvirt
> openvswitch-2.8.0-1.el7fdb.x86_64
> kernel-3.10.0-693.el7.x86_64
> qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
> libvirt-3.2.0-14.el7_4.3.x86_64
> 
> I do see you have a newer version of libvirt, not sure where you got it, but
> it should not be a problem.
> 
> I also tried virtual machine to virtual machine, and it also works fine. As
> the VM host OS I use CentOS. What do you use?
> 
> I can make my machines available so you can see if you can replicate it, or
> if you have a failing setup, I can use that to troubleshoot.

Hi Eelco,

Sorry for the late reply; I was out of the office for the last 2 weeks and just got back to work today.


I can still reproduce this issue with openvswitch-2.8.0-3.el7fdb.x86_64.


Note: This issue can be triggered after restarting OVS several times (about 10).


I have kept my testing environment; please log in. I'll add the hosts' detailed info in the next comment.


Best Regards,
Pei

Comment 8 Eelco Chaudron 2017-11-09 10:46:37 UTC
After some discussion with Maxime, he can also replicate this with testpmd and qemu. He will take a look at this BZ, so I will re-assign it to him and have changed the component to DPDK for now.

Comment 17 Maxime Coquelin 2017-12-05 09:35:05 UTC
Series merged upstream & posted downstream.
New brew build:
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=14683959

Comment 18 Miroslav Rezanina 2017-12-11 09:45:22 UTC
Fix included in qemu-kvm-rhev-2.10.0-12.el7

Comment 19 Pei Zhang 2017-12-11 10:47:05 UTC
Verification:

Versions:
kernel-3.10.0-814.el7.x86_64
qemu-kvm-rhev-2.10.0-12.el7.x86_64
libvirt-3.9.0-5.el7.x86_64
openvswitch-2.8.0-4.el7fdb.x86_64
dpdk-17.11-1.el7fdb.x86_64

Steps:
Same as in the Description. Reconnected OVS 100 times and got PASS results: the guest network always recovers after each reconnect, and there are no errors in the guest.


So this bug has been fixed very well.
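
For anyone re-running this verification, the restart-and-check loop can be scripted roughly as follows (a minimal sketch; boot_ovs_client.sh and the guest IP come from the steps in the Description, and the ping would in practice be issued from host2, shown here via an illustrative ssh call):

for i in $(seq 1 100); do
    sh boot_ovs_client.sh                  # restart OVS to force a vhost-user reconnect
    sleep 10                               # allow the vhost-user connection to re-establish
    ssh host2 ping -c 5 -W 2 192.168.1.2 \
        || echo "reconnect $i: guest network did not recover"
done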

Comment 21 errata-xmlrpc 2018-04-11 00:33:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1104

